Critical Distinctions between Expert and Novice Translators: Task and Professional Satisfaction

The nature of translation projects and tasks in the language industry has undergone significant changes due to the widespread adoption of the subcontracting model and recent technological trends. Managing increasing terminological complexity, higher task specialisation, and higher levels of technical expertise have become essential elements of a translator's professional profile. Nonetheless, the requirement of such a sophisticated professional profile has challenged novice translators in their incipient careers because of limited knowledge and training opportunities. Since many changes have occurred to the profession over a relatively short span of time, this article studies sources of translator satisfaction and dissatisfaction that may affect translators' perception of their work as well as of the language industry at large. This study reports results from an ongoing investigation into the 'expertise effect' measured through translator satisfaction in relation to two main categories: (a) professional satisfaction and (b) task satisfaction. A Student's t-test is used to compare the perceptions of novice and expert translators (N = 250), and the results suggest a gap in critical sources of satisfaction between the two populations. The findings could be applied to determine possible means of mitigating career turnover among translators and used by translator trainers to comprehend the needs of novice professionals.

Introduction

Over the last two decades, the language industry (LI) has seen a rapid and widespread adoption of the subcontracting (or outsourcing) model, and this has been complemented by the advent of Internet-related technologies and the emergence of flatter organisational structures (Dunne 2012; Rodríguez-Castro 2015). These trends are interrelated and have subsequently led to new organisational dynamics and a transformation of the work environment (Rodríguez-Castro 2016). In particular, these trends have (i) reshaped translator jobs in terms of services offered, (ii) changed the tasks that translators undertake and (iii) led to new professional identities in the LI. In a relatively short span of time, the LI has become a multibillion-dollar industry (Rodríguez-Castro 2016): the European Union of Associations of Translation Companies (EUATC) has estimated average annual growth rates of 5-7.5% (Boucau 2006), with revenues cited at approximately USD 30 billion (Dunne 2011; Kelly/Stewart 2010) and as high as USD 43 billion in 2017 (De Palma/Stewart/Lommel/Pielmeier 2017). These figures demonstrate the growth and rapid expansion of the industry, which has in turn contributed to drastic changes in the translating process and consequently reshaped the translation profession itself. As the subcontracting model has become the norm, projectised organisational structures have become more prevalent (Dunne 2012: 143). As clients adopt new decentralised structures, subcontracting translation projects to language service providers (LSPs) has become more common. LSPs have evolved into a 'distributed' network of freelancers, and "approximately 90% of language services are outsourced" (Rodríguez-Castro 2015: 30-31). These new structures not only adopt new methods of Internet-mediated communication but also function from remote work locations with virtual teams. According to Pym et al. (2012: 3), 74% of translators serve as independent contractors or freelancers.
Therefore, many LSPs and clients telework and perform tasks in virtual teams. Additionally, as outsourcing becomes more prevalent and provides more employment flexibility, new modalities of multitasking have emerged. In fact, 'multitasking' has become something of a misnomer, referring to one's ability to perform a wide variety of tasks in the same project (e.g., translation and proofreading) while freelancing full-time or part-time. Hence, today's translators enjoy the flexibility of serving in multiple roles (i.e., translator, editor, proofreader) while performing tasks (i.e., services) for a wider client portfolio. These trends have also led to an increase in project volume and complexity (Biel/Sosoni 2017). Since translation has predominantly evolved into a mass-production industry, the translation process, along with the nature of tasks and roles, has drastically changed. Particularly in the case of freelancers, an increased division of labour, specialisation of skills and the systematic reuse of language resources have become crucial in order to handle the complexity of larger projects (Garcia 2009). A higher level of technical expertise and a diversified professional profile (Ehrensberger-Dow/Massey 2014b) have become necessary for the labour force to undertake new tasks and projects. Today's LI not only demands the constant execution of heterogeneous tasks from translators, but also makes an evolving skillset critical for remaining competitive. These trends toward projectised organisational structures have not only accentuated task complexity and project volume, but have arguably reshaped the translator's professional profile, which has become increasingly characterised by a widely varied, technically sophisticated and essentially more dynamic skillset. Since the labour force is in constant flux and has to strive to remain competitive, this has resulted in higher degrees of task dissatisfaction, particularly for professionals in their incipient careers. Newcomers entering the language industry with limited training and knowledge struggle to rapidly develop skills while increasing their productivity to meet the continuous pressures of the growing translation services market. Unlike other professions, the language industry is volatile and remains "almost totally unregulated" (Katan 2009: 113). Translators' sources of professional satisfaction remain scarce due to the lack of public recognition and occupational prestige. The social perception of the translation profession is monolithic, i.e., it does not account for the dynamic and specialised nature of the profession, and the profession enjoys a relatively low status (Dam/Zethsen 2008; Chesterman/Wagner 2002). The fact that translation can be viewed as a commodity has worsened its professional status and recognition, since price often gets prioritised over quality. More importantly, there is no organised career structure in the profession (Katan 2009: 123-124), and enhancing one's professional profile and skillset can be challenging under the freelancing model. The literature on translation studies cites professional recognition and prestige as the most common intrinsic sources of satisfaction among translators, and "their 'professionality' lies in their individually honed competencies in the field" (Katan 2009: 111). In this nascent and evolving industry, multiple aspects of the translation profession can affect the level of satisfaction that translators perceive.
These sources of satisfaction and dissatisfaction affect translators' perception of the profession and may reshape their professional identity. Consequently, in an industry that depends largely on its human capital, it is crucial to address prevalent sources of dissatisfaction in order to avoid potential career turnover. Language professionals face many challenges, such as the need to adjust quickly to new trends in the profession or to continually enhance their professional profile in order to offer a diversified service portfolio. These challenges can only be overcome by a highly qualified and specialised labour force, and could be perceived as insurmountable by novices in the absence of a structured training environment. Thus, an increasing gap in translator (dis)satisfaction can be expected to emerge between novice and expert translators under the outsourcing model. This emerging gap calls for research that attempts to comprehend crucial sources of satisfaction and dissatisfaction for these two indispensable groups of the labour force. This study aims at identifying a potential 'expertise effect' in the most prevalent aspects associated with a translator's role(s). Specifically, professional and task satisfaction are assessed to determine major distinctions, if any, between the two groups of study, so as to identify means of mitigating early career turnover among novice translators and eventually enhancing translator training approaches.

The 'expertise effect' in Translation Studies

This study seeks to investigate components associated with translators' professional and task satisfaction, drawing specific attention to the differences between expert and novice translators. In the literature on translation studies, some authors have used translation competence and translation expertise as near synonyms (Muñoz Martín 2014: 6; Ehrensberger-Dow/Massey 2014a: 63), and the literature on translation process research (TPR) and expertise studies has recently defined the 'expertise effect' as a "comprehensive framework that allows for including a wide variety of task-related cognitive resources, detailing how they interact, and then describing how those resources [...] change during the acquisition of expertise" (Shreve/Angelone/Lacruz 2018: 47). Additionally, Shreve (2006: 28) argues for the need for declarative and procedural knowledge from a variety of cognitive domains in conjunction with training and experience, but insists on hours of deliberate practice, exposure to more varied tasks, and consistent superior performance so that metacognitive knowledge and regulation can be progressively developed until translation expertise emerges (Shreve 2009; Shreve/Angelone 2010). Muñoz Martín (2009: 25) concurs that translation expertise includes "extensive domain knowledge, but crucially also heuristic rules that simplify and improve approaches to problem solving, meta knowledge and metacognition" (see also Sirén/Hakkarainen 2002). Within the expertise framework, this concept acknowledges the existence of a variety of skill levels, and also entails the notion of progressive development or evolution in the translator's acquisition history across five stages, ranging from (1) novice, (2) advanced beginner, (3) competent and (4) proficient to (5) expert (Shreve/Angelone/Lacruz 2018: 47; Woll 2001: 283).
Thus, the 'expertise effect' is "often taken to correlate with the time spent in such [deliberate] practice" (Muñoz Martín 2014: 5), or is measured in terms of translation productivity in 'full-time equivalents' (FTE) of individual translation activity in a full-time work schedule per year (European Union 2006: C 284/15). Although this range of experience is debatable, it allows us to measure exposure to translation activity and establish major distinctions among groups with significantly different degrees of experience. Therefore, in this article, the term 'expert' is used to refer to professional translators with a minimum of ten years of professional experience (also quantified as FTE), and expertise is assessed in the study through the translator's professional profile. Because of the multiplicity of roles that translators play in the work environment, these roles define the main characteristics of their professional profiles. A translator's professional profile has become increasingly sophisticated, given that technical translation has continued to account for approximately 90% of the translation output per year worldwide (Kingscott 2002: 247). In that scenario, the professional profile identified in this study includes years of experience in the LI, formal education (in translation studies or related disciplines), industry certifications, and specialisation (or subject matter expertise) as well as technical expertise. Typically, specialised or technical translation requires (a) a higher level of subject matter expertise, (b) comprehension of terminological complexity and (c) a higher level of technical knowledge (Byrne 2006). Expert translators feel more confident about handling high volume and tight deadlines while meeting quality expectations. Jääskeläinen (2010: 219-222) also highlights an expert's capability to process larger chunks of text and to automate multiple linguistic tasks. Concerning segmentation, Dragsted adds that the sentence was not originally a central unit in the translator's mental processing. However, with the implementation of translation memories (TM), novices (non-professionals) tend to prioritise segmentation at the sentence level and perceive that they are benefiting from TM. By contrast, professionals focus instead on clauses or phrases and acknowledge that working at the sentence level complicates their translating process (Dragsted 2005; Dragsted 2006). In comparison to experts, novices may "rarely deal with large-scale translation projects" (Gouadec 2007: 27). Novices may not feel confident in their subject matter expertise, may be intimidated by terminological complexity, and may lack technical expertise. Kruger/Dunning (1999: 726) posit that novices often manifest a feeling of overconfidence in task completion owing to "the double curse of being unskilled and unaware that induces the unskilled to dramatically overestimate their expertise". Hence, the literature on the 'expertise effect' is used here to determine crucial distinctions in the level of satisfaction between experts and novices vis-à-vis the aspects associated with (a) translation-related tasks and (b) the translation profession, as described in the next section.

Operationalising Translator Satisfaction

This section describes the concepts of translator satisfaction while focusing on two broad categories: (a) professional satisfaction and (b) task satisfaction.
The concepts in this study have been adopted from the literature in Occupational and Organisational Psychology as well as Organisational Behaviour, which has provided a framework for establishing a distinction in the construct of translator satisfaction between extrinsic and intrinsic sources of satisfaction. Extrinsic sources of satisfaction, or profession-related aspects, are captured in the study under professional satisfaction, whereas intrinsic sources of satisfaction refer to a positive feeling that derives from the task or work itself and are investigated here as elements of task satisfaction. This distinction has generally been ignored in the literature of Translation Studies, where both task-related and profession-related aspects have typically been studied under the label of 'job satisfaction'.

Professional Satisfaction

Professional satisfaction is operationally defined as the feelings of status or achievement that an individual develops from a desired level of competency, allowing for growth, career path optimisation and professional recognition (Chen 2008: 106-107). Professional satisfaction evolves from the sense of identity as a member of an occupational community of language professionals playing similar roles in the industry, and includes external sources of satisfaction that derive from belonging to the translation profession. This study assesses translators' views of the profession and their perceived role in the LI, specifically such crucial components of professional satisfaction as professional self-concept, professional reputation, career commitment and turnover. The relationship between professional satisfaction and professional self-concept, or professional identity, has drawn limited attention in translation studies. The professional self-concept is defined as the set of attributes, beliefs, values and experiences by which individuals define themselves in their professional lives (Ibarra 1999; Muñoz Martín 2014). The literature in Organisational and Occupational Psychology has long established that changes in a person's role and tasks lead to changes in that person's professional identity. Katan (2009: 111) states that translators can be viewed as "dedicated and mainly satisfied wordsmiths, who take pride in their job", but they are also known as intercultural communication experts (Holz-Mänttäri 1984) and "agents of social change" (Tymoczko/Ireland 2003). In particular, "freelancers see themselves as dynamic, business-like people. Choosing to be self-employed gives them 'professional' status" (Gouadec 2007: 169). Furthermore, it can be argued that translators possess a strong sense of professional pride, but their professional self-concept may not be homogeneous. Person-environment fit theory posits that individuals show a higher level of satisfaction vis-à-vis the self-concept when their role is compatible with their individual needs and abilities (Kaplan 1983). Translators broadly view themselves as sitting at the very end of the food chain and may not feel that they play a role in the industry; this holds particularly for novice translators. For instance, translators who feel that their roles and tasks do not meet their career expectations show lower levels of satisfaction with the profession. Additionally, the professional self-concept can be understood through society's views of translators. In this case, the general social perception is that translators are simply individuals capable of communicating in at least two languages.
There is generally a lack of recognition of translators' comprehensive skillset and specialised training, even though this may vary from country to country. In North America, Canada was the first country to recognise the translation profession. The U.S., Canada and Mexico recognised translation as a distinct industrial sector in 1997, and the European Union (EU) did so in 2008 (Dunne 2011). Despite language professionals being recognised in renowned institutions like the UN and the EU, professional recognition remains low and 'very low status is accredited to translators worldwide' (Katan 2009: 111; Dam/Zethsen 2008). Low status may be intrinsically associated with feelings of a lack of professional appreciation. Although translators value the role of associations in promoting professional recognition, employers do not equally acknowledge these efforts (Dam/Zethsen 2008), as 'certified' status is not highly sought after by employers (Bowker 2005: 19). The lack of occupational recognition could lead to dissatisfaction with the social recognition of the translation profession; this is typically most relevant to non-literary translators. As an additional source of professional satisfaction, career commitment, professional commitment and career development have been used interchangeably in the literature. Mayer/Salovey (1993) were forerunners in showing that the level of an individual's participation in an occupation might vary significantly with the level of priority given to professional involvement. In fact, high levels of career satisfaction are associated with active participation in professional meetings and gatherings. Individuals who have a strong commitment may not be involved in occupational activities but nevertheless continue with membership (Mayer/Salovey 1993). A lack of professional commitment has been found to be associated with an intention to leave the profession (Hall/Smith/Langfield 2005). Career development helps a person to acquire current knowledge, upgrade skills, and contribute to the profession. For instance, novice translators may be interested in attending conferences to learn or to pursue certifications, but this may not be feasible due to budgetary and time constraints, especially given limited organisational support. Conversely, experts may lead workshops, present at conferences or provide consulting services (McKay 2006), all instances of undertaking a leadership role in career commitment that typically results in higher levels of satisfaction. Given the emerging need to possess a highly sophisticated skillset to remain competitive, career involvement exemplifies a crucial source of professional satisfaction in the LI. Notwithstanding the constraints of the outsourcing model vis-à-vis career commitment, coaching and mentoring have been suggested as motivational approaches to promote retention policies in 'knowledge' industries. In particular, Mortensen et al. (2002: 1452) conducted a comprehensive study of 2,600 participants and identified being mentored, serving as a mentor, and self-assessed high professional involvement among the key aspects of professional satisfaction. Burke (2001) and Burke/McKeen (1995) focused on the motivators of professional satisfaction of managerial and professional women and found that training, career development, and undertaking challenging tasks were the main contributors to high levels of professional satisfaction in the early stages of their careers.
Therefore, virtual coaching, mentoring and other alternative mechanisms can be considered for the development of the talent pool in the LI to promote career development among freelancers. The concepts of professional reputation and professional prestige (Dam/Zethsen 2008; Dam/Zethsen 2010) are intrinsically related and are used interchangeably in this study. Professional reputation depicts the visibility or public image arising from society's assessment of the specific characteristics or reputation of an individual, along with the prestige or perception of the significance of one's work related to personal and social esteem (Stamps 1997). This concept reflects a perception that others value the translator's services and skillset. Broadly speaking, there is a "lack of awareness in society about what constitutes translation competence and its complexity as well as lack of recognition of the importance of translation" (Dam/Zethsen 2010: 205). As translation "does not range as a proper profession" (Dam/Zethsen 2010: 205) and is not regulated, its labour force enjoys neither high visibility nor occupational prestige (or fame); thus, professional reputation in the LI may derive from the translator's branding recognition (McKay 2006) or from a history of successful completion of prestigious projects for Fortune 500 clients. As a translator's client portfolio grows, it leads to an increasing workload, new occupational opportunities and long-term business relationships with clients, which in turn result in career satisfaction. Broadly speaking, translators feel a special pride when a long-term client needs a translation service or when they are referred by a client, since this implies increasing professional recognition. As recognition increases, a translator can see financial benefits, occupational flexibility, and an ability to choose clients and projects, all of which raise the level of professional recognition and can positively affect professional satisfaction. The factors of professional self-concept, reputation and career commitment can arguably be crucial sources of professional satisfaction (Chen/Chang/Yeh 2004; Burke 2001; Ibarra 1999). The level of professional satisfaction is expected to vary with expertise, with experts probably having a more positive attitude toward their career and the profession. Experts may feel that they play an essential role in the industry by being more actively involved in associations, producing publications, leading workshops or serving as mentors, resulting in higher levels of satisfaction with career involvement and a more fulfilled professional self-concept. Conversely, the literature identifies career turnover as an indirect measurement of professional dissatisfaction. Work behaviour outcomes have been widely investigated using expectancy-value approaches; these studies suggest that a negative self-concept contributes to high career turnover rates and indicates dissatisfaction with the career path and the profession (Tett/Meyer 1993). In the LI, career turnover is likely to be higher among novices, while experts are more likely to stay in the profession for such reasons as a feeling of 'investment' and a shortened window of opportunity to change careers (Kyndt et al. 2009). The concept of turnover is crucial in this study, since one's decision to leave the profession, not just the job, may arise from any of the aforementioned sources of professional dissatisfaction.
Task Satisfaction

Task satisfaction is "a psychological construct that is associated with individual perceptions of the specific tasks that compose the work associated with a role" (Rodríguez-Castro 2016: 32). In order to comprehend translator task satisfaction, this study focuses on the concepts of self-efficacy (including task scope and task description) and task self-fulfilment (see, e.g., Rodríguez-Castro 2016 for a more comprehensive review of task satisfaction). The concept describes a feeling of success associated with such intrinsic motivators as growth, recognition and enrichment from the work itself (Herzberg 2003: 92-93). Mason/Griffin (2002: 299) studied the relationship of task-specific factors (familiarity, challenge, variety) to procedural skills and concluded that these factors enhance task satisfaction. In this study, the level of satisfaction with tasks (namely translating, proofreading, terminology management, etc.) is expected to increase as the feeling of knowing (FOK) increases. FOK is a metacognitive phenomenon that arguably assesses task satisfaction (Koriat 1993) and is intrinsically related to the concept of self-efficacy. Self-efficacy is one's ability to evaluate the success or failure associated with the task being performed (Bandura 1995). Self-efficacy encompasses the self-concept sub-dimension that belongs to the translation expertise construct (Muñoz Martín 2014: 33), and is therefore intrinsically related to the 'expertise effect'. As a translator's translation expertise evolves, additional metacognitive knowledge and cues become available for complex problem solving and for the recognition of successful task completion (Shreve 2002: 162). Translation expertise allows translators to develop task awareness, including a growing understanding of task scope and of the task description that determines the complexity of the work itself. Experts are likely to have greater task awareness, i.e., a better understanding of task scope (the range of discrete activities included in projects) and task description (deadline, client specifications). Expert translators who have established their reputation in a specific domain are generally offered larger projects (i.e., higher volume, higher income, a constant stream of work) and often enjoy the challenge of undertaking such projects with higher terminological and technical complexity. Since autonomy is a function of years of experience, experts are also granted higher levels of freedom and allowed to take more initiative (Katan 2009). Unlike experts, novices are likely to feel overwhelmed by terminological complexity, tight deadlines, and the implementation of CAT tools, and may not be ready to wear multiple hats. In fact, novices often fail to recognise translation problems and interferences (Biel 2011); task complexity may be perceived as demotivating because they fail to understand scope, project timeline and overall quality expectations. In addition to such intrinsic sources of task satisfaction as self-efficacy, successful task completion in multiple projects can lead to further intrinsic motivators such as task pride and task self-fulfilment. The literature on self-fulfilment posits that individuals exhibit a positive attitude when their professional and personal needs are aligned (Greene/Burke 2007). Self-fulfilment is defined as "a pleasurable or positive emotional state resulting from the appraisal of one's job or job experiences" (Mason/Griffin 2005).
In this study, the importance of this concept is twofold: translators are broadly known to take strong pride in the tasks they perform (Katan 2009), and their dedication to lifelong learning is typically attributed to the intrinsic nature of all language professionals (Durban 2010). Furthermore, the motivation for achievement is identified as the strength of a translator's desire to excel, to succeed in difficult tasks, and to do better than the competition. According to Herzberg (2003: 94-96), the successful completion of a task and seeking solutions to complex problems are examples of job enrichment. High-need achievers constantly seek success and task challenges, thereby gaining intrinsic motivation from project completion, especially for challenging projects with highly technical inputs and outputs or projects with tight deadlines. Hence, regarding self-fulfilment, both experts and novices may take strong pride in their work and satisfaction from "the feeling of translating" (Dam/Zethsen 2016: 181). However, the level of task satisfaction could be relatively higher in the case of experts, since they have a longer occupational history. This may lead to higher intrinsic motivation from multiple successful completions, outstanding offers as well as gestures of performance appraisal.

Methodological Approach

The concepts discussed in Section 3 have been used for the assessment of professional and task satisfaction among expert and novice translators. An online questionnaire was designed as the instrument for data collection, consisting of a task satisfaction index as well as a professional satisfaction index (Rodríguez-Castro 2015). Instrument reliability was measured using Cronbach's alpha coefficient (0.98 out of a maximum of 1), and data collection was conducted online over approximately two months. The data were further analysed using a parametric test, namely an independent-samples Student's t-test, in order to statistically compare the two populations (Hale/Napier 2013). The results of the two-tailed t-test are used to determine whether novices and experts exhibit significantly different levels of satisfaction with task-related or profession-related aspects.

Participants

A total of 250 participants completed a multifaceted questionnaire that included questions on specific aspects of task and professional satisfaction. Professional translators were recruited to participate in the study based on professional experience in the LI or productivity in total FTEs (full-time schedule per year, averaging 8 hours per day). Their answers to the questionnaire included their role in the LI, commonly performed tasks, and the types of services provided. Participants represented a wide variety of areas of specialisation (e.g., scientific, legal, health care, business, information technologies), educational backgrounds and levels of technical expertise (i.e., implementation of translation and localisation tools, terminology management tools, corpora, etc.), and came from more than ten countries and languages (e.g., Argentina, Chile, Germany, Iran, Japan, Mexico, Spain, UK, United States). Years of experience was chosen as the threshold for the expert versus novice differentiation; however, it may be noted that, for the sake of brevity, translators with 3-10 years of experience are not included in this study, since extreme groups are more suitable for investigating the distinctions between experts and novices.
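As a brief aside on the instrument reliability reported above, Cronbach's alpha can be computed directly from a respondent-by-item matrix of ratings. The sketch below is purely illustrative: the ratings are invented and do not come from the study's instrument; the function simply applies the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), assuming Python with NumPy is available.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented five-point Likert ratings (rows: respondents, columns: items).
ratings = np.array([
    [1, 2, 1, 2],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 1, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```

Values close to 1, such as the 0.98 reported for the instrument, indicate that the items of an index behave consistently as measures of the same underlying construct.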
Experts in this study are identified as active language professionals with more than ten years of occupational experience or at least 10,000 hours of full-time work (Muñoz Martín 2014: 5; Shreve 2006: 29), while novices are identified as professionals with less than three years of experience. Based on sample sizes commonly cited in previous questionnaire studies, which range from the 40s (Pan 2014) to the upper 200s (Kyndt et al. 2009), the sample size for the study is considered adequate for performing hypothesis testing. It may be noted that, of the 250 participants, 86 respondents were classified as experts while 117 respondents were classified as novices; the remaining 47 respondents had more than three but less than ten years of experience. The classification of novices and experts is arguably rigid and may not capture the nuances of different degrees of experience; however, existing operationalisations in the literature support this classification (Shreve/Angelone/Lacruz 2018; Shreve 2006; Muñoz Martín 2014).

Overview of Questionnaire

The reliability of the instrument was tested before data collection began, and the instrument did not demonstrate any inherent deficiencies that could compromise the data collected (Rodríguez-Castro 2015). The instrument was also tested for readability in a pilot phase at the American Translators Association's annual conference (2010), and it was ensured that participants' responses would remain anonymous. The readability tests ensured that sentence length, syllable counts and word lengths were appropriate. Only terminology commonly used in the industry was employed, and all possible efforts were made to ensure semantic transparency. The Flesch-Kincaid Grade Level index was used to identify problematic statements, which were subsequently revised as part of the completion of the pilot phase. Participants were first asked about their professional identity and about the relationship of specific elements of the professional profile (i.e., subject matter and technical expertise) to deadlines. The questionnaire on satisfaction consisted of two sets of questions: the first set included sources of professional satisfaction (Appendix 1) and the second set contained statements concerning sources of task satisfaction (Appendix 2). A quantitative approach was chosen for this study, using a five-point Likert scale (1-5) ranging from 1 (very satisfied/highly agree) to 5 (very dissatisfied/highly disagree). In order to reach a broad audience of active professional translators, the questionnaire was distributed online through multiple channels of communication (email lists, translation portals, etc.). This effort resulted in participation from wide-ranging specialisations and languages with a variety of job profiles and professional experiences. It may be noted that this study does not account for cross-cultural or geographical differences among participants; these may be interesting but are outside the scope of this article.

Data Analysis

Data collected from the online questionnaire were analysed using an independent-samples Student's t-test to compare the responses of novices and experts in order to comprehend differences, if any, in their perceived levels of satisfaction. The t-test is widely used in applied linguistics to compare data (Dörnyei 2007) and has been applied in translation studies (Mellinger/Hanson 2017: 87).
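As an illustration of the comparison described in this subsection, the following minimal sketch shows how such a two-tailed independent-samples Student's t-test and the accompanying Cohen's d effect size are typically computed. The Likert responses below are invented for demonstration purposes and are not the study's data; the study itself used Microsoft Excel's data analysis toolbox, whereas this sketch assumes Python with NumPy and SciPy.

```python
import numpy as np
from scipy import stats

# Invented 5-point Likert responses for one questionnaire item
# (1 = very satisfied ... 5 = very dissatisfied); illustrative only.
experts = np.array([1, 2, 1, 1, 2, 2, 1, 3, 2, 1])
novices = np.array([3, 2, 4, 3, 2, 3, 4, 3, 2, 4])

# Two-tailed independent-samples Student's t-test (equal variances assumed,
# as in a classic Student's t-test rather than Welch's variant).
t_stat, p_value = stats.ttest_ind(experts, novices, equal_var=True)

# Cohen's d using the pooled standard deviation.
n1, n2 = len(experts), len(novices)
pooled_sd = np.sqrt(((n1 - 1) * experts.var(ddof=1) +
                     (n2 - 1) * novices.var(ddof=1)) / (n1 + n2 - 2))
d = (experts.mean() - novices.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {d:.2f}")
```

Because lower scores denote higher satisfaction on this scale, a negative t statistic and a negative d would indicate that the expert group reports higher satisfaction than the novice group on the item.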
The t-test allows this study to determine statistically significant differences between the two groups, whose means are assumed to be normally distributed. For this test, the null hypothesis (H0) is that the mean values of both populations are identical, whereas the alternative hypothesis (H1) is that the responses are statistically distinct (Mellinger/Hanson 2017: 88-91). The data analysis toolbox in Microsoft Excel was used for the analysis; this toolbox reports the significance level for H0, expressed as the p value in the tables. As a result, the significance level for H1 is reported as 1-p. An overview of the quantitative results is given in Sections 5.1 and 5.2. In addition, the effect size for each t-test was calculated using Cohen's d. Table 1 and Table 2 display the descriptive statistics used to compare the perceptions of experts and novices with regard to aspects of professional and task satisfaction. Both tables list the mean and standard deviation (SD) in conjunction with the results from the t-test (t and p values).

Questionnaire Items - Professional Satisfaction

Results from the professional satisfaction questionnaire are compiled in Table 1 and discussed in this section. In terms of the professional self-concept, differences between the two groups are significant, and experts exhibit a significantly higher level of satisfaction with their perceived role in the industry (p = 0.001) and with their occupational status in meeting their professional expectations (p = 0.001). The data yielded non-significant differences between the two groups with regard to social recognition (p = 0.17, Table 1) and professional identification (p = 0.87, Table 1). Additionally, novices and experts differ in their satisfaction levels regarding professional appreciation: novices do not feel as professionally appreciated as experts in the current LI (p = 0.05). Novices acknowledge the need to update their skillset and express dissatisfaction with the opportunities to engage in career development (p = 0.04) and with schedule constraints that do not allow for career development opportunities (p = 0.06), as argued in the literature.

[Table 1. Professional satisfaction items: means (M) and standard deviations (SD) for experts and novices, with t and p values.]

More precisely, the test did not identify significant differences between the two groups in terms of satisfaction with opportunities to present at conferences (p = 0.27, Table 1), leading training sessions or workshops (p = 0.14) or attending conferences as a form of career commitment (p = 0.83), albeit experts manifest relatively higher levels of satisfaction (M = 1.76, M = 1.70 and M = 1.36, respectively) than novices. The data showed non-significant differences between the two groups with respect to memberships (p = 0.11) and certifications (p = 0.07). Lastly, concerning the benefits of mentoring in career development, novices seem to be highly satisfied with the positive impact of mentorship (t = 3.32, p = 0.001) compared with experts. This indicates that mentoring is the strongest source of satisfaction among novices in the category of career commitment, followed by career development opportunities. Professional reputation is a source of professional satisfaction that novice translators may lack due to their emerging careers. As argued in the literature, having the occupational flexibility to work with providers/clients of one's choice (p = 0.001) and receiving new projects from long-term business relationships (p = 0.03) are crucial sources of professional satisfaction that only experts experience.
The test does not identify significant differences between the two groups in terms of satisfaction with being able to accept and reject projects (p = 0.08), the financial benefits of branding recognition (p = 0.07), or receiving projects from Fortune 500 companies (p = 0.90). Furthermore, while novices show dissatisfaction with having prestigious projects (M = 2.26), experts show higher levels of satisfaction with financial benefits (M = 1.77) and with the flexibility to accept and reject projects (M = 1.67) that novices may not enjoy. To conclude the results on professional satisfaction, respondents were asked about the possibility of leaving the profession and finding a new job, as possible signs of professional dissatisfaction. The data show non-significant differences, suggesting that both groups could leave the profession and that experts do not differ from novices in potential career turnover (p = 0.70).

Questionnaire Items - Task Satisfaction

Results for the task satisfaction questionnaire are summarised in Table 2. Prior to being asked about specific sources of task satisfaction, participants were asked about deadlines, the contribution of CAT tools to expediting the translating process, the significance of subject matter expertise in meeting deadlines, and perceptions of quality/time trade-offs (Table 2). CAT tools are regarded as essential to meet deadlines (M = 2.87, SD = 1.62, Table 2). The data show significant disagreement between the two groups regarding the necessity of subject matter expertise in order to translate faster and meet tight deadlines (p = 0.03). The results support the claim that advanced self-efficacy enhances experts' understanding of task scope (p = 0.001), as perceived in much higher levels of satisfaction with task scope among experts. In addition to task scope, this section also considers the results of the questionnaire in relation to such self-efficacy-related sources of satisfaction as task complexity and terminological complexity. Unlike novices, experts are highly satisfied with the level of terminological complexity observed in projects (p = 0.001) and with task autonomy (p = 0.001), with a medium effect size (d = 0.69). In terms of descriptions reflecting the accurate nature of the tasks, significant differences are reported between novices and experts (p = 0.03, Table 2), suggesting that novices rely on clear details in task descriptions in order to understand task requirements and the nature of the work that needs to be completed. Non-significant differences are reported between the two groups regarding the level of satisfaction with the complexity undertaken in tasks, the specific types of tasks involved, and deadlines not compromising quality. However, it may be noted that novices are slightly more dissatisfied with the lack of variety in tasks (M = 1.67), possibly due to the lack of flexibility or choice in selecting tasks, and expressed the belief that deadlines actually compromise quality (M = 3.32). Lastly, the results suggest non-significant differences between experts and novices vis-à-vis the concept of self-fulfilment. Novices' responses demonstrate somewhat higher levels of satisfaction with outstanding offers (M = 2.06) and with well-executed completed tasks that may bring more work (M = 1.40), suggesting that experts do not rely on receiving outstanding offers or completing a complex project successfully as intrinsic sources of motivation.
Unlike novices, experts feel slightly higher levels of self-fulfilment from a kind word on their performance from the client or project manager (M = 2.17), with a medium effect size (d = 0.31). Overall, the results identify terminology management, understanding of task scope, and the autonomy allowed in tasks as the most significant sources of task dissatisfaction among novices.

Discussion and Conclusions

The results of this study suggest that there are significant differences in the sources and levels of professional and task satisfaction among translators. Emerging trends have resulted in challenging working conditions for translators, particularly for new professionals starting in the translation industry as freelancers. The results indicate that even though novices strongly identify with the LI, they are not satisfied with their occupational status and are strongly considering a change of career. Experts, by contrast, enjoy the occupational flexibility that freelancing affords them thanks to their professional recognition. A translator's professional profile has become complex, typically requiring a high level of subject matter expertise and terminology management skills. Experts have benefited from these trends due to their high levels of self-efficacy, which have allowed them greater autonomy in decision making and hence higher task satisfaction. The same aspects, however, are seen to lead to dissatisfaction among novices. It could be argued that experts will always have a higher level of professional satisfaction due to their experience; the same cannot be assumed about task satisfaction. The relatively lower level of task satisfaction among novices accentuates potential challenges for the industry as Internet-related technologies increasingly automate low-skill translation tasks. The establishment of mentoring, training and certification programs to develop career paths could encourage novices to acquire higher-level skills while they come to understand the work environment of the industry and hone their skills toward translation expertise. Investment in training initiatives and certification programs, as well as increasing the 'professionalisation' of the language industry, could accelerate the development of subject matter and technical expertise among novice translators. These efforts could be enhanced by close collaboration between university programs in translation studies and translator associations. Such collaborations can foster mentorship programs, internships, and job shadowing opportunities. A case in point is mentoring, a cost-effective and feasible approach that could be implemented in virtual or face-to-face modes. An example of an existing program is the American Translators Association (ATA) Mentoring Program (ATA 2017), which pairs volunteer mentors with mentees over the course of a year; both mentors and mentees can be individuals or translation/interpreting companies that share the same fields of interest and compatible goals. In such mentorship programs, it is important to find commonalities in areas of specialisation, specific business goals, specific language pairs, and certification needs. Mentorship programs could allow a mentee to work with a certified translator to pursue a certification goal, or provide a newcomer with an opportunity to learn about the language industry in order to remain competitive. Mentorship could also be provided within language service provider organisations by project managers and experienced freelancers.
Internships and job shadowing experiences can also help novices equip themselves with multiple applicable skills, which can in turn result in intrinsic task satisfaction once they work in the language industry. In summary, it is crucial to bridge the large satisfaction gap between experts and novices, since young professionals must be constantly nurtured in order to develop motivated translators who are committed to building a career in the language industry. Despite the heavy adoption of automation, the language industry remains largely dependent on its indispensable human capital. As the LI is a nascent industry in need of new talent, structured career development and carefully managed training programs can be introduced to boost the confidence of novices and accelerate their acquisition of the advanced skills necessary to succeed in the language industry.
Photobiomodulation for Alzheimer's Disease: Has the Light Dawned?

Abstract

Next to cancer, Alzheimer's disease (AD) and dementia are probably the most worrying health problems facing the Western world today. A large number of clinical trials have failed to show any benefit of the tested drugs in stabilizing or reversing the steady decline in cognitive function suffered by dementia patients. Although the pathological features of AD, consisting of beta-amyloid plaques and tau tangles, are well established, considerable debate exists concerning the genetic or lifestyle factors that predispose individuals to developing dementia. Photobiomodulation (PBM) describes the therapeutic use of red or near-infrared light to stimulate healing, relieve pain and inflammation, and prevent tissue from dying. In recent years PBM has been applied to a diverse range of brain disorders, frequently in a non-invasive manner by shining light on the head (transcranial PBM, or tPBM). The present review discusses the mechanisms of action of tPBM in the brain and summarizes studies that have used tPBM to treat animal models of AD. The results of a limited number of clinical trials that have used tPBM to treat patients with AD and dementia are also discussed.

Introduction to Photobiomodulation

Photobiomodulation (PBM) describes the therapeutic use of red or near-infrared (NIR) light to stimulate healing, relieve pain and inflammation, and prevent tissue from dying. PBM used to be called "low-level laser (or light) therapy" (LLLT), but the name was changed to reflect the fact that the term "low" was undefined, lasers were not absolutely required, and inhibition of some processes was beneficial [1,2]. Photobiomodulation therapy (PBMT) describes the use of PBM as a treatment for various diseases or disorders. PBM was discovered over 50 years ago by Endre Mester in Hungary, working with hair regrowth and wound healing in mice [3]. Since then, PBM has gradually become more accepted by the medical profession, physical therapists, and the general public. This increase in acceptance is partly due to the increased availability of light-emitting diodes (LEDs) with wavelengths in the red and NIR regions and substantial power densities (up to 100 mW/cm² over fairly large areas). Most available evidence suggests that LEDs perform equally well compared to lasers of similar wavelengths and power densities [4]. However, LEDs have the advantages of greater safety, lower cost, and better suitability for home use.

Mechanisms of PBM

It is the first law of photobiology that a photon must be absorbed by a specific molecular chromophore in order to have any biological effect. The chromophores that have been postulated to be relevant to PBM absorb in different wavelength regions of the electromagnetic spectrum (blue, green, red, NIR); they are shown in Figure 1 and discussed below.

[Figure 1. Proposed chromophores for PBM that can absorb different wavelengths of light. Note that there is considerable overlap between the chromophores, and that the NIR absorbed by structured water is likely to be of longer wavelength (>950 nm).]

Cytochrome c oxidase (CCO) is the terminal enzyme (unit IV) in the electron transport chain, situated in the inner mitochondrial membrane. The electron transport chain, through a series of redox reactions, facilitates the transfer of electrons across the inner membrane of the mitochondria.
The net result of these electron transfer steps is to produce a proton gradient across the inner mitochondrial membrane that drives the activity of ATP synthase (sometimes called unit V), which produces high-energy adenosine triphosphate (ATP) from ADP. CCO mediates the transfer of electrons from cytochrome c to molecular oxygen. CCO is a complex protein, composed of thirteen different polypeptide subunits, and also contains two heme centers and two copper centers. Each of these heme and copper centers can be either oxidized or reduced, giving sixteen different oxidation states. Each of these oxidation states has a slightly different absorption spectrum, but CCO is almost unique amongst biological molecules in having a significant absorption in the near-infrared spectrum. In fact, Britton Chance estimated that over 50% of the absorption of NIR light by biological tissue could be attributed to this single enzyme as a chromophore [5]. In many publications, CCO has been shown to be a biological photoacceptor and transducer of signals activated by light in the red and NIR regions of the spectrum [6,7]. Specifically, absorption of the photons delivered in PBM seems to promote an increase in the availability of electrons for the reduction of molecular oxygen in the catalytic center of CCO, increasing the mitochondrial membrane potential (MMP) and increasing levels of ATP, cyclic adenosine monophosphate (cAMP), and reactive oxygen species (ROS), all of which indicate increased mitochondrial function and can trigger the initiation of cellular signaling pathways [8]. Recently, however, the CCO hypothesis has been called into question. Lima et al. [9] genetically engineered two different kinds of cells not to express any active CCO, and found that they responded equally well to 660 nm light compared to wild-type cells. Although other units in the electron transport chain, such as complexes I-III (complex II also being known as succinate dehydrogenase), also show increased activity as a result of PBM, CCO is still believed to be one of the primary photoacceptors. This notion is supported by the fact that low-level light irradiation such as PBM causes increased oxygen consumption, and is bolstered by the fact that the majority of oxygen consumption occurs at complex IV, and moreover that the addition of sodium azide, a CCO inhibitor, abrogates the effects of PBM [10,11]. Moreover, rho-zero cells that lack functional mitochondria do not respond to PBM in the same way as their wild-type counterparts [12]. Nevertheless, despite the amount of evidence in favor of CCO being a major chromophore for red and NIR light, mounting evidence suggests that this is not the whole story. Lima et al. [9] investigated two cell lines lacking CCO: a mouse line with Cox10 knocked out (which could not synthesize the heme a cofactor) and a human line with a mutation in the mtDNA gene coding for tRNA lysine (which lacked three critical CCO subunits). PBM (660 nm) caused increased cell proliferation in both wild-type and CCO knockout cells, together with increased ATP and citrate synthase levels. These results showed that functional CCO was not required for the ability of PBM to enhance metabolism and cell proliferation. A recent editorial [13] from Sommer in Ulm, Germany, suggested that the effects of red and NIR light (especially pulsed at low frequency, such as 1 Hz) on the interfacial water layers (IWL) inside cells could be an alternative explanation.
If these IWL were inside the mitochondria, then the lowering of viscosity resulting from the energy absorption could allow ATP synthase, which acts as a molecular rotor, to rotate faster and produce more ATP. On the other hand, if the IWL were localized within the plasma membrane, light absorption could increase the uptake of nutrients, accounting for the increased proliferation. Regardless of the actual chromophore, PBM can trigger retrograde mitochondrial signaling [14]. This refers to signals and communications passing from the mitochondria to the nucleus of a cell, rather than vice versa. The aforementioned mitochondrial changes result in an altered mitochondrial ultrastructure and the triggering of mitochondrial biogenesis [15]. As a result, membrane permeability and ion flux at the cell membrane are altered, in turn leading to altered activity of activator protein-1 (AP-1) and NF-κB [16]. There is emerging evidence that other primary chromophores, such as opsins, flavins and cryptochromes, may mediate the biological absorption of light, particularly at shorter wavelengths (blue and green). Opsins contain a cis-retinaldehyde molecule as a chromophore that is photoisomerized to the all-trans isomer, thus producing a change in protein conformation and initiating a signaling cascade [17]. Flavins and flavoproteins contain a chromophore such as riboflavin, flavin mononucleotide, or flavin adenine dinucleotide, and can carry out redox reactions when excited by light [18]. Cryptochromes are a special sub-class of flavoproteins that act as blue-light receptors in plants, animals and even humans [19]. Although evidence that light-gated ion channels act as mechanisms of action in PBM is sparse at present, it is gradually increasing. PBM is most likely to affect transient receptor potential (TRP) channels. First discovered in a Drosophila mutant as the mechanism responsible for insect vision, TRP channels are now known to be sensitive to light [20], in addition to a wide variety of other stimuli. TRP channels are calcium channels and are modulated by phosphoinositides [21]. Light-gated ion channels have attracted immense attention in the field of optogenetics [22]; however, the majority of those studies employ ion channels similar to the bacterially derived channelrhodopsin [22]. Most research relating PBM to light-gated ion channels has been done by testing the TRPV "vanilloid" subfamily of TRP channels. Evidence from studies by various groups [23-26] has led to the general consensus that TRP channels are most likely to be activated by green light. However, because green light lacks the penetrating ability of infrared or near-infrared light, it lacks practical clinical application. Nevertheless, Ryu et al. found that exposure to infrared light at 2780 nm attenuated TRPV1 activation, causing a decrease in the generation of pain stimuli [24]. A similar, but far less dramatic, antinociceptive effect was also observed when TRPV4 was exposed to light of the same wavelength. TRPV4 was also shown to be responsive to 1875 nm pulsed light, although it cannot be ruled out that these results were due to thermal rather than light stimuli [25], as water is the primary absorber of infrared light in this region. It is clear that water must be by far the most important chromophore at infrared wavelengths (>900 nm), considering its molecular absorption coefficient and its relative abundance in cells and tissues.
Nevertheless, PBM as usually carried out does not produce excessive heating of the tissues, especially within the brain. In fact, the most noticeable heating effect (if any) is felt on the skin of the scalp. How, then, can we explain that PBM can have powerful effects on the brain at wavelengths as long as 1064 nm [27,28]? One answer may lie in the concept of 'nanostructured water' or 'interfacial water' elaborated by Pollack [29][30][31]. This exclusion zone (EZ) water (which may be the same as the IWL discussed above [13]) absorbs optical radiation, which produces distinct physical changes in parameters such as viscosity and pH. Since the EZ water layers occur on intracellular membranes, it is reasonable to suggest that ion channels embedded within these membranes (for instance, in mitochondria) may be triggered by these physical changes. Since bulk water does not absorb IR light to the same degree as EZ water, this would explain why biochemical changes can take place within the cells while there is no detectable bulk heating of the tissue, as would have been expected if the IR energy were absorbed by all water molecules.

Alzheimer's Disease and Dementia

Dementia is the clinical term used to describe a broad range of brain disorders that affect cognitive and executive functioning and memory [32]. The diagnosis of dementia requires a change in mental function with a more pronounced decline than one would expect due to the normal aging process [33]. In 2015, 46.8 million people throughout the world were estimated to be suffering from dementia, with 58% living in low- and middle-income countries, and this number is expected to double every 20 years [34]. Alzheimer's disease (AD) is the most common type of dementia (60% to 70% of cases), followed by vascular dementia (25%) and Lewy body dementia (15%) [35]. AD was first described by Alois Alzheimer (1864-1915), who published his report in 1911 [36]. About 70% of the risk is probably genetic, with many genes proposed to be involved [37]. Other risk factors include a history of head injury, depression, and hypertension. AD is characterized by diffuse atrophy of the entire brain (especially of the cortex), accompanied by extracellular beta-amyloid plaques and intraneuronal neurofibrillary tangles composed of hyperphosphorylated tau protein [38]. The precise mechanisms of AD remain a subject of hot debate [39]. A wide variety of investigational drugs has been tested in clinical trials, but so far without much success. The following section will summarize some of the hypotheses.

The amyloid hypothesis has been the predominant explanation for decades. Aβ peptides (40 or 42 amino acids) are formed by sequential enzymatic cleavage of amyloid precursor protein (APP) by beta and gamma secretases. An increase in the level of Aβ42 leads to the formation of amyloid fibrils, which eventually develop into senile plaques. However, the failure of several drug trials that have targeted the amyloid peptides (beta and gamma secretase inhibitors) and amyloid plaques (immunotherapy approaches using monoclonal antibodies) has led to the concept that the amyloid plaques may be markers rather than causes of the brain deterioration [40]. An alternative hypothesis focuses on tau [41]. Tau is a microtubule-associated protein involved in microtubule assembly. Two isoforms (4R and 3R) are expressed in the adult human brain, mainly in the axons of neurons.
In AD brains, 3R and 4R tau accumulate in a hyperphosphorylated state, forming neurofibrillary tangles (NFTs) in cell bodies, or threads when formed in dendrites or axons. Many different brain disorders are characterized by tau pathology and are known as "tauopathies" [42]. These include frontotemporal dementia, corticobasal degeneration, Richardson syndrome, Parkinson's disease, chronic traumatic encephalopathy, and age-related tau astrogliopathy.

Neuroinflammation and reactive gliosis are hallmarks of AD [43]. Accumulating evidence suggests that microglia with the M1 phenotype are important players in AD [44]. Not only do the M1 microglia pump out pro-inflammatory cytokines, but these cells also down-regulate their phagocytic functionality and therefore fail to clear the amyloid plaques. Any therapy (such as PBM) that can switch the microglial phenotype from M1 to M2 may be helpful for AD.

The increased incidence of AD in patients suffering from hypertension and irregular heartbeat gave rise to the hypothesis of "micro-strokes" [45]. Micro-strokes caused by fibrous erythrocyte emboli or micron-sized cholesterol crystals could act as "seeding points" for the growth of amyloid plaques as a healing response. A related hypothesis concerns the influence of vascular dysfunction and micro-hemorrhages [46]. Vascular dysfunction is often described as causing vascular dementia, but there is increasing evidence that it plays a role in AD as well [47]. These micro-hemorrhages have been correlated with plaque formation [48]. Micro-hemorrhages in cerebral vessels could act as triggers to activate the innate immune system. They could also be indicative of sites of breakdown of the blood-brain barrier, which is considered one of the early markers of cognitive dysfunction [49].

Oxidative stress has been implicated in the pathogenesis of AD [50]. The evidence includes increased levels in AD brains of certain metals that can generate free radicals, such as iron, aluminum, and mercury, as well as increased lipid peroxidation, 4-hydroxynonenal, oxidative damage to protein and DNA, advanced glycation end products (AGE), malondialdehyde, carbonyls, peroxynitrite, and the presence of heme oxygenase-1 and SOD-1 in neurofibrillary tangles and amyloid plaques. However, although a diet high in antioxidants offers some protection, supplementation with antioxidants has largely failed to show any benefits [51].

Reductions in mitochondrial activity and glucose metabolism are widely seen in AD [52]. Changes in cytochrome c oxidase and morphological changes in mitochondria have been found. Activation of the integrated stress response and the transcription factor ATF4 may be caused by mitochondrial dysfunction.

Finally, another hypothesis implicates changes in the gut microbiome [53]. The bacteria themselves may secrete bacterial amyloid that may trigger cross-seeding of amyloid plaques, or else the bacteria may over-stimulate the innate immune response [54]. Bacteria, such as Porphyromonas gingivalis, have themselves been found in AD brains [55]. Other pathogens such as viruses and spirochetes may be involved in the brain, and the Aβ peptide may function as an antimicrobial defense peptide [56].

Mechanisms of PBM in the Brain

As will be seen in the following section, a bewildering array of different mechanisms has been proposed to account for the benefits of transcranial PBM (tPBM) on the brain. These are schematically shown in Figure 2.
Probiotic Escherichia coli Nissle 1917-derived outer membrane vesicles enhance immunomodulation and antimicrobial activity in RAW264.7 macrophages

Background
Probiotic Escherichia coli Nissle 1917 (EcN) has been widely studied for the treatment of intestinal inflammatory diseases and infectious diarrhea, but the mechanisms by which it communicates with the host are not well known. Outer membrane vesicles (OMVs) are produced by Gram-negative bacteria and deliver microbial molecules to distant target cells in the host, playing a very important role in mediating bacteria-host communication. Here, we aimed to investigate whether EcN-derived OMVs (EcN_OMVs) could mediate immune regulation in macrophages.

Results
In this study, after the characterization of EcN_OMVs using electron microscopy, nanoparticle tracking and proteomic analyses, we demonstrated by confocal fluorescence microscopy that EcN_OMVs could be internalized by RAW 264.7 macrophages. Stimulation with EcN_OMVs at appropriate concentrations promoted proliferation, immune-related enzymatic activities and phagocytic functions of RAW264.7 cells. Moreover, EcN_OMVs induced more anti-inflammatory responses (IL-10) than pro-inflammatory responses (IL-6 and TNF-α) in vitro, and also modulated the production of the Th1-polarizing cytokine IL-12 and the Th2-polarizing cytokine IL-4. Treatments with EcN_OMVs effectively improved the antibacterial activity of RAW 264.7 macrophages.

Conclusions
These findings indicated that EcN_OMVs could modulate the functions of the host immune cells, which will enrich the existing body of knowledge of EVs as an important mechanism for the communication of probiotics with their hosts.

Background
It is increasingly recognized that probiotics play an important role in maintaining intestinal health and regulating immune function in humans and animals [1]. Many probiotics have been developed as pharmaceutical products or dietary supplements to treat intestinal dysfunctions and diseases, such as diarrhea, irritable bowel syndrome and inflammatory bowel disease (IBD) [2,3]. A large number of in vitro and in vivo studies have indicated that the probiotic-mediated effects are mostly achieved through indirect ways, such as strengthening the intestinal epithelial barrier, regulating the immune system, and competing with pathogens for adhesion to mucosa [4,5]. Growing attention has been paid to elucidating the molecular mechanisms of communication between probiotics and their hosts. Escherichia coli strain Nissle 1917 (EcN) is a well-known probiotic isolated by Alfred Nissle from the faeces of a soldier who was not infected during an outbreak of shigellosis [6]. EcN is not pathogenic due to the lack of virulence factor genes in its genome when compared to pathogenic E. coli [7]. It is known that EcN colonizes the human intestinal tract well and modulates intestinal homeostasis and microflora balance [8,9]. EcN has been developed as a microbial product under the brand name Mutaflor, which is widely distributed in Central Europe to treat intestinal inflammatory diseases and infectious diarrhea [2]. Numerous studies have confirmed the immunomodulatory mechanisms of EcN, including induction of antimicrobial peptide expression, increase of immunoglobulin A and mucin secretion, enhancement of the intestinal barrier and promotion of anti-inflammatory immune responses [10][11][12].
Although many publications have revealed the EcN-mediated effects, it is still unclear how EcN establishes crosstalk with its host. Almost all Gram-negative bacteria and some Gram-positive bacteria release nanometer-scale membrane vesicles into the extracellular space, which are called extracellular vesicles (EVs) [13]. EVs secreted by Gram-negative bacteria are derived from the outer membrane and are thus termed outer membrane vesicles (OMVs). These vesicles are characterized as spherical bilayered phospholipid structures with diameters between 20 and 200 nm [14]. Purified OMVs contain a variety of bioactive molecules, such as cell-wall components, periplasmic proteins and bacterial nucleic acids, and have recently been considered a key intercellular communication platform [13]. OMVs can deliver these components in a stable and efficient manner directly to host cells and affect their biological functions, including inducing pathogenesis, modulating immune responses and signaling [14,15]. Many studies in the last few decades on pathogens, such as Vibrio cholerae [16], Staphylococcus aureus [17] and Salmonella [18], have indicated that OMVs cause cytotoxic responses in target cells by delivering virulence factors. Recently, several studies have found that OMVs derived from commensal bacteria or probiotics also play a key role in microbiota-host interactions. OMVs secreted by Bacteroides fragilis [19] and Akkermansia muciniphila [20], two species of symbiotic bacteria in the human intestine, were found to deliver immunomodulatory molecules to intestinal immune cells and induce anti-inflammatory immune responses. Bifidobacterium is a widely used Gram-positive probiotic, which produces EVs as mediators to activate intestinal immune cells [21]. Recently, vesicular proteins associated with adhesion and immune regulation were identified in EcN-derived OMVs (EcN_OMVs), indicating that these secretory vesicles have the potential to modulate the interaction of EcN with its host [22]. Moreover, Fábrega et al. and Alvarez et al. demonstrated that EcN_OMVs are involved in the induction of immune and defense responses in the intestinal mucosal barrier [23,24]. In this study, we further investigated whether EcN_OMVs mediate the regulation of host immune cells. We analyzed the proteome of EcN_OMVs and then evaluated the regulation of immune responses and antimicrobial activities by mouse macrophage RAW264.7 cells upon stimulation with EcN_OMVs in an in vitro model. These findings will enrich the existing body of knowledge of EVs as an important mechanism for the communication of probiotics with their hosts.

Preparation and proteomic analysis of EcN_OMVs

Purified EcN_OMVs were obtained from the culture supernatant of EcN using a series of filtration and centrifugation steps. After OptiPrep density gradient ultracentrifugation, we found a large number of particles in fractions 4-6 and 9-10 using nanoparticle tracking analysis (NTA) (Fig. 1a). The density ranges for these fractions were 1.127 to 1.175 g/mL and 1.271 to 1.295 g/mL, respectively, which is in accordance with previous results [25]. These vesicle-containing fractions were pooled and subsequently visualized using scanning electron microscopy (Fig. 1b) and transmission electron microscopy (Fig. 1c). These micrographs revealed that the vesicles were spherical particles with a size range of 50-150 nm.
This finding was confirmed by the result from NTA characterization of EcN_OMVs, which showed a peak at 99.2 nm in the size distribution of these vesicles (Fig. 1d). Furthermore, 189 proteins were identified by the LC-MS/MS analysis, and their subcellular localizations are presented in Fig. 1e. The subcellular localization of these identified proteins showed that 36.5% (69), 28.0% (53), 20.1% (38) and 9.5% (18) belonged to the cytoplasm, outer membrane, periplasm and inner membrane, respectively. According to the functional classification by the Clusters of Orthologous Groups of proteins (COG), most of these proteins were classified into metabolism, transporter activity, translation and transcription (Fig. 1f). All proteins identified in EcN_OMVs, their subcellular localization and COG functions are presented in Table S1.

Fig. 1 Visualization and characterization of OMVs derived from probiotic Escherichia coli Nissle 1917 (EcN_OMVs). a The particle numbers of the resulting fractions (1-10) after EcN_OMVs purification by OptiPrep density gradient ultracentrifugation. Representative scanning electron micrograph (b) and transmission electron micrograph (c) of purified EcN_OMVs (indicated by red arrows). d Size distribution and concentration of these vesicles determined by nanoparticle tracking analysis. e Subcellular localizations of EcN_OMVs proteins identified by proteomic analysis. f Identified proteins according to Clusters of Orthologous Groups of proteins (COG).

We found that many vesicular proteins, mainly of outer membrane origin, might contribute to the probiotic behavior of EcN, including intestinal adhesion and colonization (such as the flagellar proteins flgA, flgE and flgK), bacterial survival in host niches (such as many proteins associated with transport activities), antimicrobial activities (such as the murein hydrolases mltA and mltC), and immunomodulation of the host (such as many outer membrane proteins: OmpA, OmpC and OmpF). These multiple probiotic-related proteins identified in EcN_OMVs suggested that the vesicles might mediate the effects of this probiotic on immune regulation and disease protection.

EcN_OMVs taken up by RAW 264.7 macrophages

Previous studies have suggested that OMVs secreted by Gram-negative bacteria can be internalized by host cells and deliver biological components to regulate the host immune system [14,23]. In this context, we sought to confirm whether EcN_OMVs could be taken up by macrophages. EcN_OMVs were stained with the red fluorescent lipophilic compound DiI, and the cell membranes and nuclei were labeled with anti-F4/80 antibody and DAPI, respectively. After co-incubation of macrophages with DiI-labeled EcN_OMVs for 16 h, EcN_OMVs were observed in the cytoplasm of the cells by confocal fluorescence microscopy, indicating that these vesicles were internalized by RAW 264.7 cells (Fig. 2).

EcN_OMVs at appropriate concentrations promote the proliferation of RAW264.7 cells

As shown in Fig. 3a, compared with the control group, cell viability was significantly increased by 8.5 and 19.35% at 16 h after exposure to 0.1 and 1.0 μg/mL of EcN_OMVs, respectively, while it was significantly decreased by 8.1% at 16 h after exposure to 10 mg/mL of EcN_OMVs, suggesting that the vesicles at moderate concentrations were non-toxic to RAW264.7 cells. Lactate dehydrogenase (LDH) is an endoenzyme in normal living cells, and its activity in the cell supernatant reflects the integrity of the cell membrane.
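As context for how such LDH readings are interpreted, here is a minimal sketch of the standard LDH-release formula (the kit used in this study reports enzyme activity directly, so the function and the readings below are illustrative assumptions, not the authors' calculation):

```python
def pct_cytotoxicity(sample: float, spontaneous: float, maximum: float) -> float:
    """Standard LDH-release calculation: sample release as a percent of maximal
    (detergent-lysed) release, after subtracting spontaneous release."""
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

# Invented absorbance readings for demonstration
print(pct_cytotoxicity(sample=0.42, spontaneous=0.35, maximum=1.10))  # ~9.3%
```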
As shown in Fig. 3b, there was no significant difference in LDH activity between the EcN_OMVs-treated groups and the control group, indicating that the concentrations of EcN_OMVs used in this study did not cause cell damage.

Fig. 2 Internalization of EcN_OMVs by RAW264.7 macrophages. DiI-labeled EcN_OMVs (red signal) were incubated with RAW264.7 cells for 16 h at 37°C. The cell membrane was visualized by immunostaining with anti-F4/80 antibody (5 μg/mL), the macrophage marker, followed by DyLight Fluor-conjugated Goat Anti-Rat IgG (green signal), and the cell nuclei were stained with DAPI (blue signal). The samples were observed using a high-speed spinning-disk confocal microscope.

Together, these findings revealed that EcN_OMVs at appropriate concentrations could promote the proliferation of RAW264.7 cells. The concentration of 1.0 μg/mL was chosen as the final concentration of EcN_OMVs for the subsequent investigations in this study.

EcN_OMVs improve immune-related enzymatic and phagocytic activities

Acid phosphatase (ACP) is associated with the phagocytosis and clearance of exogenous substances by macrophages. ACP activity was significantly improved when RAW 264.7 cells were stimulated with EcN_OMVs or heat-killed EcN (Fig. 4a), indicating that both the vesicles and heat-killed EcN activated macrophages and enhanced their immune function. Nitric oxide (NO) is an important messenger molecule secreted by macrophages for immune responses. Compared with the control, stimulation with EcN_OMVs or heat-killed EcN significantly induced NO production in the cell culture supernatants (Fig. 4b). The higher activity of inducible nitric oxide synthase (iNOS) found in RAW 264.7 cells after stimulation with EcN_OMVs or heat-killed EcN was well in accordance with the results of the NO determination (Fig. 4c). Furthermore, the phagocytic activity of RAW 264.7 cells was significantly improved at 16 h after stimulation with EcN_OMVs or heat-killed EcN (Fig. 4d). Together, these data suggested that EcN_OMVs could activate RAW 264.7 cells and enhance their phagocytosis.

EcN_OMVs induce immunomodulatory cytokine secretion in RAW264.7 macrophages

We next evaluated the immunomodulatory effects of EcN_OMVs on RAW 264.7 cells. As illustrated in Fig. 5a, b and c, both EcN_OMVs and heat-killed EcN induced significant production of the pro-inflammatory cytokines IL-6 and TNF-α and the anti-inflammatory cytokine IL-10 by RAW 264.7 cells. Compared to stimulation with heat-killed EcN, EcN_OMVs promoted lower secretion levels of these cytokines. EcN_OMVs triggered higher induction of anti-inflammatory cytokines (ng range) than of pro-inflammatory cytokines (pg range). Furthermore, EcN_OMVs and heat-killed EcN also efficiently stimulated the secretion of IL-12p40 (a representative Th1-polarizing cytokine; Fig. 5d) and IL-4 (a representative Th2-polarizing cytokine; Fig. 5e). These results revealed that EcN_OMVs efficiently induced immune responses by macrophages and stimulated macrophages to secrete immunomodulatory cytokines.

EcN_OMVs improve the antibacterial activity of macrophages

To evaluate whether EcN_OMVs directly affect the ability of macrophages to fight bacterial infection, we examined the bacteria-killing ability of RAW 264.7 cells stimulated by EcN_OMVs using three bacterial pathogens: E. coli CVCC1554, S. Typhimurium CVCC3757, and S. aureus CVCC4265.
As shown in Fig. 6, after bacterial infection for 5 h, the cells treated with EcN_OMVs showed a stronger bactericidal ability against these three pathogens compared with the control group, indicating that EcN_OMVs enhanced the antibacterial activity of macrophages.

Discussion

In the last few decades, extensive studies have revealed that the gut microbiota plays a very important role in the development and function of the host's immune system [26]. Dysbiosis of the gut microbiota, which can trigger inappropriate immune activation and inflammation in the intestine, is closely associated with several gastrointestinal diseases and particularly with IBD [27]. Administration of probiotics is considered a promising strategy to restore intestinal microbiota composition and regulate the host immune response [1]. EcN is one of the most widely used probiotics for the treatment of intestinal disorders. Numerous studies have confirmed the therapeutic efficacy of EcN both in murine models of experimental colitis and in human IBD [9,28,29]. Among the mechanisms by which EcN exerts its beneficial effects, immunomodulation is recognized as a key contributor [30]. EcN-mediated immunomodulatory effects are mainly associated with its ability to induce the development and cytokine secretion of different immune cells in the gut. Besides the direct interaction between the probiotic and immune cells, these observed effects also depend on the release of secreted bacterial mediators [31]. Among the many bacteria-derived factors, EVs play an important role in the communication between microbiota and the host [14,23], as they can drive the long-distance transport of interior molecules throughout the intracellular compartments in a concentrated, protected and targeted manner [32]. OMVs carry many effector molecules of their parental bacterium, such as lipopolysaccharide (LPS), peptidoglycan and DNA, which can be recognized by Toll-like receptor (TLR) 4, Nod-like receptor (NOD) 1/NOD2 and TLR9 in the host's immune cells, respectively [33].

Fig. 4 EcN_OMVs modulate immune-related enzymatic and phagocytic activities in RAW264.7 cells. RAW264.7 cells were stimulated with EcN_OMVs and heat-killed EcN for 16 h. After these stimulations, cell supernatants and lysates were collected for examining the following indicators: a ACP activity in cell lysates; b NO production in cell supernatants; c iNOS activity in cell lysates. d Phagocytic activity of RAW264.7 cells to FITC-labeled dextran at 16 h after stimulations with EcN_OMVs and heat-killed EcN. Data are representative of three independent experiments. *P < 0.05; **P < 0.01; ***P < 0.001; NS, not significant; versus control.

Therefore, OMVs can mediate certain actions of the parental bacterium. Accordingly, it is conceivable that EcN_OMVs could influence the functions of the host's immune cells. In the present work, we found that EcN_OMVs could effectively be internalized by RAW 264.7 macrophages. This finding is similar to a previous study that showed the uptake of EcN_OMVs by Caco-2 cells [23]. Several studies have identified that bacterial OMVs enter host cells via receptor-mediated pathways or lipid rafts [21,34]. For macrophages, the uptake of OMVs may also occur through random phagocytosis, and the detailed underlying mechanism needs to be further studied. Additionally, the phagocytic activity of a macrophage is an important indicator of its immune function. The intracellular enzymes in macrophages, such as ACP and iNOS, are closely related to their phagocytic function [35,36].
ACP is a hydrolytic enzyme existing in the lysosome of macrophages, which is involved in the digestive function of various lysosomes [37]. iNOS is a key enzyme in the synthesis of NO, which is an important messenger and effector molecule in the defense system of macrophages [36]. Many studies have shown that the activity of ACP and iNOS can be significantly improved after macrophage activation [35,38]. This study revealed that the intracellular ACP and iNOS activities and the production of NO in the cell culture supernatant were significantly improved after stimulation of RAW 264.7 cells with EcN_OMVs, indicating that EcN_OMVs activated macrophages and enhanced the phagocytic functions of these cells. These findings were also confirmed by the phagocytosis assay of FITC-labeled dextran by RAW 264.7 cells and by the antibacterial activity assay.

Fig. 5 Profile of cytokine secretion by RAW264.7 macrophages stimulated with EcN_OMVs. RAW264.7 cells were stimulated with EcN_OMVs and heat-killed EcN for 16 h. After these stimulations, cell supernatants were collected for determining the following cytokines: a IL-6; b TNF-α; c IL-10; d IL-12p40; e IL-4. Data are representative of three independent experiments. *P < 0.05; **P < 0.01; ***P < 0.001; versus control.

Fig. 6 Antimicrobial activity of macrophages stimulated with EcN_OMVs. RAW264.7 cells stimulated with EcN_OMVs were incubated with three bacterial pathogens for 5 h, including two Gram-negative species, E. coli CVCC1554 and S. Typhimurium CVCC3757, and one Gram-positive species, S. aureus CVCC4265. After incubation, the number of viable bacteria in cell lysates was determined using the plate cultivation method. Data are representative of three independent experiments. ***P < 0.001.

It is widely known that immunomodulatory cytokines play a critical role in immune regulation and inflammatory responses. Previous in vitro and in vivo studies have demonstrated that EcN_OMVs could recapitulate the anti-inflammatory properties of EcN by modulating cytokine expression and production [23,31,39,40]. Fábrega et al. revealed that EcN_OMVs triggered the expression and secretion of the pro-inflammatory cytokines IL-6, IL-8 and TNF-α and the anti-inflammatory IL-10 in peripheral blood mononuclear cells [23]. Similarly, EcN_OMVs could elicit the expression of IL-6 and IL-8 in human intestinal epithelial cells in a NOD1-dependent manner, and induce the expression of IL-6, TNF-α, IL-10 and TGF-β in human monocyte-derived dendritic cells [31,40]. Another study, performed in a mouse model of experimental colitis, has shown that treatment of colitic mice with EcN_OMVs could reduce intestinal inflammation by inhibiting the expression of IL-1β, TNF-α and IL-17 and enhancing the expression of IL-10 in colonic tissues [39]. Consistent with these results, the present study showed that stimulation with EcN_OMVs induced the expression of TNF-α, IL-6 and IL-10. Several studies have demonstrated that the up-regulation of pro-inflammatory cytokines by EcN_OMVs was probably due to the presence of LPS or other pattern recognition receptor ligands, while some as yet unidentified vesicular components may induce the activation of IL-10 [23,41]. In this study, some vesicular proteins known to regulate the host immune response, such as the flagellins, outer membrane proteins and cytoplasmic enzymes, were identified in EcN_OMVs. It should be emphasized that EcN_OMVs increased the production of IL-10 to a higher level compared with that of TNF-α and IL-6.
IL-10 produced by macrophages can strongly inhibit the production of pro-inflammatory cytokines in cytokine signaling and plays a critical role in the maintenance of immune response balance [42]. These findings indicated that EcN_OMVs mainly functioned as a modulator of immune homeostasis. Our results are also supported by the findings of Alvarez et al. showing that EcN_OMVs could enhance the function of the intestinal epithelial barrier ex vivo [24]. Besides, this study also showed that EcN_OMVs activated the expression of the Th1-polarizing cytokine IL-12 and the Th2-polarizing cytokine IL-4 in RAW 264.7 cells. These polarizing cytokines play a key role in regulating the adaptive immune response in host defense [42,43]. Further in vitro and in vivo studies are required to fully understand the regulation of the adaptive immune response by EcN_OMVs.

As multiple molecules may synergistically contribute to EcN_OMVs-mediated immunomodulation, it is challenging to determine which specific molecules play the most important role. Among the components of bacterial OMVs, proteins account for the largest proportion and are considered essential for the functions of OMVs [13]. Previously, it has been illustrated that certain proteins isolated from probiotic-derived EVs exerted effects similar to those of the intact EVs. B. longum KACC 91563-derived EVs contain a protein, ESBP, that can induce the beneficial effect of the bacterial EVs [21]. EVs derived from L. casei BL23 carry several proteins associated with the probiotic effects of the bacterium, such as p40 and p75 [44]. In this study, several strain-specific proteins were identified in EcN_OMVs by proteomic analysis, such as several subunits (focA, focF, focG and focH) of the specific fimbriae F1C and iron uptake-related proteins (iutA) [22]. F1C fimbriae are closely related to the biofilm formation and intestinal colonization of EcN [45]. The components of iron acquisition systems may enable this probiotic to gain a competitive advantage against pathogens in host niches [22]. Therefore, these strain-related proteins may be involved in the beneficial effects of the probiotic EcN. Besides, polysaccharides such as LPS are also important components of OMVs. LPS in EcN has a shortened carbohydrate chain and lacks the repeating units of the O-chain compared to wild-type LPS [46]. Several studies have shown that the truncated LPS may be partially responsible for the anti-inflammatory properties of EcN [41]. Accordingly, it can be inferred that the LPS variant from EcN may make a relevant contribution to the EcN_OMVs-mediated immunomodulation. Furthermore, probiotic-derived DNA and RNA have also been demonstrated to have inhibitory activity in inflammatory responses. CpG DNA derived from probiotics and commensal bacteria mediates anti-inflammatory responses through TLR9 signaling [47]. L. gasseri-derived RNA suppresses inflammatory responses through a MyD88-dependent signaling pathway [48]. OMVs produced by several commensal bacteria have been shown to contain DNA and RNA. Whether nucleic acids are enclosed in EcN_OMVs, and whether these molecules are involved in the immunomodulatory activities of EcN_OMVs, remains to be elucidated. Although our data present evidence that EcN_OMVs enhanced immunomodulatory effects and antimicrobial function, further studies are needed to arrive at more generalized conclusions. Currently, we have yet to identify the IL-10-inducing anti-inflammatory molecules in EcN_OMVs.
Despite the fact that we performed a preliminary proteomic analysis of EcN_OMVs, the role of these vesicular proteins and other non-protein components in regulating the function of macrophages has not been thoroughly validated. Many uncontrollable conditions in the in vivo milieu, such as host lipases that can destroy the vesicles, may also influence the interaction between EcN_OMVs and cells. Therefore, in vivo studies are necessary to obtain generalized conclusions. However, notwithstanding these limitations, our results provide support for the biological activity of EcN_OMVs in modulating host immune responses.

Conclusions

Recent studies have revealed that OMVs released by the probiotic EcN strain probably play a very important role in the activation of host immune responses. In this study, we identified vesicular proteins of probiotic EcN-derived OMVs using proteomics, and demonstrated that these vesicles could modulate immune responses and antimicrobial activities in mammalian macrophages in vitro. These results indicate that OMVs could mediate the effects of the probiotic EcN on the host, especially the modulation of intestinal immune homeostasis. Although there were some shortcomings in the present study, we demonstrated that EcN_OMVs play an important role in modulating the functions of the host immune cells. This finding will enrich the existing body of knowledge of EVs as an important mechanism for the communication of probiotics with their hosts.

Bacterial strain and growth condition

The probiotic E. coli strain Nissle 1917 was purchased from Ardeypharm GmbH (Herdecke, Germany). The strain was grown at 37°C in Luria-Bertani (LB) broth with continuous shaking at 180 rpm.

Macrophage culture

The RAW 264.7 murine macrophage line was provided by the Cell Bank of the Chinese Academy of Sciences (Shanghai, China). Cells (passages 40-55) were cultivated in complete RPMI-1640 medium (Gibco/Life Technologies Corporation, Grand Island, NY, USA) containing 10% heat-inactivated fetal bovine serum (Zeta-Life, Menlo Park, CA, USA) and Penicillin-Streptomycin solution (100 U/mL of penicillin and 100 μg/mL of streptomycin; Sigma-Aldrich, St. Louis, MO, USA) at 37°C in a 5% CO2 atmosphere. The culture medium was exchanged every 24 h and the cells were passaged every 48 h.

EcN_OMVs isolation and purification

OMVs were obtained from the EcN culture supernatant as described in our previous study [49]. In brief, bacterial cells were grown for 14 h at 37°C until late log phase (OD600 of 0.9 to 1.0) and were removed by centrifugation at 12,000×g for 20 min at 4°C. The culture supernatant was passed through a 0.45-μm membrane (JINTENG, Tianjin, China) using a vacuum filtration device (Corning, NY, USA) to remove large particles such as residual bacteria and cellular debris. The filtrate was then concentrated with an Amicon ultrafiltration system (Merck Millipore, Billerica, Massachusetts, USA) fitted with a 100 kDa membrane (Millipore, Billerica, MA, USA). After an additional filtration through a 0.22-μm membrane (Millipore, Billerica, MA, USA), the concentrate was ultracentrifuged at 150,000×g for 2 h at 4°C. The EcN_OMVs pellet was washed and resuspended in sterile phosphate-buffered saline (PBS; pH 7.4) and then purified by discontinuous density gradient centrifugation to remove non-vesicular contamination [50]. The EcN_OMVs fraction at the bottom was mixed with 60% OptiPrep solution (Sigma-Aldrich, St. Louis, MO, USA) to obtain a 55% (v/v) OptiPrep solution.
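The mixing arithmetic behind this step, and behind the gradient layers described next, is simple C1·V1 = C2·V2 dilution; a minimal sketch (the 10%-step layer series is an assumption for illustration, since the paper states only the 5-55% range):

```python
def optiprep_layer(target_pct: float, layer_ml: float = 1.0, stock_pct: float = 60.0):
    """Volumes of OptiPrep stock and diluent needed for one gradient layer,
    from the dilution relation C1*V1 = C2*V2."""
    stock_ml = layer_ml * target_pct / stock_pct
    return stock_ml, layer_ml - stock_ml

# Hypothetical layer steps spanning the stated 5-55% (v/v) range
for pct in (5, 15, 25, 35, 45, 55):
    stock, diluent = optiprep_layer(pct)
    print(f"{pct:>2}% layer: {stock:.3f} mL stock + {diluent:.3f} mL diluent")
```

The same relation gives the first step in the text: reaching 55% from a 60% stock means the stock makes up 55/60 (about 92%) of the final mixture.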
A series of 1 mL OptiPrep gradient layers ranging from 5 to 55% (v/v) was overlaid onto the vesicle fraction and centrifuged at 180,000×g (16 h, 4°C) using a Beckman SW40 Ti swing rotor (Beckman, CA, USA). After centrifugation, each 1 mL fraction was collected from the top of the gradient to the bottom, and the relative particle numbers of each fraction were analyzed with a nanoparticle analyzer (NanoSight, Malvern, Worcestershire, UK). The vesicle-containing fractions were pooled, diluted in PBS, and then centrifuged (150,000×g, 2 h, 4°C) to completely remove the OptiPrep. The purified EcN_OMVs were uniformly dispersed in sterile PBS, followed by filter sterilization with a 0.45-μm membrane (Millipore, Billerica, MA, USA). The EcN_OMVs sample was stored at -80°C for future use. The protein quantification of EcN_OMVs was determined with a BCA Protein Assay Kit (TaKaRa Bio, Beijing, China).

Nanoparticle tracking and electron microscopy analyses

Nanoparticle tracking analysis (NTA) was conducted to determine the diameter and particle number of EcN_OMVs using an NS300 nanoparticle analyzer (Malvern, Worcestershire, UK) [50]. Morphological characteristics of EcN_OMVs were examined by scanning electron microscopy using a Field Emission Scanning Electron Microscope (S-4800, Hitachi, Tokyo, Japan) and by transmission electron microscopy using a JEM1011 Electron Microscope at 100 kV (JEOL, Tokyo, Japan), as described previously [51].

Proteomic analyses

Triplicate biological EcN_OMVs samples were sent to Hangzhou PTM Biolabs (Hangzhou, Zhejiang province, China) for proteomic analysis. In brief, proteins (10 μg) of EcN_OMVs were lysed by sonication on ice in lysis buffer (8 M urea, 1% protease inhibitor cocktail, 2 mM EDTA) and separated on a 12% SDS-PAGE gel. Major protein bands were extracted from the gel and then digested with trypsin (Promega) at 1:50 w/w (trypsin to protein) overnight at 37°C according to an in-gel digestion protocol [52]. The tryptic peptides were processed by UPLC coupled to tandem mass spectrometry (MS/MS) (LC-MS/MS; Thermo Electron, San Jose, CA, USA) [53]. The obtained MS/MS data were analyzed with the MaxQuant search engine (v. 1.5.2.8). Database searches were performed using the UniProt database against the EcN genome draft sequence, as described previously [22]. The identified proteins were classified by subcellular localization and Gene Ontology (GO) biological processes according to our previous study [49].

Visualization of EcN_OMVs uptake by RAW 264.7 macrophages

To evaluate whether EcN_OMVs were taken up by macrophages, the vesicles were stained with the lipophilic fluorophore dialkylcarbocyanine iodide (DiI; Sigma-Aldrich, St. Louis, MO, USA) as described previously [54]. In brief, the purified vesicles were resuspended in PBS in the presence of 1 μM DiI and incubated for 1 h at 37°C in a water bath. The DiI-labeled EcN_OMVs pellet was obtained by ultracentrifugation at 150,000×g (2 h, 4°C). To completely remove the unbound DiI, the pellet was resuspended in PBS and washed three times. After a final ultracentrifugation, the DiI-labeled vesicles (3 μg) were resuspended in PBS and then incubated with RAW 264.7 macrophages for 1 h in a 6-well plate (Corning, NY, USA) at 37°C in a 5% CO2 atmosphere. After the incubation, the cells were collected, washed three times with PBS, fixed with 4% paraformaldehyde in PBS for 30 min, and then permeabilized with 0.5% Triton X-100 (Sigma-Aldrich, St. Louis, MO, USA) for 5 min at room temperature.
The cells were then blocked with PBS containing 5% bovine serum albumin for 2 h at room temperature. Cell membranes were immunostained with anti-F4/80 antibody (5 μg/mL; Sigma-Aldrich, St. Louis, MO, USA) followed by DyLight Fluor-conjugated Goat Anti-Rat IgG (Abbkine, Redlands, CA, USA) [55]. The cell nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI) (10 μg/mL; Sigma-Aldrich, St. Louis, MO, USA). Subsequently, the samples were placed on glass slides and visualized with an Andor Revolution XD spinning-disk confocal microscope (Andor Technology, UK) using a 63× oil-immersion objective lens.

Cytotoxicity of EcN_OMVs

The LDH activity in the cell culture supernatant and the cell proliferation activity were measured to evaluate the cytotoxicity of EcN_OMVs. RAW 264.7 cells (1 × 10^5 cells/well) were grown in 24-well plates and treated with various concentrations of EcN_OMVs for 16 h. After incubation, the cell culture supernatant was collected for determination of LDH activity using an LDH kit (Nanjing Jiancheng Bioengineering Institute, Jiangsu, China). For determination of cell proliferation activity, RAW 264.7 cells (2 × 10^4 cells/well) were grown in 96-well plates and treated with various concentrations of EcN_OMVs for 6 h, after which the culture medium was completely discarded and fresh medium was added for a 24-h incubation. After this period, cell viability was detected using the CCK-8 cell viability assay kit following the manufacturer's protocol (Nanjing Jiancheng Bioengineering Institute, Jiangsu, China). Each treatment group was tested in triplicate, and three independent assays were performed.

Phagocytic activity of RAW 264.7 macrophages

RAW 264.7 cells (5 × 10^4 cells/well) were grown under the same conditions as for the cell proliferation activity assay. After the 24-h incubation, FITC-labeled dextran (Sigma-Aldrich, St. Louis, MO, USA) was added to each well and incubated for 30 min, after which the cell culture was discarded and the cells were washed three times with PBS. The cells in each well were fully lysed with 200 μL of 1% Triton X-100, and the relative fluorescence units were measured using a Synergy™ HTX Multi-Mode Microplate Reader (BioTek Instruments Inc., Winooski, VT, USA). Each treatment group was tested in triplicate, and three independent assays were performed.

Determination of immune-related enzyme activity and cytokine levels

RAW 264.7 monolayers (1 × 10^5 cells/mL) were grown in 24-well plates and stimulated with EcN_OMVs (1.0 μg/mL) or heat-killed EcN (bacteria:cell ratio = 25:1) for 16 h. After the incubation, the cell culture supernatant and cells from each well were harvested for determination of cytokines and immune-related enzymatic activities as follows: cytokines in the supernatant, including IL-4, IL-6, IL-10, IL-12p40, and TNF-α; NO production in the supernatant; and intracellular enzymatic activities, including ACP and iNOS. Cytokine levels were determined using the corresponding ELISA kits (R&D Systems, Minneapolis, USA) according to the manufacturer's protocol. NO production and enzymatic activities were determined using the corresponding assay kits (Nanjing Jiancheng Bioengineering Institute, Jiangsu, China). Each treatment group was tested in triplicate, and three independent assays were performed.

Antimicrobial activity assay

Three different bacterial pathogens were purchased from the China Veterinary Culture Collection Center (Beijing, China), including two Gram-negative species, E. coli CVCC1554 and S. Typhimurium CVCC3757, and one Gram-positive species, S. aureus CVCC4265.
The antimicrobial activity assay was performed as described previously [36]. In brief, RAW 264.7 monolayers (1 × 10^5 cells/mL) were grown in 24-well plates and treated with EcN_OMVs (1.0 μg/mL) for 16 h, after which the supernatants were completely removed and the cells were washed three times with PBS. Fresh antibiotic-free medium containing each bacterial pathogen was added to each well at a 100:1 bacteria/macrophage ratio. After incubation for 3 h at 37°C, the cells were washed three times to remove non-adhered bacteria, and fresh antibiotic-free medium was added. The cells were incubated for a further 2 h (a total of 5 h of pathogen invasion). Subsequently, the cells were washed three times and then lysed with 1% Triton X-100 for 5 min at 37°C. Cell lysates were immediately plated onto the corresponding agar plates for CFU determination. Each treatment had three replicates, and three independent assays were performed.

Statistical analysis

All data were presented as mean ± SE. Student's t-test was used for the analysis of differences between two groups. One-way ANOVA followed by the Newman-Keuls multiple comparison test was used to compare means among more than two groups. Statistical significance was declared at P < 0.05. All data analyses were performed using GraphPad Prism software 5.0 (San Diego, CA, USA).

Additional file 1: Table S1. EcN_OMVs proteins identified in this study.
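As an illustration of the statistical workflow described above, here is a minimal sketch in Python (the triplicate readings are invented for demonstration; the paper used GraphPad Prism, and Tukey's HSD below is only a widely available stand-in for the Newman-Keuls post-hoc test, which has no standard scipy/statsmodels implementation):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented triplicate readings (the paper reports mean ± SE of three independent assays)
control = np.array([1.00, 1.03, 0.97])
omv_treated = np.array([1.18, 1.22, 1.16])
heat_killed = np.array([1.30, 1.27, 1.33])

# Two groups: Student's t-test, as in the paper
t_stat, p_two = stats.ttest_ind(control, omv_treated)

# More than two groups: one-way ANOVA, then a post-hoc multiple comparison test
f_stat, p_anova = stats.f_oneway(control, omv_treated, heat_killed)
values = np.concatenate([control, omv_treated, heat_killed])
groups = ["control"] * 3 + ["omv"] * 3 + ["heat_killed"] * 3

print(f"t-test p = {p_two:.4f}; ANOVA p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # significance threshold P < 0.05
```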
FEBRILE INFECTION-RELATED EPILEPSY SYNDROME: A RARE CASE PRESENTATION

Febrile infection-related epilepsy syndrome is characterized by super-refractory status epilepticus that is resistant to conventional antiepileptic drugs. This case report critically analyses the treatment options adopted in a hospital to manage this syndrome. Despite the aggressive efforts that were taken, the patient succumbed to the complications of the illness and the side effects of the treatment strategies adopted. This shows that the treatment options currently available are inadequate, so intensive research into the pathogenesis of status epilepticus is required to frame treatment strategies that can bring about better outcomes.

INTRODUCTION

Febrile infection-related epilepsy syndrome (FIRES) is an acute-onset epileptic encephalopathy in which previously healthy children present with prolonged status epilepticus that becomes resistant to treatment [1,2]. The clinical profile consists of an initial phase of febrile infection, followed by an acute phase marked by recurrent seizures (partial or secondarily generalized seizures, facial myoclonus) during which the fever resolves. In the chronic phase, survivors present with drug-resistant epilepsy and neuropsychological impairment. The mortality rate in the acute phase is about 30%. Treatment requires polytherapy (on average, six antiepileptic drugs [AEDs] per patient). Early placement of patients on a ketogenic diet might optimize seizure control and cognitive outcome after FIRES. The significant role of immunotherapy has not been proven [3,4].

CASE REPORT

An 11-year-old female child, with no significant history of neurological disorder, was admitted with status epilepticus following a febrile illness 3 weeks earlier. Shortly after the onset of fever, she had developed abnormal behavior in the form of poor responsiveness and subsequently developed seizures. Later, the frequency of seizures increased, and she was admitted to a local hospital and evaluated. Magnetic resonance imaging (MRI) of the brain showed a linear horizontal enhancing vessel extending from a subependymal location at the frontal horn, suggestive of a vascular malformation (venous angioma or occult cryptic vascular malformation), with no evidence of obvious encephalitis or meningitis. Electroencephalography (EEG) showed bilateral temporo-occipital and temporal spikes. Other investigations done locally are shown in Table 1. While at the local hospital she had multiple episodes of seizures. She was started there on mannitol, antibiotics (tablet Septran [sulfamethoxazole 400 mg + trimethoprim 80 mg]), and antivirals (acyclovir 250 mg). A course of intravenous (IV) immunoglobulins (Ig) and steroids was also given. However, she developed status epilepticus; she was ventilated and started on a midazolam infusion (10 mg/10 mL) in addition to AEDs (tablet eptoin 100 mg, phenobarbitone 60 mg, tablet phenytoin 100 mg, tablet levipil 100 mg, tablet clonazepam 1 mg, tablet lacosamide 50 mg, tablet frisium 5 mg, and tablet valparin 500 mg), and was referred to our hospital, a tertiary care referral center. On admission she was on ventilation; in view of recurrent seizures, AEDs were optimized and titrated as per clinical requirements. MRI of the brain showed mild leptomeningeal enhancement, especially involving the bilateral superior frontoparietal regions. Antibiotics were started as per the available culture sensitivity report. Supportive treatment in the form of blood product transfusion and electrolytes was given.
In view of super-refractory seizures, she was started on a thiopentone infusion (1 g), injection fosphenytoin sodium 150 mg/2 mL, injection phenobarbitone 200 mg, tablet clonazepam 2 mg, and injection levipil 500 mg/5 mL, which were gradually tapered. She was on multiple AEDs. She was given two cycles of plasmapheresis. Tracheostomy was done in view of the need for prolonged ventilation. IV antibiotics (meropenem 500 mg, colistimethate sodium 1,000,000 IU) were given as per the culture sensitivity report. She was continued on AEDs and a ketogenic diet. She was gradually weaned to bilevel positive airway pressure (BiPAP) but developed sudden asystole without prior desaturation or tachypnea and was revived with resuscitative measures. Her echocardiogram (ECHO) was normal. She developed bed sores, for which a plastic surgery consultation was obtained. Debridement and exploration were done for the non-healing bed sores. She later developed bleeding from the debridement site and had altered coagulation parameters, which were managed appropriately. In view of hypoalbuminemia and non-healing ulcers, her diet was changed from a ketogenic diet to a high-protein diet. She had minor seizures during her hospital stay, and EEG showed left parieto-temporo-occipital epileptiform abnormalities. Follow-up EEGs were done and her AEDs were stepped down. IV albumin was given for 3 days in view of hypoalbuminemia, and 1 pint of packed red blood cells was transfused in view of low hemoglobin. Gradually her ventilator requirements were weaned down and BiPAP was started. Off-BiPAP trials were given, but in view of a tracheostomy leak, the tracheostomy tube was changed. Her serum phenobarbitone and eptoin levels were within normal limits. Nerve conduction studies showed evidence of a predominantly axonal sensorimotor neuropathy, affecting the lower limbs more than the upper limbs, most probably suggestive of critical illness neuropathy. She was on antibiotics for a bloodstream-associated infection. The need for ventilation during BiPAP weaning trials was explained in detail to her parents. She showed progress for a few days. In spite of intensive care, she had a cardiorespiratory arrest and was not revived, as per the wishes of the family.

DISCUSSION

FIRES involves status epilepticus which is pharmacoresistant and persists for several weeks to months. Survivors of the acute phase progress into a chronic phase marked by pharmacoresistant epilepsy and significant cognitive impairment [1,3-5]. The refractory seizures continue to occur despite treatment with multiple AEDs. The largest multicenter review in this area included 77 children. The acute mortality rate was 11.7%, and 93% of the survivors had refractory status epilepticus. Only 18% of the survivors were cognitively normal; 16% had borderline cognition, 14% had mild mental retardation, 24% had moderate mental retardation, 12% had severe mental retardation, and 16% were in a vegetative state [4]. Death may occur in the most refractory cases. Therapy with benzodiazepines is given in early status epilepticus (IV midazolam/IV or rectal diazepam). IV antiepileptics are preferred in established status epilepticus. If seizures continue to persist for up to 2 hrs despite the above treatment, then general anesthesia is recommended, with dosing based on an EEG burst-suppression approach (thiopental/pentobarbital/midazolam). This approach is associated with declining cognitive status. Anesthesia is recommended to prevent excitotoxicity. The disadvantages of anesthesia are hypotension, cardiorespiratory depression, and the development of acute tolerance.
Pentobarbital follows zero-order kinetics, so it has a tendency to accumulate and prolong the recovery phase [6,7]. Although the effectiveness of AEDs has not been clearly established in super-refractory status epilepticus, it is conventional practice to administer antiepileptics along with anesthesia. The AEDs conventionally used are carbamazepine, lacosamide, levetiracetam, phenobarbital, phenytoin, topiramate, and valproate. However, there is no evidence that any of these is more or less effective than the others [8]. The recognition that super-refractory status epilepticus may be due to antibodies directed against neural elements, and that inflammation plays an important role in epileptogenesis, has led to the potential use of steroids and immunotherapy. However, there are no clear guidelines about dose, duration of therapy, and evaluation of effectiveness. Steroids are associated with gastrointestinal ulceration, sodium and fluid retention, and psychiatric disturbances. The major adverse effects of Ig are coagulation disorders and hypertension [9][10][11]. Early ketogenic diet treatment, especially during the acute phase, may optimize both seizure control and cognitive outcomes in FIRES [4]. The ketogenic diet is associated with acidosis, constipation, hypoglycemia, and hypercholesterolemia. The successful use of magnesium infusion and hypothermia in status epilepticus has also been reported [12,13].

CONCLUSION

The etiology of FIRES has not been clearly defined. Optimum seizure control is not obtained using conventional multiple AEDs. Clear knowledge about the underlying pathophysiology is required to frame treatment strategies. Several hypotheses have been put forward, and the scope for future research in this area is huge. A child with refractory seizures requires the care of a highly skilled team with expertise in pediatric intensive care, neurology, epileptology, nursing, pharmacy, and dietetics.
The Courage to Create: The Role of Artistic and Spiritual Activities in Prisons

Artistic and spiritual activities should be considered as important elements in varied and diverse responses to offenders' needs: they value humanity and seek well-being. This article examines the role of interventions delivered to prisoners that do not fit within the categories of psychology, education or training (for example, pastimes such as visual and performance arts, meditation and yoga), and maps an alternative terrain to traditional concepts of rehabilitation and treatment. Whilst acknowledging the need to evidence effectiveness in order to satisfy policy makers, victims, and the wider public, we explore the constraints of quantifying the impact of these activities.

This article examines the role of artistic and spiritual activities with prisoners. In particular, it is concerned with the way in which calls for evidence-based policy and practice may undermine diverse and responsive work that occurs beyond the realm of accredited offending behaviour programmes. For a number of years, activities facilitated by the arts sector have been successfully taking place with prisoners; however, limited empirical evidence exists in relation to their outcomes. This may partly be due to the ideological conflict between the arts and sciences but, equally, be a result of government funding prioritising interventions of a psychological nature. But, as a result of increasing acknowledgement of the role that voluntary and charitable organisations play in the criminal justice sector, it is possible to see an emerging recognition of the need to evaluate this work. This article initially sets out the links between spirituality, creativity and affect before moving on to consider the way in which affect is integral to the commission of, and response to, crime. A review of government policy which focuses on partnership approaches with the voluntary and charitable sector is then explored; this is particularly pertinent for artistic and spiritual practitioners, as they predominantly originate from this sector. The latter section of the article highlights the merits of these approaches for prisoners, drawing on some of the limited research currently available in this area, before concluding with some reflections on both the ideological and methodological difficulties of seeking to evaluate artistic and spiritual activities. We finish by challenging both researchers and policy makers to consider a more democratic and empowering approach to evaluation, which will require the courage to move beyond what may be politically acceptable.

During the early spring of 2009, a paper sculpture entitled 'Bringing Music to Life' was put on display at the South Bank Centre. The piece used the cut and folded sheet music of Beethoven's Ninth Symphony to represent an orchestra and choir, and was considered to be an exquisite piece of art, until the identity of the artist was revealed. The Royal Festival Hall had bought the piece of work from the Koestler Trust, and it had been made by the double rapist and child murderer, Colin Pitchfork, whose legal team was in the process of questioning his sentence length (Purves 2009). To the Koestler Trust the offence committed by the artist is not of concern; it is the artistic value (both intrinsic and extrinsic) of the work which is significant.
To some readers of the Guardian, The Times, Sun, Independent, Daily Mail and Leicester Mercury, the value of the artwork was inextricably linked to Pitchfork's offences. The media furore over the sculpture raises questions as to why creative activities with prisoners are negatively regarded by some sections of the community. If Mr Pitchfork had been celebrating achievement in an academic examination or an accredited programme, it is hard to envisage that this emotionally-heightened reaction would have ensued. Pitchfork's creative act can be viewed as the manifestation of his being; something of which his victims' families did not want to be reminded. Perhaps it is because the creative act embodies the living, breathing essence of the Self, what Negus and Pickering (2004, p.2) call 'the external manifestation of divine creation', that this negative reaction arose.

Spirituality, Creativity and Affect

The term 'creativity', with its origins in Judaeo-Christian culture, still invokes concepts of religious practice, but it was not until the 18th Century that this word was linked to the processes of 'doing art'. Even when the word was used to explain the role of artistic endeavour, it often had overtly religious or spiritual overtones. The concept of creativity changes over temporal and cultural space, much as crime and culture do, and in the modern era creativity is considered to be an indicator of, for example, individuality, sub-cultural attachment or non-consumerism. This leads to the suggestion that some have replaced the quest for spiritual or religious meaning with artistic activity (Negus and Pickering 2004;Misra, Srivastava and Misra 2006). The link between creativity and spirituality is the concept of affect; the feeling which is promoted by worshipping, creating or doing something risky. Affect is something more than the emotional response to a particularly involving activity. Massumi (2002) further explains that:

. . . processes larger than ourselves. With intensified affect comes a stronger sense of embeddedness in a larger field of life - a heightened sense of belonging, with other people and to other places. (p.214, italics in original)

Halsey and Young (2006) use the concept of affect to explore and explain the feelings of graffiti writers and ask us to consider the visceral nature of the physical and emotional impact of committing this form of 'crime'. They note the pride and feeling of community spirit that is invoked through the process of creating a piece of artwork, and consider its importance in helping to explain the graffiti writers' behaviours. This conceit is wonderfully illustrated by a scene in The Shawshank Redemption when Dufresne, after barricading himself in an office, plays a piece from The Marriage of Figaro over the prison speaker system. The narration provided by Red highlights the beauty and transformative effect of the music, ending with '. . . every last man in Shawshank felt free'. If affect, which is something more than simply an emotional response to committing an act of crime (Katz 1988;De Haan and Loader 2002;Karstedt 2002), is important in the commission of crime, might it play a role in the curtailment of offending? The association of affect to social capital (Freiberg 2001) and from social capital to theories of desistance (see, for example, Farrall 2004;Giordano et al. 2008)
should be an avenue for further exploration but, in the climate of punitive attitudes towards offenders, combined with the demands for a primarily experimental evidence-base for policy making, the role of affect, and the artistic and spiritual activities which may ignite it, is often ignored by policy makers. Spiritual practices rarely feature in any criminal justice policy despite there being growing evidence of their ability to foster positive change. Rucker (2005) notes that a number of men with convictions for violence found yoga and meditation brought them 'self-mastery'. In essence, they learned to control their emotions, feelings and temper. The empowering practice enabled the men to take control, but in a respectful and peaceful manner. Winkelman (2003), who used shamanic drumming as part of a substance abuse rehabilitation programme, reports significant benefits on 'physiological, psychological and social' levels to aid the recovery process (p.4). Derezotes (2000) reveals that the adolescent sex offenders who did yoga particularly valued the spiritual aspect of this activity. In addition, the practice allowed them to constructively channel their anger as a result of their increased levels of self-awareness and self-control. In her review of the arts (whilst acknowledging some methodological limitations), Hughes (2005) found evidence of personal and social development. Relationships between prisoners and staff were enhanced as a direct result of the informal contact that these activities encouraged. This, in turn, helped to maintain good order, with a corresponding reduction in adjudications for problematic behaviour. Artistic/spiritual activities can have a 'humanising' effect on prison culture (Wrench and Clarke 2004) and, because prison work is highly demanding and emotional (Crawley 2004), practices like yoga can help prison staff re-engage with their feelings and release stress (Prison Phoenix Trust 2009). Recognising and working with emotion is highly significant in criminal justice, and public opinion concerning crime prevention policies is heavily influenced by feelings (Freiberg 2001). Denial of affect will serve only to reinforce the lack of emotional intelligence pervading modern society, leading to increased aggression, depression, social ineptitude and crime (Goleman 1996). Prisoners need to undertake activities that not only address their offending behaviour, but engage them holistically and enhance their emotional well-being. Acknowledging the role and impact of affect, therefore, must be at the heart of criminal justice policy, practice and research.

Government Responses and Third Sector Involvement

The history of punishment, the treatment model, 'nothing works', 'What Works?', popular punitivism and the effects of new managerialism on the criminal justice system (CJS) are well-documented. So, too, is the development of evidence-based, cognitive-behavioural interventions (CBT) which aim to reduce reoffending (see, for example, Hollin and Bilby 2007). These programmes may offer some success with some prisoners in some circumstances (Falshaw et al. 2003), but the lack of unequivocal evidence of reduced recidivism (Chitty 2005) has led to calls for a refocusing of Ministry of Justice policy.
Offending behaviour programmes should only 'continue to be offered as part of the range of interventions for prisoners but fitted into a much wider rehabilitation agenda' (Home Affairs Select Committee 2004, p.234). Alternative practitioners have long understood that a 'one size fits all' approach does not recognise and value prisoners' diversity, and proponents of the 'Good Lives Model' assert that risk/needs approaches, such as CBT, are essentially negative, focusing on the eradication of unacceptable behaviours rather than on 'promoting pro-social and personally more satisfying goals' (Ward and Brown 2004, p.245). A rigid reliance on accredited programmes may be misguided when evidence of effectiveness may sometimes be considered elusive (Johnston and Hewish 2008). The goals of HM Prison Service include the duty to look after prisoners with humanity, as well as rehabilitating offenders to lead crime-free lives on release from prison. If we accept that an element of humanity is the need and desire to express ourselves in, sometimes, a nonverbal and creative manner, then we must also acknowledge that this interpretation demands the provision of artistic and spiritual activities within the prison estate. In addition, there is now an increasing indication that the government recognises the benefits of an eclectic approach to work with offenders because of its focus on partnerships with the 'Third Sector' (Ministry of Justice 2008), the creation of the Arts Alliance in November 2008 (Anne Peaker Centre 2008) and through running interventions that link CBT programmes with elements of drama-based approaches (Blacker, Watson and Beech 2008). The strengths of the Third Sector (defined by the Ministry of Justice as voluntary and community organisations, social enterprises, co-operatives and mutuals) lie in its ability to be 'flexible, non-bureaucratic and responsive' and to 'offer holistic provision to deal with multiple needs' (Ministry of Justice 2008, p.16). The fact that the Third Sector is able to draw on local networks, including minority and faith communities, is an added advantage in the work it undertakes and the social capital it reinvests (Ministry of Justice 2008). However, a number of Third Sector organisations feel nervous about the government's call to partnership (Silvestri 2009). Johnston and Hewish (2008) argue that the spontaneous and emotional nature of their work does not sit well with the 'What Works?' programme and Prison Service Order 4350, which governs the accreditation of interventions. The reconfiguration of activities to meet accreditation requirements will, they fear, eradicate the very essence of what makes them successful: sensitivity, adaptability and variety. In the winter of 2008, a course facilitated by The Comedy School at HMP Whitemoor was cancelled by the Justice Secretary, Jack Straw (Guardian, 21 November 2008), after being successfully run for ten years (Dugan 2009). Straw believed that a comedy course was not a justifiable use of taxpayers' money, despite the fact that it seemed to address elements demanded by the decency agenda and had been running with a Third Sector organisation for a considerable amount of time. On the back of the press coverage of this story, Prison Service Instruction 50/2008 was issued to help governors decide what constitutes acceptable activities in prisons: 'the public acceptability test'.
This guidance places victims and the public at the heart of these decisions in that governors must 'avoid those which would generate indefensible criticism and undermine public confidence in the Service' (National Offender Management Service 2009, p.1). How they are to establish what the public thinks is not included. Greater community engagement is at the heart of the government's agenda to respond to victims and the wider public in matters of crime and disorder (Casey 2008) and, whilst this aim is laudable, provocative media campaigns that seek to elicit emotive and punitive responses (illustrated by the Pitchfork and Comedy School cases and the inclusion of Part 7 of the Coroners and Justice Act 2009 on criminal memoirs) often prevent well-informed public debate on crime and disorder issues. It is pertinent that creativity is often regarded as resistance to the 'excessively bureaucratic and manipulative' (Fisher 2002, p.1) and that artistic and spiritual activities require space: space to take risks and be impulsive (Ashmore 2008). This is supported by Matarasso (1997) who notes that:

encouraging people to take risks may not seem to be the most useful impact the arts could claim, but risk is fundamental to the human condition, and learning to live with it is a prerequisite for growth and development . . . (p.59)

But prisons are known for neither risk nor impulse and, whilst it is not suggested that they could successfully operate without some level of discipline within a well-organised regime, the lack of space for prisoners to demonstrate any control and self-direction is concerning. If dependency is to be avoided, prisoners need to be empowered to take some degree of responsibility over their daily lives (Prior 2001). Indeed, many accredited programmes aim to build such agency and self-efficacy (Kemshall and Canton 2002). Alternative approaches of a spiritual and/or artistic nature can provide the necessary motivation to engage the most disaffected prisoners and empower them to take part in other prison-based interventions and programmes (Hughes 2005;Digard, Grafin von Sponeck and Liebling 2007;Cox and Gelsthorpe 2008;Wilson, Caulfield and Atherton 2009). Indeed, a document produced by the Department for Education and Skills (2004, p.30) clearly recognised the importance of the arts curriculum in learning and skills provision for adult offenders. Arts-based and arts-informed activities illustrate the importance of creativity in social movements by allowing learners to improve their levels of self-esteem and enabling them to develop a set of skills of value in both their personal and professional lives. These forms of activities not only have an impact on their own, but we must also consider the importance of sequencing artistic/spiritual interventions with, for example, CBT-based programmes, as well as the inclusion of creative elements in other forms of empirically-evidenced interventions with offenders (Blacker, Watson and Beech 2008).

The Value of Artistic and Spiritual Activities

The value of engaging prisoners in 'purposeful activity' has long been recognised and is part of Her Majesty's Inspectorate of Prisons criteria against which prisons are assessed (HM Inspectorate of Prisons 2008). Prisoners should not only engage in education and training but also have time and space to foster and develop positive relationships, enhancing the 'dynamic security' of the prison (HM Inspectorate of Prisons 2007).
Therefore, artistic and spiritual activities should constitute purposeful activity as they produce these types of benefits but, with the current economic downturn and the prison service facing budget cuts of at least £80 million per year (The Times Online, 9 February 2007), it is easy to see how creative activities that are viewed as lacking practical utility may be the first to be cut. The government's Green Paper, Reducing Re-offending Through Skills and Employment (Department for Education and Skills 2005), made only the briefest acknowledgement of the arts in work with offenders, and activities of a spiritual nature are conspicuously absent; the main focus being on training for jobs such as 'welding, carpentry, metal work or fork-lift driving' (p.24), industries, it should be noted, that are currently facing increasing levels of unemployment as a result of the economic downturn (Fitzgibbon 2009). Yet, Robinson (2001) reminds us that 'in 1998 the Government estimated that the[se] creative industries had generated annual revenues of £60 billion, a tenfold increase in ten years' (p.41). It is concerning, therefore, that the Green Paper failed to consider this sector as a serious contender for the future employment of prisoners. Recently, however, the government seems to have reassessed the role of creativity and the arts in all aspects of education for children and adults. At the Royal Society of Arts, the Minister for Higher Education and Intellectual Property, David Lammy (2009), noted that not only did an understanding and practice of arts and creativity enable us to understand the development of our society and culture, but it taught people 'soft skills' which were valued by employers. While the speech mainly focused on concepts of a liberal arts tertiary education, the point being made is still the same: the arts are acclaimed, but only in relation to the economic benefit that they might bring to the country. Creativity, art and their associated skills are, in the current climate, neither 'acceptable' nor 'purposeful' unless they can be commodified and are seen as having economic value. Identifying the extrinsic value of artwork, and of the skills needed to create it, is, at the very least, problematic and is, in itself, a defining feature of this form of endeavour. But this leads us to further question the role of any activity which is neither defined in terms of its ability to produce artefacts for consistent and continual consumption (Loader 2009) nor identifies its creators as legitimate or illegitimate consumers (McCulloch and McNeill 2007). The attribution of value only to activities that generate consumables in this way fails, of course, to encompass spiritual practices whereby the inward development of Self is often hidden or difficult to verbalise. Similarly, music projects in prison have also found that participants felt 'a sensation of peace and connection that they could not do justice to through verbal description' (Digard, Grafin von Sponeck and Liebling 2007, p.5). But, interestingly, there has been a recent change of emphasis where subjective well-being is a key driver for economic policy making. The Government Office for Science (2008), in light of the UK's changing economy, commissioned a project which sought to examine how to make best use of the UK's material and mental resources.
The Foresight project sets out a number of key challenges facing the UK due to shifting demographics, the global economy, science and technology, and the nature of society. The report proposes that policy makers harness and promote 'mental capital' in order to foster well-being in place of traditional economic policies which emphasise monetary wealth. This more sustainable and holistic model of well-being has been promulgated by the new economics foundation (Aked et al. 2008). Within the findings, prisoners are acknowledged as one of the groups at high risk of poor mental health and, in order to increase their 'mental capital', they need to build resilience. Resilience is the development and maintenance of protective factors which lead to 'positive adaptation in the face of significant adversity or trauma' (Sutherland et al. 2005, p.15) and help to deter crime. Aked et al. (2008) note that well-being 'comprises two main elements: feeling good and functioning well' (p.1) and that '[E]xperiencing positive relationships, having some control over one's life and having a sense of purpose are all important attributes of well-being' (p.2). Artistic and spiritual practices can achieve these aims (Hughes 2005;Rucker 2005), enabling prisoners to build resilience and an array of protective factors which can lessen the negative impact of imprisonment and reduce the risk of recidivism. But this well-being cannot be created through the actions of dedicated practitioners alone. An environment conducive to creativity also has to be cultivated, and the role of prison staff in cultivating this safe space has been identified as an important component for personal growth and transformation to occur (Digard, Grafin von Sponeck and Liebling 2007). Fisher (2002) argues that 'we live in a world whose institutions are increasingly dominated by "competence control"' (p.14). He avers that, as a result of this regulation, it becomes increasingly difficult to take risks and move beyond required performance indicators. This over-emphasis on control and predictability eliminates the necessary space for innovation and creativity.

Evaluation and Evidence of Effect

The current re-emergence of experimental criminology reflects attempts to predict and control the world around us (Hope 2009). This desire to predict and control is prevalent in prisoner activities and demonstrates the preoccupation with knowledge based on reason and scientific evidence, often to the detriment and exclusion of the spiritual or artistic. This duality is by no means recent; Negus and Pickering (2004) note that, since the concept of creativity has been used to mean the endeavour of artists, science, logic and reason have been set against creativity and spirituality. We certainly now exist in a political environment where 'Science is the largely unquestioned source of authoritative knowledge in the modern world' (Robinson 2001, p.142). Creativity and spirituality might thrive in an environment driven by a decency agenda but, given budget constraints and human resource and prison estate issues, this may be more problematic than arguing for interventions which are primarily focused on reducing reoffending and are 'proven' to work. Research which illustrates the efficacy and cost-effectiveness of criminal justice (as well as, for example, education and healthcare) policies is required before interventions are provided nationally (McGuire, Mason and O'Kane 2000;Wrench and Clarke 2004).
'What Works?' is, of course, the guiding principle for offending behaviour programmes delivered in the prison service; however, this raises questions for artistic/spiritual interventions, which might not easily fit into research paradigms or evaluation models acceptable to policy makers. The need to evidence effectiveness, in conjunction with the methodological complexities of evaluation (Hollin 2008), often acts as a barrier to creative environments and the provision of alternative activities with prisoners. How might researchers provide robust evidence about the efficacy of interventions that sometimes do not identify reduction in reoffending rates as a key outcome (Hughes 2005)? This is particularly problematic when aims are ephemeral and do not lend themselves to investigation via traditional evaluation methodologies (Matarasso 1997), or where those delivering activities have a theoretical objection to the basis of evaluation objectives and ideals (Miles 2004). There is much written about the impact of creativity and artistic interventions with school children, harmed social communities and patients in hospitals (see, for example, Hewitt 2004), but there seems to be little robust evaluative or research work carried out on artistic or spiritual endeavours within criminal justice settings. Whilst there is increasing interest in researching the effects of spiritual practices from a health perspective (see, for example, Daaleman and Frey 2004), little exists in relation to the criminal justice setting. In her literature review of practice and theory of the arts in the criminal justice sector, Hughes (2005) found 'an abundance of success stories' (p.7), but the ability to explain the reasons behind this success is still outstanding. Methodological rigour was often missing in evaluation reports, perhaps not surprising given Matarasso's (1997) assertion that 'people, their creativity and culture, remain elusive, always partly beyond the range of conventional inquiry' (p.72). Miles and Clarke (2006) also reveal that many factors have prevented the arts from evidencing their effectiveness to the required standard of 'What Works?'. Time, space, differing cultures, inadequate funding, limited group sizes, access to information, and 'a general reluctance among arts practitioners to break down and specify aims and objectives' (p.61) were all difficulties to be overcome. To ensure that artistic and spiritual programmes do not continue to be piecemeal, short-term funded projects, not only within the CJS but within social and community settings too (Matarasso 1997;Hughes 2005;Miles and Clarke 2006), a research base outlining the positive impact of these interventions needs to be developed (Hewitt 2004). Discovering the mechanisms of effectiveness and positive outcomes is important, as Matarasso (1997) found that incomplete interventions and poorly-conceived and facilitated community arts projects tended to have negative impacts on the people and environments they were supposed to help and support. This finding mirrors research that shows offenders who took part in offending behaviour programmes and did not complete them were more likely to reoffend than those who had never started a programme in the first instance. The methods associated with democratic evaluation consider public accountability to be at the core of the evaluative model.
In this role, the evaluator becomes an information broker, passing the views of practitioners, participants and local organisations back to central government. However, this means that these views are filtered through the ideological and methodological lenses of the evaluators, which may act as a barrier to practitioner participation. Yet, Greene (2006) goes on to note that there are people carrying out evaluation research who are committed to using the process as a form of liberation and empowerment for those they are evaluating. This takes the concept of evaluation to its very limit. She notes that these people move from a 'value-neutral' position to a 'value-relative' one and then beyond to a 'value-committed' stance in evaluations. This suggests that they support the ideological basis of the researched programme. However, in carrying out research in a political environment which supports the pre-eminence of randomised controlled trials and value-free evidence, value-committed research will not be considered to be an acceptable resource which will influence policy making. But it might be possible to detect a more courageous government response to work with prisoners that is beginning to recognise 'the possibility of more than one path to truth' (Misra, Srivastava and Misra 2006, p.425). Miles and Clarke (2006) note that the Home Office/Ministry of Justice's insistence on randomised controlled trials as the only way of evidencing success is being reviewed. Perhaps the punitive era of penal policy is about to make way for a more creative, holistic and sensitive response to prisoners' needs, for as May (1976) tells us:

People who claim to be absolutely convinced that their stand is the only right one are dangerous. Such conviction is the essence not only of dogmatism, but of its more destructive cousin, fanaticism. It blocks off the user from learning new truth, and it is a dead giveaway of unconscious doubt. The person then has to double his or her protests in order to quiet not only the opposition but his or her own unconscious doubts as well. (p.20, italics in original)

The Need for Courage

Those outside of the prison reform movement may find it difficult to accept the notions of creativity and spirituality within the CJS, as they are often linked to negative perceptions and reactions. Prisoners taking part in comedy programmes, financially benefiting from memoirs of their offending or being radicalised rather than rehabilitated by their religious faith are viewed as unacceptable. So it is, perhaps, unsurprising that government attitudes to these are mixed and sometimes unco-ordinated; a point made by Baroness Stern in the debate on Part 7 of the Coroners and Justice Act 2009. She noted that 'writing, painting and making films are all better activities for society than violence, robbery and theft. We should welcome such rehabilitation and not take away the lawfully earned money of the rehabilitated' (Hansard, Lords Debates (29 October 2009), col. 1288). This not only illustrates unease with creative processes in the CJS, but underlines the notion of commodification of offenders' artistic endeavours.
If the government continues to promote mixed messages about the role of artistic/spiritual interventions, for example playing a significant role in the Arts Alliance while at the same time curtailing the promotion of non-traditional interventions with offenders through Prison Service Instructions, then how are practitioners and researchers to proceed in identifying, delivering and assessing the impact of this work? We conclude by arguing that courage is needed to develop and maintain creative responses when working with prisoners, just as methodological variety (Simons and McCormack 2007) is necessary to capture and evaluate those approaches and the 'transformative effects' they can have (Hewitt 2004). It is hoped that effective and meaningful Third Sector partnerships will provide the impetus for such courage and that cultural criminological ideals of having '. . . a healthy disrespect for the rules by which it defines itself' (Ferrell, Hayward and Young 2008, p.161) will enable us to move this corner of criminology forward too. To consider the effect of artistic and spiritual endeavour means to theorise these concepts, the practice and the outcomes, and to consider their relationships with the traditional forms of intervention in prisons aimed at altering offenders' behaviour. Miles (2004, p.109) notes that attempts to theorise the impact of artistic interventions tend to fall under one of five categories: pedagogy, approaches to learning, delivery methods/facilitation, teaching, and attitudinal change. Some of these theoretical frameworks (delivery methods and attitudinal change) have relevance for criminologists, but there are still elements missing. Our proposition is that the impact of creativity may fit within the concepts of desistance and the related areas of social capital (Farrall 2004), for which we need to work outside of the normal, tightly-bound confines of conventional criminological thought (although we accept that for many criminologists there are no tightly-drawn boundaries). Those who are more comfortable with the terminology of evidence-informed policies and experimental methodologies, understanding and developing the links between the empirically-informed psychological treatment programmes and creativity/spirituality, should certainly be part of this debate, with continued support from umbrella organisations such as the Arts Alliance. Artistic and spiritual activity should not just affect participants, but researchers and the prison environment too (Wrench and Clarke 2004). Initially, researchers and practitioners should work together to develop methodologies that provide robust and meaningful data on the impact of interventions that are acceptable to research participants and the policy audiences of the reports. As with all approaches to crime and offenders, there will be continuing debate about the best ways to implement activities, as well as the ethics of providing these types of opportunities funded by central government. The goal in the medium term should not simply be to feel, affectively, that creativity is a human good that should be extended to those who have committed even the most unacceptable of crimes, but to empirically demonstrate that it can change people's behaviour for the better.
Mineral Systems in the Mount Isa Inlier

Northwest Queensland contains several world-class mineral deposits, being one of the world's leading producers of Zn, Pb, Cu and Ag. Rather than focus on mineral deposit models, as has been done in the past, we use the mineral system approach (Barnicoat, 2008), in which the whole system is studied at a variety of scales and across the variety of processes which culminate in the deposition of mineralisation. Seven mineral systems are identified, namely:
1. Shale/siltstone/dolomite-hosted Zn-Pb-Ag systems - Western Fold Belt.
2. Ag-Pb-Zn in high-grade metamorphic terrains - Eastern Fold Belt Province.
3. Structurally-controlled epigenetic iron oxide-Cu-Au - Eastern Fold Belt and Kalkadoon-Ewen Provinces.
4. Structurally-controlled epigenetic Cu±Au mineralising system - Western Fold Belt Province.
5. Phosphate mineralisation in the basal Georgina Basin sequence.
6. U and Rare Earth element (REE) mineralisation.
7. Fe ore - South Nicholson Group.

Introduction

The NW Queensland Mineral and Energy Province (NWQMEP) can be regarded as the premier Zn-Pb-Cu region in the world (Geological Survey of Queensland, 2011). The NWQMEP evolved as a Paleoproterozoic-Mesoproterozoic province of the North Australian Craton (NAC), from c. 1900-1500 Ma, in a largely far-field extensional back-arc to intracontinental setting over-riding the NE-dipping convergent margin of the Gawler Craton to the far S (Betts et al., 2003). Three major stacked superbasins developed on c. 1900-1860 Ma crystalline basement - the Leichhardt Superbasin (1800-1750 Ma), the Calvert Superbasin (1740-1670 Ma), and the Isa Superbasin (1670-1595 Ma) - containing extensional and sag-phase sedimentary packages with some volcanics and magmatic rocks, separated by unconformities. From c. 1680 Ma, an E-facing rifted continental margin may also have developed along the eastern margins of the NAC. Basin development was largely terminated by compressional tectonism of NNW-SSE and E-W orientations from 1600-1500 Ma, accompanied by major felsic magmatism in the E. These events - the Isan Orogeny - produced the current geological setting of the Mt Isa Inlier.

Shale/siltstone/dolomite-hosted Zn-Pb-Ag systems: Western Fold Belt

Tectonic/Geological Environment

The deposits typically occur in an intracontinental rift to passive margin environment. There is strong basement control on basin architecture and the orientation of faults active at the time of basin formation. The rift environment provides a source for the fluids and fluid pathways; deposition of the orebodies typically occurs late in the extensional cycle and may be related to either sedimentation or inversion of the basins. All significant occurrences are hosted by 2-8 km thick successions of the Isa Superbasin, ranging in age from c. 1660 Ma (Dugald River, Lady Loretta) to 1590 Ma (Century) (Queensland Department of Mines and Energy et al., 2000). These successions are interpreted to represent the products of sedimentation related to thermal subsidence following extension and rifting (Queensland Department of Mines and Energy et al., 2000). Within the sag successions, Zn-Pb-Ag deposits are typically localised in districts or 'sub-basins' of 100-200 km² in area that are characterised by:
1. Underlying clastic and silty units in the Calvert Superbasin that commonly show a network of growth faults and subtle half-graben structures upon which the sag-phase successions were deposited, with local unconformity.
2. Sag-phase basins that form large accumulations of basal clastics overlain by thick and extensive packages of siltstone, dolomite and dolomitic siltstone.
3. Smaller (third-order) sub-basins within the broader sag-phase basins, characterised by abundant pyritic and carbonaceous shale and dolomitic siltstone, which are the immediate host rocks to Pb-Zn mineralisation (Huston et al., 2006).

Source

The source of the metals in most sediment-hosted Zn-Pb-Ag deposits has usually been attributed to clastic rocks in upper crustal sequences which underlie the deposits (for example, Zn from shale and basalt; Pb from arkose, grit, felsic volcanics and granites; Derrick, 1996).

Fluid Pathways

The pre-existing Leichhardt and Calvert Superbasins provided permeable aquifers and fluid reservoirs for many of the metals that are hosted by the Isa Superbasin or deposited during the Isan Orogeny (Polito et al., 2006).

Depositional Mechanisms

Depositional mechanisms for Zn-Pb-Ag mineralisation are varied. Extension and thermal input during early superbasin development did not result in the formation of mineral deposits; rather, these early basins acted as the storage compartments for fluids drawn down into the system (Murphy et al., 2011;Polito et al., 2006). The main processes contributing to Zn-Pb-Ag mineral deposition were fluid cooling, dissolution of host rock carbonate (with consequent pH increases) and thermochemical sulfate reduction due to the interaction of oxidised Zn-Pb-Ag-transporting saline fluids with organic matter, as well as mingling with migrated but locally sourced hydrocarbons; inorganically precipitated carbon was also produced (Hinman et al., 1994;Dixon and Davidson, 1996;Hinman, 1998;Broadbent et al., 1998). A simplified reaction sketch of this chemistry is given at the end of this subsection. These processes emphasise the significance of organic-rich and calcareous successions as potential hosts and reductants. Deposits such as Century and Mount Isa exhibit paragenetic stages from early, layer-parallel sphalerite and sphalerite breccias with minor galena and pyrrhotite to vein- and breccia-hosted galena with sphalerite, pyrrhotite and euhedral pyrite. A paragenetic evolution from early sphalerite to late galena with euhedral pyrite is consistent with a thermally prograding event, an increasing extent of thermochemical sulfate reduction and saturation of hydrothermal pyrite (Murphy et al., 2011). While layer-parallel mineralisation is widespread in the deposits, coarser-grained layers and veins of galena and lesser sphalerite are also developed as a consequence of proximity to possible feeder zones along bounding faults, and/or of recrystallisation and replacement of earlier sulfides by later generations of sulfides formed during metamorphism, especially at Mt Isa and George Fisher. Carbonate-hosted replacement deposits such as Kamarga may have formed by neutralisation of hot acid fluids (Jones et al., 1999). Although synsedimentary to early diagenetic mineralising processes have generally been favoured for the formation of these deposits (Waltho and Andrews, 1993;Hinman et al., 1994;Dixon and Davidson, 1996;McGoldrick and Large, 1998;Betts and Lister, 2002), there is a growing body of evidence for a late diagenetic (reminiscent of Mississippi Valley-type deposits) to syntectonic replacement mechanism as an alternative explanation for the formation of the stratabound Zn-Pb-Ag orebodies (Xu, 1996;Broadbent et al., 1998;Rohrlach et al., 1998;Jones et al., 1999). The timing of the mineralising Pb-Zn systems has been fully discussed by Large et al. (2005) and Huston et al. (2006).
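The thermochemical sulfate reduction and metal-precipitation chemistry invoked above can be summarised by generic, simplified reactions (an illustrative sketch only, not drawn from the cited studies; organic matter is idealised here as CH2O):

$$\mathrm{SO_4^{2-} + 2\,CH_2O \rightarrow H_2S + 2\,HCO_3^{-}}$$

$$\mathrm{Zn^{2+} + H_2S \rightarrow ZnS_{(s)} + 2\,H^{+}}, \qquad \mathrm{Pb^{2+} + H_2S \rightarrow PbS_{(s)} + 2\,H^{+}}$$

The acid generated by sulfide precipitation can in turn be consumed by dissolution of host-rock dolomite, $\mathrm{CaMg(CO_3)_2 + 4\,H^{+} \rightarrow Ca^{2+} + Mg^{2+} + 2\,CO_2 + 2\,H_2O}$, which is consistent with the coupled carbonate-dissolution and pH-increase mechanism described above.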
A further alternative is that syngenetic to early diagenetic mineralisation was remobilised and enriched, in a manner similar to models generally invoked for Broken Hill-style deposits. At Dugald River, the deposit occurs in metamorphosed carbonaceous shale with a substantial part of the resource resulting from significant structural upgrading, perhaps during the Isan Orogeny.

Ag-Pb-Zn in high-grade metamorphic terrains: Eastern Fold Belt Province

Deposits comprise massive to semi-massive galena, sphalerite, pyrrhotite and pyrite and/or magnetite layers or stacked lenses hosted by thin-bedded calcareous paragneiss and migmatitic quartzofeldspathic gneiss, considered to be metamorphosed immature siliciclastic sediments. Amphibolite, porphyry and pegmatite lenses occur within the gneissic terrain. The complex gangue mineralogy includes calc-silicate mineral assemblages containing garnet (Mn-enriched)-fluorite-hedenbergite-pyroxmangite-quartz-magnetite-fayalite-pyrrhotite-gahnite (Walters et al., 2002). These stratabound deposits are typically thin but laterally extensive, and were deformed and metamorphosed together with their host rocks (Hoy, 1996). Deposits in the Mount Isa Inlier include Cannington, Pegmont, and Altia (Figure 1). These deposits have similarities to the world-class Broken Hill mineralisation >1,000 km south (Gibson et al., 2012) and are commonly referred to as 'Broken Hill Type' (or BHT) deposits. They are important sources of Pb, Zn and Ag, with Cannington being the world's largest and lowest-cost single-mine producer of Ag and Pb and a significant producer of Zn.

Tectonic/geological environment

Broken Hill-type deposits appear to be restricted to the eastern margin of Proterozoic Australia (Fraser et al., 2007), where mineralisation formed in feldspathic clastic rocks that were deposited in a deep water turbiditic basin. The region is modelled in extension as a thin, brittle upper crust above a thermally weakened lithosphere, where connectivity between the two vertically stacked domains appears to be largely along steep crustal-scale faults (Murphy et al., 2011). While siliciclastic sedimentary packages are dominant, they also contain rift-related basic layered sills and exhalative Fe formations enriched in Mn and P (Hatton and Davidson, 2004).

Depositional Mechanisms

Two models for the generation of BHT deposits are:
1. The modified synsedimentary/syndiagenetic model (Boden, 1996;Bailey, 1998), with initial introduction and zoning of base metal and Ag mineralisation into Zn-dominant and Pb-Ag-dominant horizons. This pre-metamorphic zoning could have been developed by processes associated with a volcanogenic sulfide system or a basin-dewatering diagenetic system, with mineralisation controlled by primary porosity or matrix replacement, associated with the emplacement into the sequence of a series of tholeiitic basic sills (amphibolite). Introduction of the mineralisation is followed by regional deformation and metamorphism. During a post-metamorphic metasomatic event, initial metasomatism of mineralised rocks resulted in anhydrous alteration characterised by hedenbergite-garnet-quartz and the deposition of very minor pyrrhotite and rare sphalerite. This was followed by high- and low-temperature hydrous stages.
2. The skarn model, which comprises an original metasedimentary package (consisting of an Fe-Mn-(Ca)-rich fraction) and an outer Fe- and Mn-rich peraluminous metasediment derived from quartz-pelite mixtures with local feldspathic fractions; regional deformation with peak metamorphism reaching upper amphibolite facies; peraluminous anhydrous Fe-rich alteration (quartz-sillimanite-potassium feldspar-biotite-garnet-graphite); anhydrous Ca-rich alteration (quartz-apatite-pyroxmangite-hedenbergite-fayalite-hornblende-garnet); hydrous Fe-Ca-K alteration (hornblende-biotite-pyrosmalite-dannemorite); and a mineralising phase with sphalerite, galena, pyrrhotite and chalcopyrite.

A large coherent halo of stratabound almandine (pink garnet)-quartz-apatite-biotite-graphite alteration occurs as an envelope around the mineralised package at Cannington. This alteration has penetrative tectonic fabrics and is overprinted by later alteration. Quartz-garnet-pyroxene-pyroxenoid alteration affects partial melt segregations that occurred during peak metamorphism, suggesting that these skarn-like alteration assemblages developed under fairly deep-seated (ductile) conditions at a late stage of the Isan Orogeny. Peraluminous Fe-Mn-rich metasediments form compositionally banded K-feldspar-sillimanite-quartz-biotite-garnet assemblages. Goethite-quartz assemblages are also diagnostic of this mineralisation. Recent studies and age dating continue to favour a synsedimentary/syndiagenetic origin for BHT deposits (Huston et al., 2006), formed at or just below the sea floor. Despite the extensive polyphase folding and high-grade metamorphism evident in BHT deposits, feeder zones and a replacement/exhalative footwall system have been recognised at Broken Hill (Groves et al., 2008).

Structurally-controlled epigenetic iron oxide-Cu-Au systems (IOCG): Eastern Fold Belt and Kalkadoon-Ewen Provinces

Deposit styles within these systems comprise epigenetic mineralisation as hydrothermal replacements, veins and breccias. Two major but contrasting groupings are identified:
1. An economically significant grouping of larger deposits, commonly referred to as IOCG (iron oxide-Cu-Au) deposits, which include the world-class Ernest Henry deposit (Figure 5), and the Osborne, Mt Elliott, Roseby, Eloise, Rocklands, Mt Dore and Starra deposits. The recent discovery of the Merlin Mo-Rh deposit adds to the economic significance of this grouping.
2. Smaller deposits (i.e., unlikely to ever achieve production status in the foreseeable future), which are voluminous and widespread throughout the Kalkadoon-Ewen basement province and in the Eastern Fold Belt. They form as narrow 1-5 m quartz vein-type deposits in N-, NW- and NE-trending shear zones, and are closely associated with dilatant structures along the margins of dolerite and amphibolite bodies which occupy the same structures. Host rocks include older granite and volcanics, and metasedimentary cover rocks. Many of these deposits formed the basis of a historic small-scale ('gouger') Cu mining industry during the early to mid-20th Century.

Tectonic/geological environment

The grouping of smaller deposits throughout the basement and eastern fold belts is located in largely intracontinental and continental margin environments within the North Australian Craton. Most of the larger IOCG deposits formed in an E- or SE-facing passive continental margin dominated by shallow shelf to deeper water turbiditic sequences. Within the shelf and slope, local rifting promoted the stacking and juxtaposition of chemically reactive lithologies, including ironstones, carbonaceous siltstones, volcaniclastics, carbonates and mafic sills and dykes (Davidson and Large, 1998). These sequences are 1760-1650 Ma, equivalent to the Calvert and Isa Superbasins. Basin inversion from 1600-1500 Ma resulted in deformation, crustal thickening and the intrusion of voluminous, mainly felsic, magmas to crustal depths of 5-10 km (Mark et al., 2006) - the Williams and Naraku Batholiths. Duncan et al. (2011) suggest that metal-rich reservoirs formed at the SE end of the subduction zone, along the southern margin of the NAC, to be tapped by metamorphic and magmatic events during the Isan Orogeny. Most IOCG deposits are related spatially to the Williams/Naraku batholiths (1545-1490 Ma), but mineralising fluids could have been metamorphic (e.g., Osborne, 1600 Ma) or magmatic in origin (e.g., Ernest Henry main-stage Cu-Au, 1525 Ma; Duncan et al., 2011). Most IOCG deposits formed in the time range 1550-1500 Ma, as part of the Isan Orogeny.

Depositional Mechanisms

The major mineralising event in this system occurred within the Isan Orogeny (1600-1500 Ma), from peak metamorphism (1600-1550 Ma) through to major granite intrusion (the Williams and Naraku Batholiths) from 1545-1490 Ma. Most Cu-Au mineralisation is preceded by region-wide Na-Ca alteration manifested as albite-diopside-calcite-actinolite assemblages, which are overprinted by Cu-Au±Fe. The region-wide small-scale Cu deposits are generally unrelated to granite plutons, but form as narrow (1-5 m) quartz-calcite-chlorite filled shears in a diverse range of host rock ages and compositions. They contain little or no magnetite, and show a spatial and possibly genetic relationship to basic dykes and sills; in addition, mineralising fluids were likely to be saline because of metamorphism of extensive scapolitic (?evaporative) metasedimentary cover sequences, with likely contributions from regional deep-crustal sources. The IOCG deposits by contrast contain abundant magnetite (and hematite), and formed in larger structures associated with dilatancy, rheology contrasts, brecciation and replacement of brittle host rocks (e.g., 1740 Ma intermediate volcanics at Ernest Henry). Ore fluids were high temperature (300-500°C), highly saline (26-70 wt% NaCl) and oxidised (Mark et al., 2006). Magnetite formed in some deposits from mineralising fluids (e.g., Ernest Henry), while in others fluids replaced existing host-rock ironstones. Metals were deposited through pH changes due to wall-rock interaction, redox changes and reduction of some fluids by carbonaceous rocks (e.g., Mt Dore). Two IOCG events are recognised:
1. Ironstone-hosted deposits such as Osborne and Starra formed c. 1600-1565 Ma (Perkins and Wyborn, 1998;Gauthier et al., 2001;Duncan et al., 2009;Baker et al., 2010), and possibly as early as 1680 Ma (Oliver and Rubenach, 2009).
2. Breccia- and shear-hosted deposits such as Mt Elliott, Ernest Henry, Lady Ella and Mt Dore formed post-peak metamorphism and synchronous with the period of granite emplacement (1555-1485 Ma) (Perkins and Wyborn, 1998;Wang and Williams, 2001;Duncan et al., 2009;Baker et al., 2010).

Older mineralising events in this system are sparse. The Tick Hill Au-only deposit (511,000 oz mined at 22 g/t Au) formed in a high-strain domain possibly related to the Wonga-event extension from 1750-1730 Ma, and could be related to roof zones of Wonga-age granites (Forrestal et al., 1998).
Structurally-controlled epigenetic Cu±Au mineralising system: Western Fold Belt Province

At Mount Isa, the ore-forming system involves similar chemical processes to other sediment-hosted Cu systems, but represents the relatively high-temperature end of the spectrum of syn-diagenetic to low-grade metamorphic ore-forming environments (Queensland Department of Mines and Energy et al., 2000). Historically, theories on the genesis of the Mount Isa Cu orebodies have ranged from igneous telemagmatic replacement to syngenetic deposition followed by remobilisation. A variation on this model is the progressive build-up of the Cu ores as a feeder system to syngenetic Pb-Zn. Today, the deposit is almost universally regarded as a replacement late in the deformation history of the Isan Orogeny (Perkins, 1990).

Tectonic/geological environment

There is strong fault control on deposit location at a range of scales. The regional faults are numerically modelled as fluid pathways which, in extension, draw down fluids; in compression, the convective cells break down and fluids are expelled upwards, typically ponding in permeable hanging wall positions. Discrete element modelling at the district to deposit scales indicates that stress anomalies associated with a particular compression direction during D4 deformation played a critical role in the localisation of Cu deposits (Murphy et al., 2011). Fault bends, jogs and intersections are regarded as key localisation features. Derrick (2008) has shown that the Isa and Mammoth Cu deposits are controlled by an array of earlier growth faults developed during basin extension from 1770-1700 Ma; reactivation and inversion of these normal faults in the Isan Orogeny at 1500 Ma produced favourable sites of folded faults and accompanying dilatancy and jogs, e.g., along the folded Paroo Fault which forms the immediate footwall to Isa Cu mineralisation.

Source

The Mount Isa Cu mineralisation occurs within the Urquhart Shale of the Mount Isa Group. The deposit comprises crosscutting chalcopyrite within a zoned siliceous to dolomitic alteration halo ('silica-dolomite'). Within the Isa mine, the mineralisation lies above a shallow basement fault separating the Mount Isa Group from the Eastern Creek Volcanics (Perkins, 1990). A common interpretation is that the Cu was sourced by leaching from the Eastern Creek Volcanics (e.g., Smith and Walker, 1971) and that proximity to this unit is therefore a prerequisite for Cu ore formation. Traces of chalcopyrite and either bornite or pyrite locally occur in veinlets, mainly in intensely hematitised metasediments within the Eastern Creek Volcanics (Heinrich et al., 1995).

Fluid Pathways

For Mount Isa-style deposits, a protracted development of an alteration system occurred during the Isan Orogeny, beginning with early K-feldspar and mica alteration, followed by the formation of fractures and dolomite veins, and ending with late massive proximal dolomitisation and silicification. The phase of dolomitic alteration in the host rocks was associated with epidote-sphene and chlorite-albite alteration in the Eastern Creek Volcanics (Heinrich et al., 1995). As the ore fluids moved away from their source they were focussed along brittle/ductile shear zones, interacting to varying degrees with a range of rock types, partly modifying their character. Limited silica-dolomite alteration is also evident in some of the smaller deposits (e.g., Mt Kelly, Lady Annie), in crack-seal breccia and fibrous extensional veins (van Dijk, 1991).
Many of the smaller deposits are hosted within a 1670-1655 Ma stratigraphic triplet comprising basal carbonaceous siltstone, massive ?algal chert and dolomite. Late mineralising Cu-rich fluids intersecting this zone in fault and shear settings may deposit Cu in dilatant sites along competency boundaries of the chert, through pH change induced by the dolomites and reduction of the fluid by carbonaceous matter.

Depositional Mechanisms

It is postulated (Heinrich et al., 1993;Matthai et al., 2004;Wilde et al., 2006) that ore deposition was due to mixing of an oxidised brine that circulated within metabasalts with a sulfur-rich fluid from the overlying Mount Isa Group metasedimentary rocks or a younger Mesoproterozoic basin at the site of deposition. Copper deposition was primarily a function of wall-rock reaction; mixing is a necessary consequence of the evolving permeability and porosity regime rather than an essential element of ore deposition. Cu ore precipitation involved one or a combination of depositional mechanisms, including: cooling; wall-rock reactions (reduction by carbonaceous matter, replacement of quartz and dolomite), in which dissolution of carbonate minerals, feldspar and micas buffered pH at somewhat neutral values, optimising Cu extraction (Wilde et al., 2006); and fluid mixing between magmatic fluids and one or more fluids of a different origin (mantle/metamorphic/basinal evaporite/meteoric) (Kendrick et al., 2006).

Phosphate Mineralisation in the basal Georgina Basin sequence

Phosphorite deposits in the Georgina Basin have been described by de Keyser and Cook (1972), Southgate (1988), Southgate and Shergold (1991) and Draper (1996). The deposits in the Mount Isa region are in the Cambrian-age Beetle Creek Formation, Border Waterhole Formation and Thorntonia Limestone. They consist of beds of consolidated pelletal phosphorites interbedded with chert, carbonate, shale, siltstone and volcanic materials. The phosphorite beds average 11 m (but range up to 36 m) in thickness and consist of dense pellets of apatite in a cherty and carbonate matrix. The phosphorites range from dense pelletal rocks consisting almost exclusively of francolite (one of the collophane group minerals) to siliceous and calcareous phosphorite, phosphatic chert and phosphatic siltstone, and grade into fossiliferous limestone. Chert (silica) and clay are the main diluents, and the deposits have comparatively low levels of heavy metals (for example, <5 ppm Cd). The phosphorites comprise apatite + fluorapatite + francolite + dolomite + calcite + quartz + clays (montmorillonite or illite) ± halite ± gypsum ± Fe oxides ± siderite ± pyrite ± carnotite (Queensland Department of Mines and Energy et al., 2000).

Tectonic/geological environment

Phosphate deposits occur in an intracontinental or shallow continental margin setting and require predominantly carbonate sedimentation (Draper, 1996;Southgate and Shergold, 1991). General criteria for phosphate deposition are as follows:
1. A low paleolatitude.
2. A broad shallow downwarp adjacent to a seaway.
3. High productivity in the vicinity.
4. Minimal terrigenous sedimentation in a shallow marine environment.
5. A major transgression.
6. A trap such as a bay or carbonate bank.

Early Cambrian NW-SE rifting initiated widespread sedimentation in the Georgina Basin, and the phosphatic sediments developed in shallow-water basins and shelves adjacent to the Proterozoic land mass.
The Duchess-Phosphate Hill deposit formed in the S, in the Burke River embayment, while other deposits (e.g., Lady Annie, Lady Jane, Thorntonia, Phantom Hills) formed along the W and NW margins of the Proterozoic land mass.

Depositional Mechanisms

Phosphate deposits and occurrences are present in two predominantly carbonate sequences. In each of these sequences, the 'retrogradational parasequence sets of the transgressive systems tract' (Southgate and Shergold, 1991) comprise a repeating suite of phosphorite, phosphatic limestone and organic-rich shales. There is a subaerial exposure surface between the two sequences. The phosphate-bearing facies were controlled by relative sea level, paleogeography and paleotectonics, and there is evidence of structural compartmentalisation of phosphatic facies. Recently, a blanket of Y+REE-rich material has been found overlying phosphate mineralisation in the Georgina Basin in western Queensland. As well as Y, the deposit also contains neodymium (Nd) and dysprosium (Dy) (Alston, 2011). The origin of the REE-enriched blanket is not known.

Uranium and Rare Earth element (REE) mineralisation

Uranium mineralisation is known from several different settings in the Mount Isa Inlier, described below.

Unconformity-related mineralisation

The unconformity-related mineralisation at Westmoreland (Hills and Thakur, 1975;Rheinberger et al., 1998;Wall, 2006;Polito et al., 2005) is spatially related to either: NE-trending structures with proven or suspected tholeiitic dyke filling; NE- and NW-trending structures; volcanic sills; E-trending structures with volcanic dyke filling; quartz breccias of NW-trending regional faults; and/or proximity to the contact between the uppermost unit of the Westmoreland Conglomerate and the overlying Seigal Volcanics. Faults at the deposit scale may be related to larger strike-slip fracture zones extending for tens of kilometres. Mineralised zones do not show any signs of pervasive deformation but are displaced by later faulting. Mineralisation in the principal deposits is present as horizontal, vertical or hybrid styles. Horizontal-style mineralisation is relatively extensive and sheet-like, up to 20 m thick, within the uppermost portion of the Westmoreland Conglomerate and close to the Seigal Volcanics contact. This style of mineralisation flanks the NE-trending Redtree Dyke and is best developed immediately adjacent to, and on one side of, the dyke only. Vertical-style mineralisation forms subvertical, relatively irregular lenses up to 30 m thick that are hosted by sandstone of the Westmoreland Conglomerate, although some mineralisation extends into the dolerite dykes. These lenses are adjacent to the Redtree Dyke and their geometry closely mimics that of the dyke-joint system. Hybrid mineralisation is developed in the overlap zone between the horizontal and vertical styles of mineralisation and is, in detail, a combination of both styles. The overlap zone can be up to 50 m thick (Queensland Department of Mines and Energy et al., 2000).

Shear-hosted mineralisation

Lenticular to tabular, stratabound uraniferous beds and zones are hosted by metamorphosed basic volcanics and pelitic and psammitic sediments of the Eastern Creek Volcanics in the Leichhardt River Fault Trough, in the Calton Hills-Paroo Creek and Spear Creek-Mica Creek areas. Secondary U mineralisation is generally not readily discernible at the surface of the known deposits, which were located with radioactivity detectors.
Most deposits are uneconomic to subeconomic, but some, such as Valhalla, Skal, Anderson's Lode (Counter) and Warwei-Watta, represent significant U resources.

Skarn-hosted mineralisation

The Mary Kathleen U deposit lies S of the D3, NE-trending Cameron Fault, and is sited in the axial surface of a tight, slightly asymmetrical syncline (the Mary Kathleen Syncline) that can be traced southward for >5 km. The western limb of this structure is cut off by the Mary Kathleen Shear, and the eastern limb by the 1737±15 Ma Burstall Granite. Slightly younger rhyolite dykes W of the granite have similar compositions and an identical radiometric age (Solomon et al., 1994). The Burstall Granite and associated rhyolite dykes also have elevated U contents (7 and 12 ppm U, respectively). The orebody is hosted by a reduced (magnetite-poor) calcic exoskarn formed by replacement of calcareous rocks of the Corella Formation. The ore comprises fine-grained uraninite disseminated through allanite-apatite enriched rocks that cross-cut the garnet-diopside skarn (Queensland Department of Mines and Energy et al., 2000). Similar skarn-hosted REE-Cu-Au mineralisation occurs to the S, at the Elaine Dorothy prospect.

Unconformity-related mineralisation

Pitchblende is the main ore mineral and occurs in both the Westmoreland Conglomerate and altered basic dyke rocks. In the sandstones, it occurs interstitial to detrital grains, along fractures, and in veins up to 10 mm thick. It is present as massive, structureless or rarely euhedral grains, as colloform masses and as thin films of sooty pitchblende. Pitchblende in the dyke rocks occurs as fine aggregates, as thin films and as veins. Secondary U minerals occur as fine disseminations and as fillings of pore spaces. The most abundant secondary U minerals are torbernite, metatorbernite and carnotite. The upper, weathered parts of mineralised systems contain uraninite, torbernite and carnotite, with traces of autunite, bassetite, ningyoite and coffinite. The deeper and unweathered portions of the deposits contain uraninite, autunite, ningyoite, bassetite and coffinite, and minor brannerite. Other ore minerals include pyrite, marcasite, chalcopyrite, galena, sphalerite, Co-Ni sulfarsenides, bismuth, bismuthinite, bornite, chalcocite, digenite, covellite and Au. Thorium is present in alteration products of detrital Th-bearing minerals as thorogummite and florencite. Hematite is abundantly present as the specular type or as a finely disseminated earthy variety and is intimately associated with the primary mineralisation (Queensland Department of Mines and Energy et al., 2000).

Shear-hosted mineralisation

Shear-related deposits are hosted in metabasalts and interbedded metasediments within N-trending to E-W structures in the Eastern Creek Volcanics, and in steep N-S trending mylonite zones in metabasalt and metasediments at Valhalla.

Skarn-hosted mineralisation

The orebody at Mary Kathleen consists of elongate lensoidal ore shoots that are up to 50 m thick and roughly parallel the margins of a broader garnet-mineralised zone. The relationship of the ore shoots to stratigraphy is obscured by garnetisation in the upper part of the orebody, but the ore lenses are broadly stratiform at depth. The ore is largely a replacive breccia with clasts of early skarn breccia in an allanite-garnet ore matrix.
The spatial relationships between ore, the Mary Kathleen Shear and the axial trace of the Mary Kathleen Syncline indicate that ore formation postdated major folding and was synchronous with shearing under amphibolite facies conditions, consistent with a syn-regional metamorphic age for ore genesis. The primary structural control on ore formation was the development of ore in and around tensile veins and/or secondary shears in a competent skarn host, along a major boundary between skarn-dominated rocks to the E and regionally metamorphosed, 'unskarned' metasediments and Wonga Granite to the W (Oliver et al., 1986). Uraninite-bearing ore at Mary Kathleen has a U-Pb age of 1550-1500 Ma (late D2-D3), compared with 1737±15 Ma for the Burstall Granite, 1700±60 Ma for banded skarn and 1620-1500 Ma for the main regional metamorphism and deformation.

Depositional Mechanisms

U-REE enrichment is related to reaction of highly saline and oxidised fluids (Isan Orogeny) with earlier, slightly reduced (magnetite-poor) skarn (Oliver et al., 1986).

Iron Ore: South Nicholson Group

Oolitic Fe formations occur in the Mesoproterozoic South Nicholson Group of the South Nicholson Basin in the Constance Range area. Up to 10 (generally <4) lenticular, Fe-rich beds occur in the 45-180 m thick Train Range Ironstone Member, some 275-520 m above the base of the Mullera Formation. The Train Range Ironstone Member also contains thinly bedded, alternating dark grey shales, siltstones and sandstones. One to four ironstone beds are present at any one place, and the potentially economic ore occurs in the "Main Ironstone Member" - the lowest Fe-bearing unit of significant thickness (Harms, 1965).

Tectonic/geological environment

Limited observations of the Train Range Ironstone Member suggest that much of the ironstone represents deposition in the upper parts of shallowing-up cycles, i.e., in prograding parasequences during sea level highstands. The presence of both chamositic and sideritic ooidal ironstones indicates growth of Fe minerals on siliceous nuclei in shelfal environments, perhaps on offshore or nearshore bars. The existence of sandstones with rip-up clasts of ironstone as an intraformational conglomerate suggests that erosion and redeposition of pre-existing layers occurred, indicating either a renewed transgressive phase or local development of channels within an overall prograding succession. Sediment starvation at times of maximum flooding also generates Fe-rich deposits (Burkhalter, 1995), and Carter and Zimmerman (1960) state (p. 13) that "some of the smaller lenses appear to be concretionary". Although they speculate that these could result from later weathering, it is also possible that they represent sediment-starved horizons, i.e., maximum flooding surfaces within the basinal sediments (Sweet, 2012).

Mineralisation

Outcropping ironstones are a variable mixture of ochrous red hematite, finely crystalline blue-black hematite, limonite, quartz grains, quartz cement, shale and clay minerals, and rare relict siderite. The ironstones vary in appearance from oolitic forms to a sandstone with a hematite matrix, and have been derived from primary ironstone by surface weathering. Grades range from 20-62% Fe, depending on the silica content of the parent rock (Harms, 1965). Oxidised ironstone extends to 12-30 m vertically. The transition zone appears to have some Fe enrichment, and the near-surface zone has probably been enriched in silica.
Below the water table, the ironstones contain oolites of ochrous or finely crystalline hematite, siderite and/or chamosite, and silica grains in a matrix of siderite, hematite, minor microcrystalline quartz and carbon. Oolites range from 0.2-3 mm in diameter and successive shells may consist of different Fe minerals. Veins of quartz-pyrite, siderite-pyrite and calcite cut the ironstones. Disseminated syngenetic pyrite occurs along bedding planes, especially in carbonaceous shales associated with the ironstone beds, and in siderite-rich bands. Siderite partially or completely replaces some or all of the other Fe minerals. It also replaces quartz grains and appears to have formed late in deposition or during diagenesis. The highest grade beds are oolitic and contain 50-55% Fe at the surface. Lower grade beds contain <20-25% Fe and are siliceous. Fifteen individual deposits have been investigated and resources were calculated for three deposits, which contain a total resource of 368 Mt @ 45.4% Fe and 9.1% SiO2, including 40 Mt of oxidised ore @ 57.0% Fe and 10.0% SiO2 (Queensland Department of Mines and Energy et al., 2000).
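For orientation, the contained iron implied by the resource figures quoted above is a one-line calculation (contained metal = tonnage × grade). The snippet below simply evaluates the published numbers; it adds no new data:

```python
# Contained Fe implied by the Constance Range resource figures quoted above.
resources = {
    "total resource (3 deposits)": (368.0, 0.454),  # Mt, 45.4% Fe
    "oxidised ore":                (40.0,  0.570),  # Mt, 57.0% Fe
}
for name, (tonnage_mt, grade) in resources.items():
    print(f"{name}: {tonnage_mt * grade:.1f} Mt contained Fe")
```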
Non-Fermi liquid behavior in U and Ce intermetallics

In this paper we review the current experimental and theoretical situation of the description of non-Fermi liquid (NFL) behavior in U and Ce intermetallics. We focus on the magnetic and thermodynamic properties. We also discuss a recent theoretical interpretation of this behavior in terms of Griffiths-McCoy singularities close to the magnetic quantum critical point (QCP). We show how an effective Hamiltonian which contains both the RKKY coupling and the Kondo interaction can be written after high energy degrees of freedom away from the Fermi surface are traced out. We argue that dissipation due to particle-hole excitations close to the Fermi surface is a relevant perturbation at low temperatures, and we estimate the crossover temperature T^* above which power law behavior in the specific heat and magnetic response occurs ($C_V/T \sim \chi(T) \propto T^{-1+\lambda}$ with $\lambda<1$). Below T^* a new regime dominated by dissipation is found and deviations from power law behavior are expected.

I. INTRODUCTION

The basis for the study of metals was set by Landau almost 50 years ago in his studies of He-3. During all these years the Landau theory has been a paradigm used to explain the experimental behavior and electronic properties of quantum Fermi liquids [1,2]. Initially the theory appeared as a phenomenological framework with a few parameters fixed by experiments. The presence of unknown parameters reflected, at that time, the lack of a microscopic theory. However, it was an extraordinary and necessary first step. Landau himself also established the route for the microscopic explanation of the validity of the theory. The Landau theory became the main tool for the study of the effects of correlations in electronic systems, and its foundation was eventually established on microscopic grounds using field theoretic methods [3,4]. The theory is supposed to hold at temperatures much lower than the Fermi temperature of the system. It is based mostly on the assumption that the interaction among electrons is short ranged (due to screening) and that a perturbative expansion in the interaction converges. Thus there is a one-to-one correspondence between the interacting system of electrons and a weakly interacting system of quasiparticles. Moreover, the physics of the quasiparticles is completely determined by the Fermi surface. As consequences of Landau's theory, the thermodynamic and response functions of the electron fluid are smooth functions of the temperature. One has, for instance, a temperature independent Pauli susceptibility, χ(T) ∝ constant, a temperature independent specific heat coefficient, γ(T) = C_V/T ∝ constant, and a Korringa law for the NMR relaxation rate, 1/(T_1 T) ∝ constant. Furthermore, at low temperatures one expects the electronic resistivity to behave like ρ(T) = ρ_0 + A T^2, where ρ_0 is the resistivity due to impurities and A is a coefficient which comes from three different sources: electron-electron Umklapp processes [5], electron-electron interactions mediated by phonons [6], and the inelastic scattering of electrons by impurities [7]. These predictions have been confirmed in a wide class of metals and are considered the trademark of Fermi liquid behavior. Violations of Fermi liquid behavior have been expected for a long time in the context of one-dimensional conductors [8] due to strong restrictions in phase space for electron-electron scattering.
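For later reference, the Fermi liquid trademarks just listed can be set side by side with the anomalous power laws reviewed in the rest of this paper; both sets of relations are restatements of expressions appearing in the text, not new results:

```latex
\begin{aligned}
\text{Fermi liquid:}\quad & \chi(T) \propto \text{const},\quad
\gamma(T) = C_V/T \propto \text{const},\quad
1/(T_1 T) \propto \text{const},\quad
\rho(T) = \rho_0 + A T^{2},\\
\text{non-Fermi liquid:}\quad & \chi(T) \sim \gamma(T) \propto T^{-1+\lambda}
\ \ (\lambda < 1),\quad
\rho(T) = \rho_0 + A T^{\alpha}\ \ (\alpha < 2).
\end{aligned}
```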
It turns out, however, that it is very hard to experimentally observe NFL behavior in one-dimensional systems. In the case of organic conductors [9], which are considered the prototype of one-dimensional metals, there is always a crossover to higher dimensional behavior (that is, Fermi liquid behavior) due to coupling between chains at low temperatures. The only clear observation of NFL behavior in low dimensional systems appears in the very special case of the edge states of quantum Hall bars [10], where NFL behavior is due to the Landau level degeneracy. Indeed, there is a strong controversy about the possibility of NFL behavior in dimensions higher than one. The subject was raised by Anderson in the context of high temperature superconductors [11]. Although the possibility of NFL behavior in 2 dimensions has not been discarded, there are today strong arguments against it [12-16]. In 3 dimensions Landau's Fermi liquid theory is assumed to be the correct starting point. We stress that even in the presence of disorder Fermi liquid theory should be valid in 3D (at least when the disorder is weak enough to be treated in perturbation theory) [17]. Thus, it is indeed very surprising that for such a broad class of U and Ce alloys (which clearly show three-dimensional behavior) deviations from Landau's theory are so abundant. Actually, there are nowadays so many examples of alloys presenting NFL behavior that the discovery of a new compound which exhibits such behavior is not a surprise. We organize this paper as follows: in the next section we give a brief overview of the theoretical and experimental situation on the problem of NFL behavior in U and Ce intermetallics. We apologize in advance to any whose work we have unintentionally left out. We concentrate on the thermodynamic and magnetic response in these systems and leave the important problem of transport for a later publication. In Section III we discuss the problem of Griffiths-McCoy singularities in insulating magnets; in Section IV we discuss the Kondo lattice problem and show how the RKKY interaction and the Kondo effect appear at the Hamiltonian level when high energy degrees of freedom are eliminated from the Hamiltonian; in Section V we discuss the differences between the insulating case and the metallic case for the formation of Griffiths-McCoy singularities; finally, Section VI contains our conclusions.

II. OVERVIEW OF THE NFL BEHAVIOR IN U AND CE INTERMETALLICS

The systems we are considering in this paper are metallic alloys of rare earths or actinides which can be classified as (1) Kondo hole systems, in which the rare earth or actinide (R) is substituted by a non-magnetic metallic atom (M), with chemical formula R1-xMx (a typical example is U1-xThxPd2Al3); and (2) ligand systems, where one of the metallic atoms (M1) is replaced by another (M2) but the rare earths or actinides are not touched, with formula R(M1)1-x(M2)x (as, for instance, UCu5-xPdx). Often these alloys order magnetically at x = 0 (ordered Kondo lattices) and long range order is lost at some x = x*, as shown in Fig. 1. For x > x* the ordered state is replaced by a metallic state which shows physical properties that deviate strongly from the predictions of Fermi liquid theory. This state is called the non-Fermi liquid (NFL) state.
In NFL systems it is usually observed that even in the paramagnetic phase the specific heat coefficient and the magnetic susceptibility do not saturate as expected from the Landau scenario. The theoretical reason for this anomalous behavior is still not completely understood; many different theories have been proposed and the subject is very controversial. One possible reason for singular behavior in the thermodynamic and response functions of the system is the closeness of these systems to long-range order. The idea that a quantum critical point (QCP) could be responsible for NFL behavior was proposed by Hertz [18] and later extended by Millis and Continentino [19]. Indeed, there is strong evidence that QCP physics is responsible for NFL behavior in CeCu6-xAux, where NFL behavior can be fine tuned via magnetic fields or pressure to the QCP [20]. It turns out, however, that even for this compound there is controversy about the correct description of the QCP [21,22]. Another system recently studied which seems to be in this category is CeNi2Ge2, which has been shown to have a minimal amount of disorder [23]. Hertz also studied the problem of disorder in an XY magnet and found disorder to be a relevant perturbation [24]. In this context Hertz conjectured that disorder could lead to clustering of magnetic moments. More recently it has been shown that quenched disorder has a strong effect on the properties of quantum antiferromagnets and leads to very unconventional critical behavior [25]. In all the cases studied so far the data for the susceptibility and specific heat have been fitted to weak power laws or logarithmic functions [26]. The resistivity of the systems discussed here can be fitted with ρ(T) = ρ_0 + AT^α, where α < 2. Neutron scattering experiments in UCu5-xPdx [27] show that the imaginary part of the frequency dependent susceptibility, Im χ(ω), has power law behavior, that is, Im χ(ω) ∝ ω^(-1+λ) with λ ≈ 0.7, over a wide range of frequencies (for a Fermi liquid one expects λ = 1). Moreover, consistent with this behavior, the static magnetic susceptibility seems to diverge as T^(-1+λ) at low temperatures [27]. What is interesting about UCu5-xPdx is that recent EXAFS experiments have shown that this compound has a large amount of disorder [28], consistent with early NMR and µSR experiments [29]. Even in stoichiometric systems like CeAl3 there is evidence of spatial inhomogeneity [30]. In CePd2Al3 it has been shown that while polycrystalline samples show a magnetic phase transition at finite temperatures, the critical temperature is driven to zero in single crystals due to internal stresses which suppress the moment formation [31]. Moreover, UCu4Pd is supposed to be exactly at the QCP for antiferromagnetic order. All these properties have also been seen in similar alloys such as UCu5-xAlx and UCu5-xAgx [32]. Another system which is also close to magnetic order is U1-xYxPd3, which shows spin glass order [33]. It was the study of this system which led Andraka and Tsvelik to propose that the NFL behavior observed in this compound was due to the spin glass transition at the QCP [34]. This point of view has also been explored by other researchers in the field [35]. Another interesting example where NFL behavior happens close to a QCP is the system U1-xThxCu2Si2, which shows a ferromagnetic QCP [36]. Thus, NFL behavior has been observed in systems with very different types of magnetic ordering.
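As an aside on how such weak power laws are extracted in practice, the sketch below fits χ(T) = c T^(-1+λ) to synthetic data by linear regression on a log-log scale. The data, the amplitude and the input λ = 0.7 (of the order reported for UCu5-xPdx) are invented for illustration; this is not a fit to any published measurement:

```python
import numpy as np

# Synthetic susceptibility data chi(T) = c * T**(-1 + lam) with 2% noise,
# mimicking the weak power-law divergence chi(T) ~ T**(-1+lambda).
rng = np.random.default_rng(0)
lam_true, c_true = 0.7, 1.0            # illustrative values only
T = np.logspace(-2, 0, 30)             # temperature grid, arbitrary units
chi = c_true * T**(-1 + lam_true) * rng.normal(1.0, 0.02, T.size)

# On a log-log scale the model is linear:
#   log chi = log c + (-1 + lambda) * log T,
# so an ordinary least-squares line yields the exponent.
slope, intercept = np.polyfit(np.log(T), np.log(chi), 1)
print(f"fitted lambda = {1 + slope:.3f} (input {lam_true})")
```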
Indeed, the magnetic behavior in these systems is very rich. Very recent frequency dependent susceptibility measurements have found signs of super-paramagnetism (that is, cluster physics [37]) close to the QCP of UCu5-xPdx [38]. Fluctuating magnetic moments were also found in µSR experiments in Ce(Ru1-xRhx)2Si2 close to the QCP [39]. Actually, pressure experiments in the same compound have revealed the importance of disorder for the appearance of NFL behavior [40]. The same type of physics is found in very complex systems such as U3-xNi3Sn4-y [41], and even in stoichiometric alloys like Yb2Ni2Al, where a coexistence of magnetic and paramagnetic phases has been observed [42]. The phenomena observed here are indeed very similar to those observed in simpler magnetic alloys close to a magnetic phase transition, as has been shown in recent experiments in NixPd1-x close to a ferromagnetic QCP [43]. Indeed, the phenomena of clustering, superparamagnetism and magnetic fluctuations were discussed long ago in simple alloys such as NixCu1-x, both theoretically and experimentally [44]. As in their f-electron counterparts, the specific heat and magnetic susceptibility of these systems deviate strongly from Fermi liquid behavior in a relatively broad region around the quantum critical point. Indeed, the phenomenon of NFL behavior is not limited to f-electron systems but is also very common in d-electron compounds [45]. Another source of NFL behavior is of single impurity nature. Single impurity approaches to the NFL problem are very important because the Kondo effect [46,47] is known to occur in the dilute limit of Kondo hole systems (e.g., in U1-xThxPd3 for x ≈ 1) and has been suggested as the source of heavy fermion behavior [48,49] in undiluted ligand systems (e.g., CeAl3) [50,51]. The Kondo effect is probably one of the most studied problems in many-body theory. It has been understood from many different points of view, from renormalization group (RG) calculations [52] to the exact analytic solution [53] and the conformal field theory point of view [54]. In the anisotropic case the Kondo Hamiltonian can be mapped via bosonization onto the dissipative two level system (DTLS) [55,56]. The mapping between the Kondo problem and the DTLS was developed in order to understand the problem of a quantum phase transition in the DTLS [57,58] and was believed to be valid close to the QCP. Nowadays, extensive numerical simulations have shown that the mapping is valid over most of the parameter space [59]. In the single channel Kondo problem, up and down spin electrons spin-flip scatter against the magnetic moment. Nozières and Blandin proposed that when the number of scattering channels is increased one can obtain a local NFL ground state [60]. This is the so-called multichannel Kondo effect. In 1986 D. Cox proposed an elegant mechanism for NFL behavior in U systems based on a multichannel effect of quadrupolar origin [61]. The same mechanism has been studied in the context of the Kondo lattice [62]. There is still controversy about the applicability of the quadrupolar Kondo effect to the compounds we are discussing [63]. The quadrupolar Kondo effect requires a non-magnetic Γ3 ground state, which has been confirmed to exist in PrInAg2 [64]. It turns out that the experimental situation in this compound is far from clear: while the magnetic susceptibility seems to show NFL behavior, the specific heat is well described by Fermi liquid theory [65].
Moreover, the exponents predicted by multichannel effects are not consistent with the experimental data in many compounds. Another source of NFL behavior based on single impurity physics is the so-called Kondo disorder approach, in which it is assumed that, due to the intrinsic disorder in the Kondo lattice, there is a broad distribution of Kondo temperatures extending down to a vanishing one [29]. This kind of approach has been applied to UCu5-xPdx [66]. Naturally, the main criticism of the single impurity approaches is that the systems where NFL behavior is observed are concentrated. Furthermore, NFL behavior usually occurs close to a QCP, where interactions among the moments and the tendency toward magnetic ordering are very important. Thus, the problem of single impurity versus QCP physics as a source of NFL behavior remains a very controversial one, and debate among researchers is still in progress. In the next sections we discuss a possible explanation for the NFL behavior observed in these alloys in terms of Griffiths-McCoy singularities close to the QCP [67]. The origin of these singularities is the competition between the RKKY interaction, which leads to magnetic order, and the Kondo effect, which leads to magnetic quenching, in the presence of disorder due to alloying.

III. GRIFFITHS-MCCOY SINGULARITIES

The simplest way to understand the nature of Griffiths singularities is to imagine the dilution of a magnetic lattice by non-magnetic atoms. Long range order is lost at the percolation threshold, when the last infinite cluster of magnetic moments ceases to exist. Above the threshold the system is composed of finite clusters of magnetic atoms. Griffiths showed that when a magnetic field is applied to the percolating lattice there is a non-analytic contribution of the clusters to the free energy [68]. This contribution comes from rare large clusters. The classical problem was studied in great detail by many researchers in the 70's [69]. An important special model related to the problem of Griffiths singularities was proposed by McCoy and Wu [70] and studied more recently by Shankar and Murthy [71]. The McCoy-Wu model is a rectangular Ising model with disorder in only one direction. The importance of this model rests on the fact that it is the only known exactly solvable model with disorder. Moreover, it was shown that in this model, while the system orders magnetically at some temperature T_c, the magnetic susceptibility diverges before the system reaches T_c. This strange behavior is again due to Griffiths singularities. Although classical Griffiths singularities are rather weak (and for a long time researchers believed only the singularity coming from the infinite cluster would be observable experimentally), there is recent experimental evidence for their existence in some Ising magnets [72,73]. It turns out that from a statistical mechanics point of view a classical 2D Ising problem is equivalent to a 1D quantum Ising model at zero temperature. Thus, the McCoy-Wu problem maps, at zero temperature, onto the random transverse field Ising chain. The random transverse field Ising model was studied in great detail by D. Fisher, who was able to calculate asymptotically exact expressions for many physical quantities close to the QCP [74]. The random transverse field Ising model has also been extensively studied analytically [75,76] and numerically [77] in one and higher dimensions, and evidence for Griffiths-like singularities was obtained.
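Before turning to the droplet treatment discussed next, the dilution picture behind these singularities can be made concrete with a toy simulation: occupy lattice sites with magnetic atoms at probability p below the percolation threshold and look at the sizes of the finite clusters, whose rare large members are responsible for the non-analytic contribution to the free energy. Lattice size, p and the use of scipy's cluster labeling are all choices of this illustration:

```python
import numpy as np
from scipy import ndimage

# Site-diluted square lattice: True = magnetic atom present.
rng = np.random.default_rng(1)
p, L = 0.55, 200                     # p below the 2D site-percolation
occupied = rng.random((L, L)) < p    # threshold (~0.593): finite clusters only

# Label connected clusters of magnetic atoms and collect their sizes.
labels, n_clusters = ndimage.label(occupied)
sizes = np.bincount(labels.ravel())[1:]   # drop the background label 0

# Large clusters are exponentially rare, but these rare regions dominate
# the singular (Griffiths) part of the free energy.
print(f"{n_clusters} clusters; largest = {sizes.max()} sites")
for s in (10, 50, 100):
    print(f"fraction of clusters with >= {s} sites: {(sizes >= s).mean():.4f}")
```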
Among these studies, Thill and Huse proposed a quantum droplet model for the problem, in which a magnetic cluster is treated as a single degree of freedom (a two level system). This kind of treatment agrees very well with the exact 1D calculation. It has been shown that the same type of singularities can also occur in more complicated random magnetic systems, such as the XY model [78]. One of the main characteristics of the Griffiths-McCoy singularities is the divergence of physical quantities at zero temperature with non-universal power laws. The magnetic susceptibility, for instance, behaves like χ(T) ∝ T^(-1+λ) and diverges in the paramagnetic phase if λ < 1, and the nonlinear susceptibility diverges with an even stronger power law, χ_nl(T) ∝ T^(-3+λ) (indeed, it has been shown that for the pure Ising model λ → 0 at the QCP [75]). Experimental evidence for a divergent non-linear susceptibility was found in the compound U1-xThxBe13 [79]. That Griffiths singularities should be important in some single band correlated systems was proposed by Bhatt and Fisher [80] and extended more recently by Sachdev [81]. A similar type of phenomenon also happens in magnetically doped semiconductors, where a singlet phase was proposed by Bhatt and Lee [82]. It was shown that for a single band Hubbard model in a disordered environment the magnetic properties could be explained in terms of a quantum spin glass ground state [83]. We have proposed recently that the Kondo lattice problem is in the same class of problems as the random transverse field Ising model [67]. Dipolar interactions are too small to account for the ordering temperatures in these systems (which range from 100 K down to 10 K in the pure compounds), and the direct exchange between f orbitals is very weak since the spatial extent of the f orbitals is small. As is well known, the magnetism in Kondo lattices comes from the localized f-moments, which are weakly hybridized with the conduction band. In the pure compound (say, UPd2Al3 or UCu5) the moments interact with each other via the RKKY interaction, which is propagated by the conduction band [84]; in the presence of disorder and spin-orbit effects the RKKY interaction becomes short ranged [85]. Moreover, most of the heavy fermion alloys are magnetically anisotropic because of crystal field effects or spin-orbit coupling, which are known to be very important in these systems. In this case the magnetic phase diagram can be very rich, since an anisotropic spin exchange interaction, or Dzyaloshinsky-Moriya (DM) exchange interaction [86], is generated. These anisotropies were observed long ago in alloys of rare earths of the form R-CrO3 [87,88], where R is a rare earth. In a Kondo hole system (say, U1-xThxPd2Al3) the destruction of magnetism occurs mainly by dilution of the magnetic lattice, and the QCP is the percolation threshold for the lattice. In ligand systems (like, say, UCu5-xPdx) the dilution is a more subtle effect because the magnetic atoms remain on the lattice. In order to understand how dilution quenches a magnetic moment in a ligand system one has to look at what happens in the related heavy fermion materials which do not show long range order. In heavy fermions the magnetic moments are quenched by the Kondo effect [50], as described, for instance, in the dynamical mean field (or d → ∞) theories of the Anderson lattice [89].
Thus, it is reasonable to assume that in ligand systems the effect of doping is to affect locally the hybridization between localized moments and conduction electrons. This naturally leads us to the picture proposed long ago by Doniach [90]: there are two relevant energy scales in the problem, the Kondo temperature of the moment, T_K, which is an exponential function of the exchange J between local moments and the itinerant electrons (k_B T_K ≈ E_c exp{-1/(N(0)J)}, where N(0) is the density of states at the Fermi surface), and the RKKY temperature scale, T_RKKY, associated with magnetic ordering of the moments, which scales with J^2. As shown in Fig. 2, there is a critical value of J (say, J_c) for which these two energy scales become of the same order of magnitude. For J < J_c we have T_K < T_RKKY and therefore, as the temperature is lowered, the system orders magnetically before the moment is quenched. If J > J_c, that is, if T_K > T_RKKY, as the temperature is lowered the moment is quenched before it has the chance to order. Although this picture is quite naive, it has been confirmed in mean field theories [91,92], numerical calculations [93] and pressure experiments in ordered [94] and disordered Kondo lattices [95]. In the absence of disorder this competition leads to a finite ordering temperature T_N which vanishes at a QCP at J_c, as shown in Fig. 2. Moreover, the competition appears explicitly in the simple problem of two interacting moments in the presence of a Fermi sea, which is the two impurity Kondo problem [96]. Numerical works on this problem have confirmed the theoretical expectations [97]. Thus, in our scenario long range order is lost in a ligand system by the local quenching of the magnetic moments. Again one has to deal with a quantum percolation problem: the QCP is again the percolation point of the magnetic lattice, and away from the QCP only magnetic clusters can exist. As one further dopes the system away from the magnetic phase one eventually finds a heavy fermion ground state where all the moments are quenched [50] (unless, of course, there is a structural or another magnetic phase transition in the intermediate region). In order to describe mathematically the competition between the Kondo effect and the RKKY interaction we may start with the mean field description of the ordered phase (this is a good first approximation since there is true long range order in the system at finite temperatures and quantum fluctuations are small). On the one hand, the conduction electron band is renormalized by the average field created by the ordered magnetic moments. Unless there are commensuration effects between the magnetic ordering vector and the Fermi momentum, the electrons remain gapless (such commensuration effects are very unlikely in these systems with complex unit cells). On the other hand, the conduction electrons provide an effective medium for the propagation of the RKKY interaction. Suppose one dilutes slightly the ordered state by changing a local exchange constant J to a value much larger than the average. In this case, as in the Doniach argument, the local moment, instead of participating in the collective magnetic state, will prefer to form a local singlet. That is, one has again a simple Kondo effect with a renormalized conduction band. The magnetization of the system has to drop. Mathematically this quenching of the magnetic moment can be described in terms of the anisotropic Kondo problem.
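Doniach's competition of scales described above is easy to illustrate numerically: with an exponentially small Kondo scale and an RKKY scale growing as J^2, the crossover coupling J_c is where the two curves intersect. All prefactors below are arbitrary illustrative choices, not values for any real compound, and only the low-J crossing is physically meaningful (the perturbative J^2 form cannot be trusted at large J):

```python
import numpy as np
from scipy.optimize import brentq

# Doniach's two energy scales (arbitrary units, illustrative prefactors):
Ec, N0, c_rkky = 1.0, 1.0, 0.1
def T_K(J):    return Ec * np.exp(-1.0 / (N0 * J))   # Kondo scale
def T_RKKY(J): return c_rkky * N0 * J**2             # RKKY scale ~ J^2

# J_c solves T_K(J) = T_RKKY(J): below it the moments order before being
# quenched; above it they are quenched first.
Jc = brentq(lambda J: T_K(J) - T_RKKY(J), 0.05, 0.5)
print(f"J_c ~ {Jc:.3f}")
for J in (0.5 * Jc, 1.5 * Jc):
    regime = "orders" if T_RKKY(J) > T_K(J) else "quenched"
    print(f"J = {J:.3f}: T_K = {T_K(J):.2e}, T_RKKY = {T_RKKY(J):.2e} -> {regime}")
```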
Actually, as we discussed, the magnetism in these systems is anisotropic, and therefore the Kondo effect does not have SU(2) symmetry. Indeed, there are very recent inelastic neutron scattering experiments in Ce1-xLaxAl3 which show evidence for anisotropic Kondo behavior [98]. Therefore, as mentioned previously, the Kondo effect can be mapped onto the DTLS. As we discuss in the next section, the XY part of the Kondo problem becomes a transverse field (the origin of this term can be thought of as a transverse magnetic field applied by the electron spin on the magnetic moment), and the Ising component of the Kondo exchange describes the coupling of the magnetic moment to a heat bath (which represents the fact that each time the magnetic moment flips it produces particle-hole excitations at the Fermi surface). It turns out that the coupling to the heat bath becomes small in the limit of large anisotropy and in zeroth order can be disregarded. Thus, we have argued [67] that in this extreme limit the Kondo lattice problem maps onto the random transverse field Ising model which, as we said previously, has been shown to present Griffiths-McCoy singularities with power law behavior at low temperatures. Recent experiments have shown that power law behavior is consistent with measurements of magnetic susceptibility and specific heat in these systems [99]. In particular, for the Griffiths phase one has χ(T) ∼ γ(T) ∝ T^(-1+λ) with λ < 1. In the next section we show that the residual coupling to the particle-hole bath changes the behavior of the response functions below a certain energy scale. The Griffiths phase picture has been very successful in describing the power law behavior of the physical quantities in some of the systems mentioned above, especially UCu5-xPdx, where structural disorder was clearly measured [29] and the exponents measured from specific heat and susceptibility agree well with each other (λ ≈ 0.72) [99]. In other alloys, such as U1-xThxPd2Al3 (λ ≈ 0.8 from specific heat and λ ≈ 0.63 from susceptibility data) [99] or Ce(Pd1-xNix)2Ge2 (λ ≈ 0.7 from specific heat and λ ≈ 0.84 from susceptibility data) [100], the agreement between exponents is not as good (although, we should stress, power law behavior is clearly observed). Moreover, in systems like U0.2Y0.8Pd3 the divergence seems to be stronger than a power law or logarithm [99]. One possible explanation for these stronger divergences might be related to the possibility that the disorder in these systems is correlated and not random; in this case one expects stronger divergences [101]. Furthermore, in some of the systems described above there may be a return to Fermi liquid behavior at very low temperatures (for instance, in UCu4Pd there is evidence of that below 0.1 K [38], while in CeRhRuSi2 it seems to occur below 1 K [40]). We argue below that some of these problems can be resolved if the metallic character of the electronic environment is taken into consideration.

IV. GRIFFITHS SINGULARITIES AND THE KONDO LATTICE PROBLEM

It is intuitively obvious that the Doniach argument leads to inhomogeneous behavior when it is applied locally instead of globally. The main problem here is how this argument can be tested at the Hamiltonian level. In this section we show how this can be accomplished for the Kondo lattice model from the renormalization group point of view [102].
It is imperative, in the context of the systems discussed in this paper, to take into account spin-orbit effects, since these are very important for f-electron magnetism. In this case the exchange between local moments and conduction electrons is not isotropic in spin space and can be generally written as an anisotropic exchange Hamiltonian, equation (1) [103], where κ = 1, 2 labels the spin states in the diagonal basis and J_{a,b} are the effective exchange constants between the localized spins, S_a(i), and the conduction electron spin. In the simplest case of uniaxial symmetry (which is assumed throughout this paper) one has J_{a,b} = J_a δ_{a,b} with J_z > J_x = J_y = J_⊥. The main difficulty in studying the competition of the RKKY interaction and the Kondo effect in the Hamiltonian (1) is that both have their origin in the same magnetic coupling between spins and electrons. What allows us to treat this problem is the fact that the RKKY interaction is perturbative in J/E_F while the Kondo effect is not. Moreover, the RKKY interaction depends on electronic states deep inside the Fermi sea while the Kondo effect is a Fermi surface effect. Thus, it seems possible to use a perturbative renormalization group approach for the RKKY interaction, while for the Kondo effect one needs to do a better job. This kind of treatment was proposed recently in the context of the two impurity Kondo problem [104]. We consider, for simplicity, the case where the Fermi surface for the electrons is spherical (non-nested, non-spherical Fermi surfaces can be treated in an analogous way). The local electron operator can be written in momentum space as a sum over momentum modes, equation (2). We now separate the states in momentum space into three different regions of energy, as shown in Fig. 3: Ω_0, where k_F − Λ < k < k_F + Λ; Ω_1, where k < k_F − Λ; and Ω_2, where k > k_F + Λ, with Λ an arbitrary cut-off. Observe that in this case the sum in (2) can also be split into these three different regions. The problem we want to address is how the states in region Ω_0, close to the Fermi surface, renormalize as one traces out the high energy degrees of freedom in regions Ω_1 and Ω_2. We can perform this calculation perturbatively in J/E_F. For that purpose it is more convenient to use a path integral representation of the problem and write the quantum partition function, equation (3), in terms of Grassmann variables ψ̄ and ψ, where the path integral over the localized spins also contains the constraint S^2(n, t) = S(S + 1). The quantum action in (3) contains the free action for the conduction electrons and the exchange interaction between conduction electrons and localized moments. We can now split the Grassmann fields into the momentum shells defined above, that is, we rewrite the path integral with indices 0, 1, 2 referring to the degrees of freedom residing in the momentum regions Ω_0, Ω_1 and Ω_2, respectively, and the action of the problem can be rewritten accordingly. Notice that the free part of the electron action is just a sum of three terms (essentially by definition, since the non-interacting problem is diagonal in momentum space), while the exchange part mixes electrons in all three regions defined above. Since we are interested only in the physics close to the Fermi surface, we trace out the fast electronic modes in the regions Ω_1 and Ω_2, assuming that J_{a,b} ≪ µ.
In this case, as we show elsewhere [103], besides the renormalization of the parameters in the free action of the electrons in the region Ω_0, we get the RKKY interaction between localized moments; that is, the effective action of the problem acquires an RKKY term, with coupling Γ^R_{a,b}(r_n − r_m) between the local moments, together with a Kondo term of the form ∫dt J^R_{a,b}(r_n) S_a(r_n, t) τ^b_{α,γ} ψ̄_{α,0}(r_n, t) ψ_{γ,0}(r_n, t), where Γ^R_{a,b} is the cut-off dependent RKKY interaction between the local moments and J^R_{a,b}(r_n) is the Kondo electron-spin coupling renormalized by the high energy degrees of freedom. The renormalization can be calculated order by order in perturbation theory [103]. We observe further that the perturbation theory here is well behaved and there are no infrared singularities in the perturbative expansion. Thus, the limit Λ → 0 is well-defined. In this limit Γ_{a,b}(r_n − r_m, Λ → 0) becomes the usual RKKY interaction one would calculate by tracing over all the energy shells of the problem. Observe that there are no retardation effects in tracing out these high energy degrees of freedom, since they are much faster than the electrons close to the Fermi surface and therefore adapt adiabatically to their motion (the situation here is somewhat similar to the Born-Oppenheimer approximation, where the ions are much slower than the electrons and therefore can only renormalize the coupling constants). The resulting Hamiltonian (8) is the basic starting point of our discussion and contains the basic elements for the discussion of magnetic order in the system. As discussed in Doniach's argument, the RKKY interaction tends to order the magnetic moments while the Kondo coupling tends to quench them. It is the interplay of these two interactions which leads to the physics we discuss here. The way this quenching occurs is fundamental for the understanding of the physics of this problem and is discussed in the next section.

V. DISSIPATION IN METALLIC MAGNETIC ALLOYS

In order to understand our line of argument it is important to study a very simple case of (8) where RKKY interactions are not present, that is, the single impurity Kondo problem. As we have said previously, this problem is well understood, and here we just quote a few of the results important for our discussion. Notice that the single impurity Kondo effect should occur in Kondo hole systems when x ≪ 1, or in moment quenching in ordered ligand systems when x ≈ 1. For a single impurity the mathematical description simplifies greatly, because we just have to solve a scattering problem in terms of in-coming and out-going waves. Thus, the problem is effectively one dimensional, with a boundary condition at the impurity position [55,56]. In this case we can use the technique of bosonization to understand the basic physics. First of all, we can show that the renormalization of the Ising component of the Kondo interaction (8) is given in terms of the phase shift δ of the electrons at the impurity, equation (9), where v_F is the Fermi velocity. Since we are treating states very close to the Fermi surface, we linearize the electron dispersion, ε(p) ≈ v_F p, in which case the conduction band Hamiltonian takes the form (12), where c_{p,σ} creates an electron with spin σ, momentum |k| = p + k_F and angular momentum l = 0. Thus, in writing (12) we have reduced the problem to an effective one-dimensional problem.
We introduce right (R) and left (L) moving electron operators which are used to express the electron operator. In any impurity problem the right and left moving operators provide a redundant description, since they are actually equivalent to in-coming and out-going waves at the impurity. Therefore we have two options: either we work with right and left movers on half of the line, or we work on the full line and impose the condition ψ_{R,σ}(x) = ψ_{L,σ}(−x). We use the latter option. Thus, from now on we drop the symbol R and work with left movers only. The left-moving fermion can be bosonized in the standard way, where K_σ is a factor which preserves the correct commutation relations between electrons, that is, {ψ_σ(x), ψ_{σ′}(y)} = δ(x − y) δ_{σ,σ′}. The basic operators in bosonization are the charge and spin densities (k > 0), which are written as bosonic operators b_k and a_k and obey canonical commutation relations [a_k, a†_p] = [b_k, b†_p] = δ_{p,k}. In terms of the boson operators the Kondo Hamiltonian (8) for a single impurity becomes a purely bosonic problem; moreover, this Hamiltonian can be brought to a simpler form, (21), if one performs a unitary transformation (we drop the b_p modes since they decouple from the impurity). An important observation here is that the unitary transformation does not affect S_z. Observe that (21) describes the physics of a two level system coupled to a bosonic environment [55]. The basic physics of the Kondo problem becomes rather simple from the point of view of (21): while the XY component of the Kondo interaction (associated with the coupling J_⊥) flips the local spin and acts as a transverse field, the Ising coupling (associated with J_z) leads to a dissipative effect, such that each time the spin flips it produces particle-hole excitations at the Fermi surface. It can be shown that the Kondo temperature of the anisotropic Kondo effect can be written in the form (22) [55,59], where E_c is a cut-off energy scale of the order of the bandwidth. Observe that for J_z, J_⊥ ≪ E_c the Kondo temperature (22) looks very similar to the SU(2) expression k_B T_K ≈ E_c exp{-1/(N(0)J)}. Notice that the Kondo temperature of an anisotropic Kondo problem is not a single parameter quantity, since it depends on the Ising component J_z and on the XY component J_⊥. Moreover, we have α < 1 (J_z > 0) in the case of antiferromagnetic coupling and α > 1 (J_z < 0) for ferromagnetic coupling. As is well known, the ferromagnetic Kondo effect is related to the formation of a triplet state and therefore to the freezing of the moment (not quenching!). When J_z ≫ 2v_F (the limit of large uniaxial anisotropy) we see from (9) that J^R_z → πv_F and the Hamiltonian reduces to the simple form (25), in which the spin degrees of freedom decouple from the bosonic modes. This is the dissipationless limit of the problem. Observe that in this limit the eigenstates of the system are eigenstates of S_x, that is, of the transverse field. We can immediately see from (22) that in this limit the Kondo temperature (26) is large. In ref. [67] we proposed that the Kondo effect which happens in U and Ce alloys has the structure of (25), since most of these systems are not cubic and therefore can be highly anisotropic. Moreover, even in cubic systems the alloying can produce deformations of the unit cell which can produce large local anisotropies.
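The exponential sensitivity of the Kondo scale to the local coupling, implicit in the SU(2) estimate k_B T_K ≈ E_c exp{-1/(N(0)J)} quoted above, is easy to make concrete; the cut-off and the values of the dimensionless coupling below are placeholders for illustration only:

```python
import numpy as np

Ec = 1.0e4  # cut-off of order the bandwidth, in kelvin (placeholder value)

# k_B T_K ~ E_c * exp(-1/(N(0)J)): small local variations of the coupling
# g = N(0)J move T_K by orders of magnitude, which is why alloying can
# generate a very broad distribution of local Kondo temperatures.
for g in (0.05, 0.10, 0.15, 0.20):
    print(f"N(0)J = {g:.2f} -> T_K ~ {Ec * np.exp(-1.0 / g):.3g} K")
```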
Thus, if we disregard the residual coupling between the conduction electrons and the impurities, the magnetic problem reduces to the transverse field Ising model (27), where Δ_R ∝ k_B T_K is the renormalized tunneling splitting which plays the role of the transverse field. It is obvious that this situation reproduces Doniach's argument: while the RKKY coupling Γ_z works in the direction of making the local spin an eigenstate of S_z (and therefore of ordering it, |⟨S_z⟩| = 1), the Kondo effect, through Δ_R, pushes the local moment towards an eigenstate of S_x and therefore leads to ⟨S_z⟩ = 0. In this picture the results for the case of insulating magnets follow immediately, and one expects power law divergences of the physical quantities in the paramagnetic phase. In one dimension the problem of the Kondo lattice has been studied with the use of bosonization and has been solved exactly at a particular anisotropic point called the Toulouse point [106] and at half-filling [107]. Moreover, this problem has been studied in great detail numerically [108]. Honner and Gulácsi have argued that the Kondo chain indeed maps onto the transverse field Ising model, and have shown that in the disordered case Griffiths singularities appear close to the transition line from ferromagnetic to paramagnetic behavior [109]. This trend seems to be reproduced in other calculations for the same problem [110]. Indeed, power law behavior of the susceptibility was obtained for the Anderson model in one dimension, with exponents very close to the ones obtained experimentally [111]. Since the problem of Griffiths-McCoy singularities is essentially a problem of clusters (zero dimensional objects) surrounded by a metallic environment, it seems rather natural that (27) reproduces the magnetic behavior of the Kondo lattice. The question that arises in the context of (27) is: what is the effect of the residual interaction of the cluster with the conduction electrons? In the paramagnetic phase we assume that the clusters do not interact with each other. In this case one can focus entirely on the behavior of a single cluster and its metallic environment. This problem is actually very close to the problem of macroscopic quantum tunneling of magnetic grains [112], and it is known that dissipation is a relevant perturbation to this problem, especially at low temperatures [113]. Consider, for instance, the problem of N spins in a cluster. Since we assume the cluster to be in the ordered phase, there must be two states of the cluster which are nearly degenerate. For instance, a ferromagnetic state with all the spins up has the same energy as a ferromagnetic state with all the spins down (since the environment is paramagnetic it does not bias any specific configuration). At very low temperatures the only way for the system to relax is to flip all N spins at once. As we have seen in the case of the single impurity Kondo problem (and it can be proven for the two impurity Kondo problem as well [103]), this requires the XY component of the Kondo Hamiltonian to act N times on the ground state wavefunction. Since each spin flip requires an energy of order J_⊥, the resulting splitting between the low lying states of the cluster is of order Γ_z (J_⊥/Γ_z)^N, equation (28), and is therefore exponentially small, as expected for the insulating case as well. Each time the cluster flips, we expect particle-hole excitations to be created at the Fermi surface.
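The exponential smallness of this cluster splitting, Δ(N) ∼ Γ_z (J_⊥/Γ_z)^N, is worth seeing in numbers; the couplings below are illustrative placeholders:

```python
# Splitting of the two nearly degenerate states of an N-spin cluster,
# Delta(N) ~ Gamma_z * (Jperp/Gamma_z)**N, for Jperp < Gamma_z.
Gamma_z, Jperp = 1.0, 0.3   # illustrative couplings only
for N in (5, 10, 20, 40):
    print(f"N = {N:3d}: Delta/Gamma_z ~ {(Jperp / Gamma_z) ** N:.3e}")
```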
Since the cluster is coupled to the electronic bath by its order parameter (the magnetization in the case of a ferromagnetic cluster, or the staggered magnetization in the case of an antiferromagnetic cluster), we expect the coupling to the bath to be extensive in the cluster size, that is, proportional to N (notice that in (21), with α defined in (23), the coupling to the bath is proportional to √α). In this case we see that the dissipation parameter α has to scale like N^2 (J̃_z/E_c)^2, where J̃_z is the Ising coupling of the cluster to the bath, which is a function of the microscopic couplings and has to be calculated from cluster to cluster [103]. Thus, as in the case of the single impurity Kondo problem, we can define a cluster Kondo problem with a characteristic Kondo temperature T_K(N), or tunneling splitting Δ_R, given by (29) (using (22) and (28)), where γ = ln(J_⊥/Γ_z) and N_c = E_c/J̃_z depends on the coupling constants of the problem. The importance of N_c rests on the fact that when α > 1 there is no real Kondo effect. Thus, for N > N_c the cluster freezes and quantum fluctuations are completely suppressed. Indeed, for N = N_c the Kondo temperature in (29) vanishes. Therefore, N_c gives the size of the largest cluster for which the Kondo effect still takes place. We can invert (29) to give N as a function of the splitting, equation (30). Notice that there are two well defined limits of this expression, depending on whether Δ_R is larger or smaller than a crossover scale Δ*, equation (31). If Δ_R ≫ Δ* we have N ≈ ln(Γ_z/Δ_R)/γ, and therefore the splitting is completely determined by γ; we then have the same situation as in an insulating magnet. When Δ_R ≪ Δ* and N/N_c ≈ 1 the cluster becomes decoherent, and the situation is no longer described by the power law behavior. Thus, Δ* defines an energy scale above which power law singularities should be found and below which a new behavior dominated by dissipation is present. The consequences of this dissipative regime will be discussed elsewhere [103]. One point we must make is that, since Δ* depends exponentially on N_c, the dissipative regime is going to be exponentially small. Above a temperature scale T* = Δ*/k_B we expect the temperature dependence of the physical quantities to be dominated by power law behavior. We also believe this kind of behavior is responsible for the deviations from power law behavior at T < T*, which are observed in some U and Ce intermetallics.
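The scales controlling the crossover just described, all taken from relations stated in this section, can be collected in one place (the explicit form of T_K(N) in (29) is not reproduced here):

```latex
\Delta(N) \sim \Gamma_z \left(\frac{J_\perp}{\Gamma_z}\right)^{N},\qquad
\alpha(N) \sim N^{2}\left(\frac{\tilde{J}_z}{E_c}\right)^{2},\qquad
N_c = \frac{E_c}{\tilde{J}_z},\qquad
T^{*} = \frac{\Delta^{*}}{k_B},
```

with the Kondo quenching operative only for N < N_c (α < 1) and power-law Griffiths behavior expected for T > T*.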
We have shown that the Kondo lattice Hamiltonian can be studied in a renormalization group sense by tracing out the higher energy degrees of freedom deep inside or very far away from the Fermi surface and that the effective Hamiltonian contains the basic ingredients required for the local Doniach description of these systems. We argued on the basis of the mapping of the single impurity Kondo problem into the dissipative two level system that in the limit of high local magnetic anisotropy the Kondo problem indeed maps into a transverse field Ising model, which has been shown to present Griffiths-McCoy singularities in its phase diagram. We also have argued that the power law behavior disappears at temperatures smaller than T * = ∆ * /k B (which is probably quite small) where the situation is dominated by dissipative physics. What we have shown, therefore, is that like in the Kondo disorder picture [29,66] there is a distribution of Kondo temperatures which is not of single ion character but has to do with the Kondo temperature of isolated clusters. Observe that the distribution is not arbitrary but determined completely by the statistical distribution of clusters in a percolation problem. If a residual interaction between clusters exists close to the QCP then with the lowering of the temperature a quantum spin glass state is a possible ground state [76]. In this case a return to Fermi liquid behavior with a strong temperature crossover is expected. Otherwise, if the clusters are truly non-interacting then the quantum super-paramagnetic state dominated by Griffiths-McCoy singularities can exist and real singularities in the response functions must be observed. zation in solids", Int. Jour. Mod. Phys. B 6, 1355 (1992).
Experimental Aspects of Higgs Physics at the ILC

Recent progress in Higgs boson studies for the International e+e− Linear Collider (ILC) is reported. These studies include extended simulations of the measurement of the Higgs mass, measurements of the Higgs boson branching ratios at higher center-of-mass energies, and methods for extracting the Higgs boson self-coupling. Also, the interplay between the LHC and the ILC in the measurement of the top Yukawa coupling and in the extraction of the supersymmetric Higgs sector parameters is discussed.

Introduction

With an e+e− center of mass energy in the range of 90 to 1000 GeV, a well defined initial state, and a clean experimental environment, the International e+e− Linear Collider (ILC) will be an ideal accelerator at which to study the properties of Higgs bosons. An extensive literature [1-5] exists detailing how the ILC can measure the masses, widths, couplings, and quantum numbers of Higgs bosons in a model independent manner with high precision. In this paper recent progress on experimental aspects of ILC Higgs boson physics is presented. The discussion includes beam-related systematic errors in Higgs mass measurements, hadronic Higgs decays, Higgs boson phenomenology at √s = 1000 GeV, and LHC/ILC synergy.

Beam Related Systematic Errors in the Higgs Mass Measurement

The Higgs mass will be measured in the process e+e− → Zh using the recoil mass technique, and using the direct multi-jet technique if the fully hadronic decay branching ratio is high enough. In the recoil mass technique the mass of the system opposite a Z boson decaying to e+e− or µ+µ− is measured without regard to the Higgs decay. In the direct multi-jet technique, jets from the Higgs decay are combined with the jets or leptons from the Z boson decay in a kinematic fit, and the mass is extracted from the fitted four-vectors. In both techniques a kinematic constraint utilizing the beam energy is employed. Systematic errors in the measurement of the beam energy scale and the differential luminosity distribution therefore contribute to the total Higgs mass error.

Differential luminosity measurement

The effects of beamstrahlung are described by a double differential luminosity distribution d²L/dx₁dx₂, where x₁ and x₂ are the energy fractions of the electrons and positrons. Most analyses utilize the acollinearity distribution of Bhabha events to reconstruct this distribution. Studies indicate that under idealized conditions, where the function d²L/dx₁dx₂ factorizes into two identical one-dimensional distributions f(x) described by parameters a_i, the differential luminosity can be measured to an accuracy of 1%, assuming 3 fb⁻¹ at √s = 500 GeV [6]. In a study of beam-related systematic errors on Higgs mass measurements, Raspereza has shown that a 10% measurement error on the differential luminosity parameters a_i leads to a 10 MeV error on the Higgs mass when the direct multi-jet technique is used [7].

Beam energy scale

The beam energy scale can be measured to an accuracy of about 200 ppm by combining data from beam energy spectrometers upstream and downstream of the interaction point with beam energy estimates from physics processes such as e+e− → γZ, ZZ, e+e− and µ+µ− [8]. The dependence of the Higgs mass error on the beam energy scale error has been studied by several groups.
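The recoil mass technique rests on the kinematic identity m_rec² = s + m_ll² − 2√s·E_Z, which follows from four-momentum conservation when only the Z → l+l− decay products are measured. The sketch below applies it to invented dilepton four-momenta; the numbers are illustrative placeholders, not output of any ILC simulation:

```python
import math

def recoil_mass(sqrt_s, leptons):
    """Recoil mass against a Z -> l+l- candidate at an e+e- collider.

    m_rec^2 = s + m_ll^2 - 2*sqrt(s)*E_Z, where (E_Z, p_Z) is the summed
    lepton four-momentum. Units: GeV; leptons are (E, px, py, pz) tuples.
    """
    E = sum(l[0] for l in leptons)
    px, py, pz = (sum(l[i] for l in leptons) for i in (1, 2, 3))
    m_ll_sq = E**2 - (px**2 + py**2 + pz**2)
    m_rec_sq = sqrt_s**2 + m_ll_sq - 2.0 * sqrt_s * E
    return math.sqrt(max(m_rec_sq, 0.0))

# Invented, loosely Z-like muon pair (E, px, py, pz) at sqrt(s) = 350 GeV;
# gives a recoil mass of about 116 GeV.
mu_plus  = (78.8, 78.8, 0.0, 0.0)
mu_minus = (86.0, 46.4, 55.6, 46.4)
print(f"recoil mass = {recoil_mass(350.0, [mu_plus, mu_minus]):.1f} GeV")
```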
Raspereza studied the direct multi-jet technique assuming a 120 GeV Standard Model Higgs boson and found δM_H/δE_cm = 0.5 for the bbll final state and δM_H/δE_cm = 0.4 for the bbqq final state. Because no Higgs decay information is utilized, the recoil mass technique has a much stronger dependence on the beam energy measurement, with δM_H/δE_cm = 2.9. The statistical error and energy scale systematic error for a 120 GeV Standard Model Higgs boson are summarized in Table 1. Hadronic Branching Ratio Measurement For some time European and American working groups have come to different conclusions about how well hadronic Higgs branching ratios can be measured [2,3]. This is illustrated in the last two columns of Table 2. Higgs self-coupling The Higgs potential V(η_H) is probed through measurements of the triple and quartic Higgs self-couplings λ and λ̃: V(η_H) = (1/2)M_H²η_H² + λvη_H³ + (1/4)λ̃η_H⁴, where η_H is the Higgs boson field and v is the Higgs vacuum expectation value. In the Standard Model λ = λ̃ = M_H²/(2v²), so that a comparison of the measured values of λ, λ̃ and M_H will constitute an important test of electroweak symmetry breaking models. The triple Higgs self-coupling λ will be measured at the ILC at √s = 500 GeV in the Higgsstrahlung process e+e− → ZH* → ZHH. Assuming 500 fb−1 luminosity, studies have shown that an accuracy of δλ/λ = 0.28 can be achieved for a Higgs mass of 120 GeV [11]. Recently, the possibility of measuring the triple Higgs coupling at the ILC at √s = 1000 GeV using the WW fusion process e+e− → νeν̄eW*W* → νeν̄eH* → νeν̄eHH has been considered [12]. Assuming 1000 fb−1 luminosity and 80% left-handed electron polarization, a study of e+e− → νeν̄eHH → νeν̄ebbbb found that a triple Higgs coupling accuracy of δλ/λ ≈ 0.12 could be achieved [13]. Further improvement is expected by extending the analysis to decay topologies other than HH → bbbb. Higgs branching ratios CLIC studies have demonstrated that Higgs production through the WW fusion process e+e− → νeν̄eH at √s = 3000 GeV can be used to probe rare Higgs decays [14,15]. A recent study has shown that such decays can also be probed through WW fusion at the ILC at √s = 1000 GeV [16]. Consider, for example, the bb decay of a 200 GeV Higgs boson, which is inaccessible at √s = 350 GeV, and the γγ decay of a 120 GeV Higgs boson, whose branching fraction can be measured with a relative accuracy of 25% at √s = 350 GeV with 500 fb−1 luminosity [17]. The visible mass distributions for these two scenarios are displayed in Figure 1, assuming √s = 1000 GeV, 1000 fb−1 luminosity, -80% initial electron polarization and +50% initial positron polarization. A measurement of the cross-section times branching ratio leads to relative branching ratio errors of 9% for the bb decay of a 200 GeV Higgs boson and 5% for the γγ decay of a 120 GeV Higgs boson. Top Yukawa coupling The top Yukawa coupling g_ttH will be probed at the LHC by measuring the cross-section for gg → ttH → ttbb, ttW+W−. When ILC Higgs branching ratio measurements are combined with LHC cross-section measurements, the top Yukawa coupling can be measured with a relative accuracy of Δg_ttH/g_ttH = 0.13 − 0.17 for Higgs boson masses between 120 and 200 GeV [18]. At √s = 800 GeV the top Yukawa coupling can be probed directly at the ILC using e+e− → ttH. A relative accuracy of Δg_ttH/g_ttH = 0.06 − 0.13 can be achieved for Higgs boson masses between 120 and 200 GeV assuming √s = 800 GeV and 1000 fb−1 luminosity [19].
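As an aside, the Standard Model benchmark against which a measured self-coupling would be compared follows directly from the relation quoted above. The short Python sketch below (our illustration, not part of the original study; M_H = 120 GeV is the benchmark mass used in the text and v ≈ 246 GeV is the standard vacuum expectation value) evaluates λ_SM = M_H²/(2v²) and translates the quoted relative accuracies into absolute uncertainties:

```python
# Minimal numeric sketch of the SM self-coupling test described above.
M_H = 120.0  # benchmark Higgs mass in GeV
v = 246.0    # Higgs vacuum expectation value in GeV (standard value)

lam_sm = M_H**2 / (2 * v**2)  # SM prediction, lambda = lambda-tilde

# Translate the quoted relative accuracies into absolute errors:
for rel_err, channel in [(0.28, "ZHH at sqrt(s) = 500 GeV"),
                         (0.12, "WW-fusion HH at sqrt(s) = 1000 GeV")]:
    print(f"{channel}: lambda = {lam_sm:.3f} +/- {rel_err * lam_sm:.3f}")
```

For M_H = 120 GeV this gives λ_SM ≈ 0.119, so the two analyses correspond to absolute uncertainties of roughly 0.033 and 0.014, respectively.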
Consistency test of the SUSY Higgs system and A_t measurement A recent study provides examples of how precision Higgs boson measurements at the ILC can be combined with LHC measurements of the masses of SUSY particles to test supersymmetric relationships and extract electroweak-scale SUSY parameters [20]. In one example, LHC measurements of the masses of the pseudoscalar Higgs A, the bottom squarks b̃1, b̃2, and the top squarks t̃1, t̃2 are combined with ILC measurements of the masses of the top quark and the lightest Higgs boson to predict the branching ratios of the lightest Higgs boson to bb and WW*. The dark blue regions in Figure 2(a) indicate the allowed regions for the Higgs branching ratios to bb and WW*, while the bands for the ILC's Higgs branching ratio measurements show how well these predictions will be tested. If the ILC branching ratio measurements are consistent with the MSSM predictions, then the branching ratio measurements also provide an indirect measurement of the trilinear coupling A_t. Table 3 summarizes the accuracy with which Higgs branching ratios, the total Higgs decay width, the top Yukawa coupling and the Higgs self-coupling λ can be measured at the ILC through a combination of 500 fb−1 luminosity at √s = 350 GeV and 1000 fb−1 luminosity at √s = 1000 GeV [13,16,19,21,22]. Conclusion In summary, there has been progress in understanding beam-related systematic errors in the measurement of Higgs boson masses at the ILC. It has also been shown that Higgs physics research at √s = 1000 GeV produces improvements in Higgs branching ratio and self-coupling measurements. Table 3: Relative accuracies for the measurement of Higgs branching ratios, the Higgs boson total decay width, the top Yukawa coupling g_ttH and the triple Higgs coupling λ obtained through a combination of 500 fb−1 luminosity at √s = 350 GeV and 1000 fb−1 luminosity at √s = 1000 GeV.
2014-10-01T00:00:00.000Z
2004-11-16T00:00:00.000
{ "year": 2004, "sha1": "0500b9c40d1650d57d0841bc269ea17e4244b279", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0411221v1.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "557dd4ca366161341e2d9de25d7e5206fab32cdc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
34321075
pes2o/s2orc
v3-fos-license
Automatic extraction of endocranial surfaces from CT images of crania The authors present a method for extracting polygon data of endocranial surfaces from CT images of human crania. Based on the fact that the endocast is the largest empty space in the crania, we automate a procedure for endocast extraction by integrating several image processing techniques. Given CT images of human crania, the proposed method extracts endocranial surfaces in the following three steps. The first step is binarization in order to fill void structures, such as diploic space and cracks in the skull. We use a void detection method based on mathematical morphology. The second step is watershed-based segmentation of the endocranial part from the binary image of the CT image. Here, we introduce an automatic initial seed assignment method for the endocranial region using the distance field of the binary image. The final step is partial polygonization of the CT images using the segmentation results as mask images. The resulting polygons represent only the endocranial part, and closed manifold surfaces are computed even though the endocast is not isolated in the cranium. Since only the isovalue threshold and the size of void structures are required, the procedure is not dependent on the experience of the user. The present paper also demonstrates that the proposed method can extract polygon data of endocasts from CT images of various crania. Introduction Understanding the causes and process of brain evolution in the human lineage is a central problem in the field of physical anthropology. However, since soft tissues such as the brain are not fossilized, endocasts must be analyzed in order to infer the brain morphology enclosed in fossil crania. Although manually replicated endocasts have long been used as the materials [1], recent developments in virtual anthropology have made it possible to handle 3D CT images directly [2]. The primary objective of the present research is to extract endocasts as polygons. The use of CT scanning technology is a promising approach for acquiring geometric data of crania. X-ray CT scanners can obtain cross-sectional images of the target objects, and 3D images of the target objects can be obtained by stacking the cross-sectional images. The surface structure of the cranium obtained from CT images can be computed by isosurface extraction methods such as the Marching cubes algorithm [3]. Once the endocranial polygons are extracted, they can be used for various anthropological applications. The primary advantage of CT scanning of fossil crania is that the method provides non-destructive measurement. Thus, researchers can analyze crania in virtual space without the need for physical models. Virtual assembly of crania has been reported by several researchers [4][5][6][7][8][9]. Moreover, Amano et al. [10] reported decomposition and reassembly of the Neanderthal Amud 1 cranium in virtual space. Endocranial surfaces may also be used to identify cortical features from sulcus patterns imprinted on the surfaces [1,[11][12][13]. Endocranial models can also be used for variation analysis of brain morphology (e.g., [9,14,15]). Moreover, attempts have recently been made to reconstruct the brain morphology of Neanderthals by warping brains of modern humans based on endocranial morphology [16]. One of the primary issues in endocranial polygon extraction from CT images of crania is the requirement of manual operation.
Since the endocranial surfaces exist inside the crania, other surfaces, such as exocranial surfaces, must be manually removed from their isosurfaces. This is a tremendous task because these surfaces are close to each other, and manually removing exocranial surfaces sometimes results in the inadvertent removal of endocranial surfaces. An alternative approach is slice-by-slice contouring, which is relatively easy. However, the workload is still substantial, since the number of slices is usually large. Thus, endocast extraction becomes a bottleneck in digital anthropological research. Extracting a meaningful region from geometric data is known as segmentation. This is a classic problem in image processing and geometric modeling, and a number of segmentation methods have been investigated. Commonly used segmentation methods find discontinuities in the intensity values or geometric features (e.g., curvature) by solving energy minimization problems such as active contours [17], the level set method [18], graph cut algorithms [19], and variants thereof. These methods work well for scanned images when proper parameter settings or energy functions are designed. However, the parameter settings used in these methods are complicated. For example, Liu et al. introduced a method for extracting human bones based on level set functions [20]. However, this method is designed only for extracting pole-like bones and is not efficient for extracting endocranial polygons. Michikawa et al. introduced a method for extracting vocal tracts based on mathematical morphology [21]. However, this method assumes that the extracted region is almost closed, so extracting endocranial spaces that are largely open is difficult. The present paper describes an automatic method for computing endocast shapes as polygonal data from CT images of human skulls. The proposed method is based on the observation that the endocranial region is the largest space in the skull. Based on this observation, the algorithm used in the proposed method is designed so that the background voxels are classified into the primary endocranial space and other smaller spaces. The proposed method consists of three major steps: binarization, segmentation, and polygonization. We first classify the CT images into the skull (foreground) and the background. Next, we extract the endocranial region from the input data based on watershed segmentation [22]. In this step, we first compute Euclidean distance fields from the binary images of the input data. The initial seed voxels of the endocranial region and other regions are assigned to the voxels with larger distance values. Segmentation is then performed by expanding the initial seed voxels based on the distance fields. Polygonization of the endocranial region is achieved by commonly used isosurface extraction methods [3] for the extracted region only. One of the primary advantages of the proposed method is the automation of endocast extraction from human crania. The user needs only two parameters: the size of void structures (e.g., diploic space and cracks) and the isovalue threshold for binarization. This means that the extracted results are not dependent on the user's experience. In addition, the proposed method can extract closed and manifold surfaces of endocranial polygons. This is efficient for various applications, including volume estimation and 3D printing, whereas manual operation requires time-consuming tasks such as hole filling and topological cleaning for generating completely closed manifold surfaces.
In the present study, we implemented the proposed method and applied it to CT images of various types of crania, including fossil crania. The results demonstrate that the proposed method can automatically extract endocranial polygons from CT images. Method Given CT images of crania, the proposed method computes endocranial polygons of the crania. The binarization step classifies the input CT image into voxels representing the cranium (foreground) and background voxels. We use a simple binarization with an appropriate threshold t, although other methods (e.g., automatic estimation by Otsu [23]) can also be used. Next, we apply the cavity detection method proposed in [24], based on a black (bottom) hat operator in mathematical morphology [25,26], in order to remove small cavities in the cranium. The segmentation step classifies background voxels in the binary image into endocast voxels and voxels of other types. The proposed method uses the watershed-based method [22] on the distance field [27] of the binary images computed in the previous step. Given the binary image and its distance field, we first assign the initial seeds for the endocast voxels and other voxels (Fig 1E). Since the endocranial region is the largest empty space in the cranium, the center voxels of the endocast must have large distance values. However, since the voxels with the largest distance values usually exist outside of the cranium, we first assign the "other" label to the edge voxels b_i (orange lines in Fig 1E) so that the center point of the endocast will be the voxel with the maximum distance value. In addition, for each boundary voxel b_i, we also assign the "other" label to its neighboring voxels v_j that satisfy ||b_i − v_j|| ≤ s·d(b_i), where d(b_i) denotes the distance at b_i and s denotes a scaling factor (s < 1) for assigning the label to voxels with larger distance values. Since the remaining voxel with the greatest distance value must be the center of the endocast, we assign the "endocast" label to this voxel e and to nearby voxels v_j that satisfy ||e − v_j|| ≤ s·d(e). Watershed segmentation is then applied to the binary images by expanding the initial seed voxels based on the distance field. When the expansion stops, the background voxels have been decomposed into endocast voxels and other voxels. The final step is polygonization of the endocranial surfaces from the CT images using an extended version of the partial polygonization method with mask images introduced in [21]. Prior to polygonization, we fill voxels with the "other" label using the threshold value t from the binarization step. This manipulation results in closed isosurfaces being obtained by the original Marching cubes algorithm. Results We implemented the above algorithm as a Windows binary in C++. We used the Eigen library [28] for linear algebra computation in our implementation; the other parts were developed from scratch. We also applied the algorithm to various types of crania, as summarized in Table 1. Figs 2 through 7 show the results of the endocast extraction. Although imprints of sulci and gyri are not usually identifiable on endocasts extracted from adult human crania, they are known to be more pronounced in macaques [13]. Our results demonstrated that identification of cortical features from endocast morphology may be possible for macaques (Figs 5 and 6) but not for adult humans (Figs 3 and 4).
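To make the three-step pipeline concrete, the following Python sketch approximates it with scipy and scikit-image (our illustration only; the paper's implementation is a C++ Windows binary, and the sketch simplifies the method by using a morphological closing in place of the black-hat void detection and by omitting the s-scaled seed neighborhoods):

```python
import numpy as np
from scipy import ndimage
from skimage import morphology, segmentation, measure

def extract_endocast(ct, t=0.0, r=6):
    # Step 1: binarization. Bone voxels are foreground; closing with a
    # spherical structuring element fills small voids such as diploic
    # space and cracks (the paper uses a black-hat-based detector).
    bone = morphology.binary_closing(ct > t, morphology.ball(r))

    # Step 2: watershed segmentation of the background, driven by the
    # Euclidean distance field of the empty space.
    dist = ndimage.distance_transform_edt(~bone)
    markers = np.zeros(ct.shape, dtype=np.int32)
    # "Other" seeds on the image border, so the remaining global
    # distance maximum lies inside the cranium.
    markers[0, :, :] = markers[-1, :, :] = 1
    markers[:, 0, :] = markers[:, -1, :] = 1
    markers[:, :, 0] = markers[:, :, -1] = 1
    interior = np.where(markers == 0, dist, 0)
    e = np.unravel_index(np.argmax(interior), interior.shape)
    markers[e] = 2  # "endocast" seed at the deepest interior voxel
    labels = segmentation.watershed(-dist, markers, mask=~bone)

    # Step 3: polygonize only the endocast region with marching cubes.
    verts, faces, _, _ = measure.marching_cubes(
        (labels == 2).astype(np.float32), level=0.5)
    return verts, faces
```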
Discussion In the present study, we developed an automatic method for extracting the endocranial surfaces from CT images, in order to facilitate morphological analyses of fossil endocasts. As shown in Figs 2 through 7, the proposed method can extract only the endocranial surfaces from the CT images of human crania, while other parts are not polygonized. In particular, surface bumps on the endocranial surfaces are well preserved for all examples. This is because the polygonization method used in the present study inherits sub-voxel accuracy from the Marching cubes algorithm. Since other reconstruction methods also use this algorithm for polygonization, the resulting surfaces from other methods should be identical if the same threshold is given. In addition, the surface is guaranteed to be a closed two-manifold surface. These properties enable very efficient quantitative analysis and post-processing, because commonly used geometry processing tools assume that the input shape is manifold. Due to these two properties, the proposed method is easily combined with other geometry processing (e.g., mesh simplification [29]). Note that the proposed method can also handle tilted models. For example, the M15 model (Fig 5) is strongly tilted. Although conventional slice-by-slice contouring is difficult in this case, the proposed method can extract the endocast of such a tilted model because the computations are done in 3D. We compared the results obtained using the proposed method with those obtained by manual operations. Fig 8 shows the results for KUMA3147 obtained by manual operations [15] (Fig 8A) and by the proposed method (Fig 8B). According to [15], the results shown in Fig 8A were created using medical imaging software (Analyze 9.0; Mayo Clinic, Biomedical Imaging Resource, Rochester, MN, USA) and reverse engineering software (RapidForm 2006; INUS Technology, Seoul, Korea). The total working time for polygonization was two hours. Note that the polygonal models from [15] are pose-normalized, and we applied a shape registration method to them for evaluation of the geometric difference. We confirmed that no significant difference could be found in either of the models, except for the filled regions such as the foramen magnum (Fig 8C and 8D). The quantitative difference is very small (0.14 [mm] on average), and the maximum difference appears at the foramen magnum because these holes are filled according to different criteria. Fig 9 shows cross-sections of the CT image and the polygon models. These images show that our segmentation sometimes expands outward around larger holes. This depends on the distance field used in the watershed computation. On the other hand, such expansion can also be observed in the polygonal models produced by manual operation, as shown in Fig 9D. These differences show that no clear criterion for filling these holes exists. However, our method provides a consistent criterion for segmentation, hence any operator can automatically compute equivalent results from CT images. We also compared the present results with those obtained by a level set method using itk-SNAP [30], a popular segmentation tool in medical imaging; the level set method can yield similar results, but to achieve this the initial seed points must be carefully determined, and appropriate determination of the seeds is not easy and usually time-consuming. The proposed method requires two major parameters in order to extract endocast models.
The first parameter is the structural element size, or the radius of the sphere, used in the morphological closing in the binarization step. This is required for filling the small cavities in the cranium, and the extent of the bottlenecks provides a guide for parameter tuning. We used r = 6 [voxels] for all experiments. The other parameter is the isovalue for the endocranial surface. The isovalue is a common parameter for creating polygon data from CT images. The best threshold can easily be estimated with volume rendering software. In addition, the endocranial surface is robust to variation of the isovalue. Fig 11 shows the results for KUMA3008 and KUMA3147 obtained using different CT values, namely -400, 0, and 400. No clear geometrical differences were observed between these results. The geometric differences between these models are approximately 1 voxel pitch of the CT images (0.28 ± 0.37 [mm] (KUMA3008) and 0.23 ± 0.43 [mm] (KUMA3147)). The proposed method can also be applied to non-human primates. Figs 5 and 6 show the results for the crab-eating monkey (Macaca fascicularis) models. These models have smaller endocasts, and automatic initial seed assignment failed in both experiments. Thus, the endocast is not always the largest empty space in the crania of non-human primates. For these examples, we provide an alternative approach to the initial seed assignment. Given the binary image and its distance field, we binarize the distance field with a threshold t̃. The objective of this binarization is to obtain two connected components used for the "other" and "endocast" labels. The guideline for t̃ is to fill all the bottlenecks connecting to the endocast space. We used t̃ = 10 [mm] for the M15 data and t̃ = 15 [mm] for the M16 data. Note that this is not necessary for the extraction of human crania because the endocranial region is the largest empty space in the cranium. The computation times of the experiments are summarized in Table 2. The experiments were conducted using a Windows PC with an Intel Core i7-3930K (3.2 GHz) processor, 64 GB of RAM, and an NVIDIA Quadro 4000 graphics processing unit. Although our implementation has not yet been fully optimized, the computation time was less than ten minutes for all examples. We expect that the computation time can be improved by, for example, making better use of the graphics processing unit. Although the computation time directly depends on the resolution of the CT images, we believe that the computation time for other samples will not exceed those in the experiments because the cranium models are usually scanned using medical CT scanners and the sizes of the CT images must be similar. The proposed method has three major limitations. First, the quality of the polygons largely depends on the results of binarization. Since the CT values of thin parts of the skull will be smaller than expected, it is hard to determine a good threshold, and the binarization results may create unexpected voids. Although the proposed polygonization scheme may fill such defects as-is, the scheme should be improved in the future. The second limitation is how to define the boundary surfaces of canal structures. The last limitation is that the proposed method may fail when the assumption that the endocast is the largest empty space in the CT images does not hold. In such cases, other empty regions may be extracted.
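A minimal sketch of this alternative, distance-threshold seed assignment (our own illustration, with hypothetical variable names; t̃ is assumed to be already converted from mm to voxel units):

```python
import numpy as np
from scipy import ndimage

def seeds_from_distance_threshold(dist, t_tilde_vox):
    # Binarize the distance field at t-tilde; with a suitable threshold
    # this yields exactly two connected components, used as the "other"
    # and "endocast" watershed seed regions.
    labels, n = ndimage.label(dist > t_tilde_vox)
    assert n == 2, "threshold did not isolate exactly two components"
    return labels
```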
Conclusion We have presented a method for computing endocranial surfaces from CT images of crania. One of the primary contributions of the present study is to automate endocast extraction using volumetric image analysis technology. The experimental results revealed that the proposed method could extract endocranial polygons from CT images of human crania within ten minutes using a common desktop PC. We expect that the proposed method will accelerate morphological analyses of fossil crania, such as the analysis of individual differences and inference of brain shapes. The proposed method has the potential to extract other cavity structures in the human body (e.g., sinuses). In the future, we would like to extend the proposed method so that other anatomical features can be extracted. As such, we need to introduce other criteria in order to extract target features. In addition, we would like to address some limitations discussed in the previous section in order to allow accurate extraction of cranial surfaces.
2018-04-03T02:37:30.072Z
2017-04-13T00:00:00.000
{ "year": 2017, "sha1": "e08c2a77125df79a39f6a7667c428ee7960c7727", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0168516&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e08c2a77125df79a39f6a7667c428ee7960c7727", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
234081159
pes2o/s2orc
v3-fos-license
Electronic Contract Ledger System Based on Blockchain Technology The rapid development of the information Internet has promoted the development of electronic contracts, but user identities can be stolen and electronic contracts are easily tampered with, seriously affecting the fairness and security of online electronic transactions. For the parties to a transaction, how to confirm that the data identity of the transaction subject has not been misused and that the transaction's electronic contract has not been tampered with is the key problem that needs to be solved first. This paper designs an electronic contract platform by combining user identity authentication, encrypted transmission of electronic contracts, and blockchain ledger storage. The platform ensures the uniqueness of user identities, the integrity and tamper-resistance of transmitted contract data, and the traceability of the signing process, thereby achieving security and fairness. Introduction With the development and popularization of information technology, electronic contracts have gradually replaced paper-based written contracts in the process of enterprise resource management and play an increasingly important role. According to the relevant provisions of China's "Contract Law" and "Electronic Signature Law", an electronic contract mainly refers to an agreement between two or more transaction parties, in the form of electronic data, regarding the establishment, alteration, and termination of property civil rights and obligations. Compared with traditional paper contracts, the time, place, and form of signatures of electronic contracts have changed, which brings shocks and challenges to the legal system. William C. Maloney and Mark A. Singleton [1] describe a system for controlling, real-time logging, and archiving complex commercial transactions such as the purchase and financing of an automobile. J. Reed Smith et al. [2] examine a model of supply chain contracting with a purchaser that desires to acquire as much of a product as possible at a low price. The above literature has studied the legal validity of electronic contracts, the time and place of signing, and the supervision of electronic contracts. However, the contracting parties of an electronic contract are all represented by data identities. Because of the transmission of electronic data, user fraud and contract modification challenge the authenticity and legality of the electronic contract, which seriously affects the fairness and security of online transactions. Therefore, when signing an electronic contract, it is necessary to protect the interests of both parties fairly; preventing traders from denial and counterfeiting, and preventing data from being tampered with, is of great significance. The Problems Faced by Electronic Contracts Compared with paper contracts, electronic contracts have undergone major changes in the signing environment and signing methods. Online electronic contract transactions face the following three problems. The Identity of the Contracted User Can Be Used Fraudulently In the process of concluding a traditional paper contract, the contracted user determines the rights and obligations of both parties by signing or stamping the written contract in person. Electronic contract offers and commitments are transmitted in the form of electronic data, and their establishment, change, and termination do not require paper written forms.
If the contractor has only the corresponding user data (user name + password) on the Internet, intent is expressed through "user name + password". When the contractor unintentionally discloses this information, or a hacker obtains it through a database collision (credential-stuffing) attack, user information can be illegally stolen and modified. In order to strengthen the identification of the user's identity, a combination of a USB Key and the user's password is currently used: the digital certificate and its corresponding private key are stored in the security area of the USB Key chip, which ensures the information security of the user's identity to a certain degree. However, Chen et al. [3] pointed out that a variety of attacks against USB hardware interfaces have appeared, causing related systems to face serious security risks. The security chip, an important part of the USB Key, can be attacked by an intruder using semi-invasive attacks, differential power attacks, laser attacks, and other attack methods [4], making it possible to crack the digital certificates and corresponding private keys it stores; there is therefore a risk that the USB Key will be cracked and copied. Intruders can then use the identity of the contracted user to perform abnormal signing operations, resulting in signing risks and disputes. Electronic Contracts Are Easily Tampered With Electronic contracts are usually represented as data messages; through electronic data interchange (EDI) over computer networks, standard Internet protocols are used to electronically transmit and conclude the rights and obligations of both parties. The carrier for concluding electronic contracts differs from that of traditional contracts: unlike traditional paper contracts, electronic contract information is stored electronically in a carrier such as a computer or disk, and its modification, transmission, and storage are all carried out in the computer. Electronic data has the characteristics of intangibility and easy modification. In order to ensure the safe transmission of electronic contract data, the common encryption method is the symmetric encryption algorithm, which has the advantages of high efficiency, high performance, and flexibility. When a symmetric encryption algorithm is used to encrypt and transmit the electronic contract information, however, the symmetric encryption key may be cracked. When the key is cracked, the content of the transmitted electronic contract data can be intercepted and tampered with by the attacker, which will undoubtedly cause damage to the legitimate rights and interests of the transaction subject. Evidence of Electronic Contracts With the transformation of the contract from paper to electronic form in the process of enterprise resource management, the signing process has become more efficient and faster. At present, most third-party electronic contract platforms provide electronic contract signing services for both contract parties. The digital watermark technology and electronic signature technology they use lack the supporting management of regulatory agencies, and it is difficult for third-party platforms to effectively guarantee the integrity and reliability of electronic contracts. The parties may have disputes over the content of the electronic contract. Unlike the proof of traditional contracts, the so-called "electronic contract" in the form of ordinary data messages needs to serve as judicial evidence.
To become judicial evidence, the data message needs to be accompanied by a notary agency for evidence collection, custody, and identification in accordance with judicial regulations. Determining the security of the electronic contract signing environment and the time and credibility of the establishment of the electronic contract is a cumbersome and costly process, and the result of the proof may not be approved by the court, which affects the validity of the electronic contract as evidence [5]. In summary, the identity fraud of electronic contract users, the falsification of electronic contracts, and the complexity of electronic contract forensics have restricted the widespread application of electronic contracts. For both parties to an online transaction, how to ensure that the identity of the transaction subject is true and credible, that the electronic contract is true and complete, and that the electronic contract can be easily verified has become a key issue that needs to be resolved. This article combines identity authentication technology, reliable digital signature technology, and blockchain ledger technology to design and implement an electronic contract service platform. The platform realizes the certainty of the trader's identity, the non-repudiation of sent information, the confidentiality of information transmission, the integrity of data exchange, and fast querying of the complete evidence chain, thereby ensuring the security and fairness of the platform. Basic Model of Electronic Contract Platform Under the premise of ensuring strong user identity authentication and electronic contract verification services, a basic model of the electronic contract platform is given. This model describes the overall process for users of the platform, as shown in Figure 1. Figure 1. Basic model of electronic contract platform. As a service platform for signing electronic contracts, its main users are contracted users and verifiers. A contracted user is any transaction party related to the contract, who completes the uploading and signing operations of the electronic contract through the transmission interface. The verifiers provide electronic contract verification and notarization services based on the signing-process data stored in the block ledger subsystem. The contracted user must perform identity authentication when logging in to the platform, verifying that the account is being used by its owner, which effectively prevents fraudulent use of the contracted user's identity. The electronic contract is encrypted with digital signature technology to ensure the security of its transmission. The block ledger subsystem records only the process information of the electronic contract signing and does not involve the content of the electronic contract, so as to preserve the confidentiality of the electronic contract. In addition, the platform also includes system administrators, who mainly perform daily monitoring and maintenance of the system. Design Principles of the Electronic Contract Platform In order to ensure the fairness and security of the electronic contract platform, and to address the problems of contracted-user identity fraud, contract tampering, and contract verification, this paper builds a signing platform based on a combined software-and-hardware identity authentication subsystem, encrypted transmission of electronic contracts, and a block ledger.
The specific design principle is shown in Figure 2. Identity Authentication Subsystem. The electronic contract signing method has changed from traditional face-to-face signing to online signing, and "user name + password" has become the online identification credential of the user's identity. When this credential is intercepted by hackers or unintentionally disclosed by the user, a fraudulent identity can be used to sign the electronic contract, and the resulting contract disputes undoubtedly undermine the usefulness of the contracting platform. Therefore, an identity authentication subsystem based on the FIDO protocol and a physically unclonable function (PUF) is designed as the second factor of user identity authentication. After the "password" authentication is passed, a second authentication of the authentication token is performed, thereby effectively ensuring the unique correspondence between the data user and the physical user. The identity authentication subsystem uses a combination of software and hardware authentication: the interaction between the hardware and the server is developed based on the FIDO U2F protocol with the Chinese national cryptographic algorithms (SM2/SM3/SM4), and the physically unclonable function (PUF) [6] integrated into the authentication token fully guarantees that the token cannot be copied. It is mainly divided into two parts: a user registration protocol and a user authentication protocol. The chip of the hardware device is a national cryptographic security chip, which guarantees that the hardware data information cannot be cracked. At the same time, in order to enhance the hardware device's resistance to replication, the token supports physically unclonable functions to ensure the uniqueness of the user's device. The technical principle of a PUF is to exploit the random physical differences introduced into the integrated circuit of the hardware chip by process variation during manufacturing. The PUF response signal is automatically generated when the device is powered on, and automatically vanishes when the device is powered off. Using the physically unclonable function algorithm, the unique digital fingerprint of the hardware chip is extracted at power-on, and the extracted encryption information (root key) is used to encrypt the signature key or decrypt the key handle. Because the physical characteristics of the chip cannot be copied, the root key is generated only at power-on, and the authentication token does not store the signature public key, key handle, or root key, the non-copyability of the token is fundamentally guaranteed. Electronic Contracts Subsystem. Electronic contracts are transmitted using digital signature technology to prevent them from being intercepted and tampered with during transmission. Because contracts can be long, hashing is used to compress the files to be transmitted into fixed-length hash values before encryption. At present, the most commonly used hash functions are SHA-1 and MD5, which produce digests of 128 bits or longer. A hash function maps a variable-length string to a fixed-length hash value; even a small change in the string produces a different hash value. A hash function can also be used to associate a search term with an index value to generate a hash table that is easy to search. The hash function is a one-way, irreversible calculation.
From a fixed-length hash value, it is essentially impossible to recover the original input. Hashing therefore ensures the integrity of document information, and comparing hash values can effectively detect whether an electronic file has been tampered with. This platform uses a mixed RSA and AES encryption system [7] to transmit interface data, and uses the SHA-256 hash algorithm to verify the integrity of electronic contracts, effectively ensuring that electronic contracts are not tampered with and achieving secure data transmission. The process is shown in Figure 3. The signing sender User1 processes the electronic contract information F with the SHA-256 algorithm to obtain a hash value H1. The symmetric key E is generated according to a predetermined AES random algorithm or random number table. The efficient symmetric cipher E is then used to encrypt the electronic contract F to obtain F1. In order to ensure the security of the AES key E and the hash H1, the received RSA public key is used to encrypt E and H1, yielding the encrypted information E1 and S, respectively. The receiver User2 uses the RSA private key to decrypt the received information, recovering E and H1. He then decrypts F1 to obtain the electronic contract F, and uses the SHA-256 algorithm to calculate a new hash value H2 of the electronic contract. If H1 and H2 are consistent, the electronic contract was transmitted securely; otherwise, the system reports that the electronic contract has been tampered with. After the electronic contract is signed by both parties, the block ledger server digitally signs the electronic contract, uses the SHA-256 algorithm to generate a new hash value, and, together with the basic information of the contracting parties, generates block data and distributes it to the block ledger database. The digitally signed electronic contract is distributed to the business database. Block Ledger Subsystem. An electronic contract is an agreement between two or more transaction parties, in the form of electronic data, regarding the establishment, alteration, and termination of property civil rights and obligations; it has the same legal effect as a paper contract. If the contracting parties have contract disputes that cannot be resolved through negotiation, contract verification and litigation may be required. We store the electronic contract process data as blocks, and the blocks are linked into a blockchain; the block body stores the original data. The block ledger includes data such as block headers, contract transaction details, transaction counters, and block size, as shown in Figure 4. The block header contains all information except the contract transaction information. It mainly includes:
• the previous block header hash value, used to ensure that the blocks are connected in sequence;
• a timestamp, recording the generation time of the block;
• a random array, whose size and generation algorithm are defined in advance;
• a transaction counter, recording the number of transactions contained in the block;
• the block size, recording the size of each block of data; each block is currently limited to less than 1 MB.
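A minimal sketch of such a header and its chained hashing (our own illustration; field and function names are hypothetical, and the paper does not specify a serialization format):

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class BlockHeader:
    prev_hash: str    # hash of the previous block header, linking the chain
    timestamp: float  # block generation time
    nonce: int        # element of the pre-defined "random array" scheme
    tx_count: int     # transaction counter for this block
    block_size: int   # size of the block data, kept under 1 MB

def header_hash(header: BlockHeader) -> str:
    # Serialize deterministically and hash with SHA-256; because each
    # header embeds the previous header's hash, altering one block
    # invalidates every block that follows it.
    payload = json.dumps(header.__dict__, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()
```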
The main body of the block consists of the contract transaction details, which record key information such as the transaction participants, transaction time, digital signature, and contract hash value, used to verify the authenticity and integrity of the contract and that it has not been tampered with. Conclusion This article proposes an electronic contract platform based on block ledgers, which ensures the fairness and security of the platform through user identity authentication, data transmission encryption, and block storage during the signing process. The platform introduces the blockchain storage format and underlying cryptographic technology to form each day's contract transactions into a block, and uses signature information to link the blocks into a chain. Changing the data of one block therefore necessarily affects the entire chain behind it: to tamper with a block, an attacker must not only obtain decryption authority on the encryption machine and operation authority on the database, but must also spend a great deal of time reconstructing the entire chain. At the same time, identity authentication is strengthened through a second identity factor, which effectively guarantees that contracted users cannot be counterfeited or replicated. Data encryption ensures the secure transmission of the electronic contract over the network. The block ledger guarantees that the signing process can be traced and the electronic contract can be verified. Together, these provide a convenient way for enterprises to transact on the Internet.
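As a final illustration, the hybrid encryption and integrity check at the heart of the contracts subsystem can be sketched as follows (our illustration using the Python "cryptography" package; the key sizes, the AES-GCM mode, and all variable names are assumptions, since the paper specifies only "RSA and AES mixed encryption" with SHA-256):

```python
import hashlib
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sender (User1) side.
contract = b"electronic contract F"                    # contract file F
h1 = hashlib.sha256(contract).digest()                 # hash value H1
aes_key = AESGCM.generate_key(bit_length=256)          # symmetric key E
nonce = os.urandom(12)
f1 = AESGCM(aes_key).encrypt(nonce, contract, None)    # ciphertext F1

# Receiver (User2) key pair; the public key encrypts E and H1.
user2_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
e1 = user2_key.public_key().encrypt(aes_key, oaep)     # encrypted key E1
s = user2_key.public_key().encrypt(h1, oaep)           # encrypted hash S

# Receiver (User2) side: recover E and H1, decrypt F1, verify H2 == H1.
aes_key_rx = user2_key.decrypt(e1, oaep)
h1_rx = user2_key.decrypt(s, oaep)
contract_rx = AESGCM(aes_key_rx).decrypt(nonce, f1, None)
assert hashlib.sha256(contract_rx).digest() == h1_rx   # integrity holds
```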
2021-05-10T00:03:43.065Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "45ac678f974d2205d37b13ec5e7693cefa02f1a1", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1828/1/012112/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "16fa5e39b27fcc2753b565e5810866666136c7a8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Business" ] }
237872414
pes2o/s2orc
v3-fos-license
SOCIOECONOMIC DRIVERS OF LAND USE INTENSIFICATION IN FIJI ISLANDS: A GEOGRAPHICAL APPROACH Shifting cultivation is a common agricultural practice in the Pacific Islands that is rarely sustainable today, since fallow periods are ever shorter due to demographic growth, farm fragmentation, uncertain land tenure, and pressures from the market economy, among other factors (drivers). Official statistical data and maps were utilized to build choropleth maps indicating the areas of high land use intensity (LUI) according to farm size ranges and socioeconomic parameters (treatments) for the country. Twenty vector layers were digitized from published maps for eight ranges of farm sizes (from less than 1 to more than 100 ha) and converted to raster format with a 170 m pixel size. Critical maps were then built by Boolean operations, displaying areas in which both the land use and the socioeconomic driver were simultaneously ranked as high or very high. Treatments showed significant differences among them (p<0.05), with those related to human demography being the most influential. In farms smaller than 3 ha, land use is intense when (in order of importance) Indo-Fijian population, household size, and land availability values are high; in farms of 20-50 ha, it is intense when the values of (in order of importance) population change, Indo-Fijian population, land availability, fishing, and sugar farming are also high. LUI patterns normally decrease as farm size increases, but increase again on farms over 20 ha. It is recommended to propose policies that will decelerate the rates of land use intensification, such as facilitating land ownership of larger farms, gradually replacing monocropping with agroforestry systems, and creating more employment opportunities in the industry, tourism, and services sectors. INTRODUCTION Deforestation and forest degradation are both critical environmental problems with serious long-term economic, social, and ecological consequences. In many tropical countries the rates of deforestation have been based on estimates or surrogate data, rather than on empirical studies (FRA 2000, Boroffice 2006); the varied and complex causes (drivers) interplay in a synergetic way (Megevand 2013). Subsistence farmers and local communities are the most relevant agents of deforestation in the ways they respond to external pressures and incentives (Hoffmanna et al 2018), depending on the region (Mas and Cueva 2015). Shifting cultivation is still an extensive strategy in Oceania (Roos et al. 2016), in which relatively short periods of continuous cultivation are followed by relatively long periods of fallow (FAO 1982). Short fallows trigger yield declines (Kafle 2011). After shifting cultivation, both the fallow age and land use intensity influence the recovery of native tree diversity (Mukul 2015); forest degradation and the resulting rural poverty worsen in a cycle when there are few economic alternatives, unstable or low market prices, no incentives for innovation, and successive subdivision of land at the death of the owner (Chayanov 1966) (figure 1). Patterns of crop planting are determined by variations in rainfall. Mean monthly temperature ranges from 23°C in July and August to 27°C in January. The southeastern shorelines of the big islands receive 3,000 to 5,000 mm of rain per year (FMS 2015).
The distribution of the major commercial crops in Fiji is not determined primarily by the physical environment: coconuts and bananas are more related to the absence of alternative cash crops, and rice farms depend on the distribution of Indian farmers rather than on the particular suitability of the soil and climate of the producing areas (Walsh and Crosbie 2006). Only 16% of the land in Fiji is used for arable farming, in valleys, river deltas, and coastal plains. Eighty-four percent of the land is held under customary ownership (38% of it leased), only about 8% of the total land area is freehold, and 3.8% belongs to the State (Walsh and Crosbie 2006). Land-use suitability analysis identifies the most appropriate spatial pattern for future land uses according to specific requirements, preferences, or predictors of some activity (Hopkins, 1977; Collins et al., 2001). Overlay analysis is a common method for understanding spatial interaction from more than two pieces of spatial information (Miyazaki and Fujii 2011), in which the Boolean intersection classifies areas as suitable for a particular land use if every suitability map meets its threshold, while the Boolean union requires at least one suitability threshold to be met (Malczewski 2004). The hypotheses in this paper are, first, that there are significant differences between the impacts of drivers on the intensification of land use, and second, that there are significant differences among the land use intensity index values of land holdings of different sizes in the country. MATERIALS AND METHODS The statistical and geographical data used were maps from the Fiji Encyclopaedic Atlas produced in ArcMap (Walsh and Crosbie 2006) and the 2009 national agricultural census, which includes data from 1970 and 1990. Sixteen maps were selected, scanned, imported into ILWIS Open (Ilwis 2020), and georeferenced with the WGS84 projection and corner coordinates 15°43'31.29"S, 176°29'04.38"E (top left) and 19°28'03.47"S, 178°25'58.51"W (bottom right), with a 5.6-second pixel size. Tikina (district) boundaries were digitized, converted into polygons, and rasterized. Twenty vector layers were made for eight ranges of farm sizes, with a 170 m pixel size. They were recategorized into very low, low, intermediate, high, and very high ranks. A land use intensity index (LUI) was calculated as LUI = total crops area / (total crops area + fallows area). Eight maps showing land use intensity per province according to farm size range (less than 1 ha, 1-3 ha, 3-5 ha, 5-10 ha, 10-20 ha, 20-50 ha, 50-100 ha, and over 100 ha) were produced. To answer the question of how the socioeconomic drivers (maps) relate to land use intensity, critical maps were built displaying areas in which both the land use and the socioeconomic driver were simultaneously ranked as high or very high, according to the following script: Critical map = IFF ((('LUI map' = "very high") OR ('LUI map' = "high")) AND (('Land available map' = "very high") OR ('Land available map' = "high")), "related", "unrelated"). The script was run 160 times to produce 160 critical maps. Their pixel counts were tabulated, statistically tested, and interpreted. RESULTS AND DISCUSSION Samples of the socioeconomic drivers of shifting cultivation are displayed in Figures 3 to 5. They were georeferenced and converted to raster format for map calculation with the described script.
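The LUI index and the Boolean overlay behind the ILWIS script above can be illustrated with a few lines of Python (our sketch with invented toy values; the study itself ran the equivalent raster calculation in ILWIS):

```python
import numpy as np

def lui(total_crops_area, fallows_area):
    # Land use intensity index, as defined in the text.
    return total_crops_area / (total_crops_area + fallows_area)

# Ranked raster layers (toy values): 0 = very low ... 4 = very high.
lui_rank = np.array([[4, 2],
                     [3, 4]])
driver_rank = np.array([[3, 4],
                        [1, 4]])

HIGH = 3  # ranks "high" (3) and "very high" (4) count as critical
critical = (lui_rank >= HIGH) & (driver_rank >= HIGH)  # Boolean overlay
n_critical_pixels = int(critical.sum())  # tabulated per driver and farm size
```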
Farms between 3-50 ha are mostly covered by forests (natural or planted), grasslands, or fallows of over one year; shorter fallows are common on smaller farms. Land use is intense on farms under 3 ha, and very intense when they are smaller than a hectare. Land use intensification diminishes on parcels from 3 to 50 ha, and intensifies again when farms are over 50 ha. In farms of less than 3 ha, land use is intense when (drivers in order of importance from high to low) Indo-Fijian population, household size, and land availability values are high. In farms of 3-10 ha, land use is intense when the values of (drivers in order of importance from high to low) household size, subsistence employment, coconut farming, land availability, Fijian population, and population change are high. In farms of 10-20 ha, land use is intense when the values of (drivers in order of importance from high to low) household size, population change, subsistence employment, and Fijian population are high. In farms of 50-100 ha, land use is intense when the values of (drivers in order of importance from high to low) higher education, fishing, forestry, in-migration, population density, and population distribution are also high. In farms of 20-50 ha, land use is intense when the values of (drivers in order of importance from high to low) population change, Indo-Fijian population, land availability, fishing, and sugar farming are also high. Figure 18. Total pixels of areas with high or very high LUI values per socioeconomic parameter. *Means that share the same letter are not significantly different (p<0.05). Figure 19. Total pixels of areas with high or very high LUI values per socioeconomic parameter. *Means that share the same letter are not significantly different (p<0.05). CONCLUSIONS Results showed significant differences among treatments (p<0.05), with factors related to human demography being the most influential. In farms of less than 3 ha, land use is intense when (in order of importance) Indo-Fijian population, household size, and land availability values are high; in farms of 20-50 ha, land use is intense when the values of (in order of importance) population change, Indo-Fijian population, land availability, fishing, and sugar farming are also high. LUI patterns normally decrease with increasing farm size, but increase again on farms over 20 ha. It is recommended to reformulate policies so as to decelerate the rates of land use intensification, such as facilitating land ownership of larger farms, gradually replacing monocropping with agroforestry systems, and creating more employment opportunities in the industry, tourism, and services sectors of the country.
2021-09-01T15:05:42.609Z
2021-06-29T00:00:00.000
{ "year": 2021, "sha1": "3e8792f29fa51847837b65b9ea3b04c701b390cc", "oa_license": "CCBY", "oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B3-2021/837/2021/isprs-archives-XLIII-B3-2021-837-2021.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9132fe6dc438e602bae6cd00757cc396eac5f924", "s2fieldsofstudy": [ "Geography", "Economics", "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
196671440
pes2o/s2orc
v3-fos-license
Investigation of the Antioxidant, α-Glucosidase Inhibitory, Anti-inflammatory, and DNA Protective Properties of Vaccinium arctostaphylos L. Objectives: The scope of this study was to investigate the total phenolic, anthocyanin, and flavonoid contents and the biological properties of ethanol extract (EE), methanol extract (ME), and aqueous extract (AE) from Vaccinium arctostaphylos L. Materials and Methods: EE, ME, and AE of V. arctostaphylos were prepared. Various biological activities of these extracts were studied: total phenolic, anthocyanin, and flavonoid contents, and antioxidant (2,2'-diphenyl-1-picrylhydrazyl, ferrous ion-chelating, and ferric reducing antioxidant power assays), α-glucosidase inhibitory, anti-inflammatory, and DNA protective properties. Results: EE exhibited the highest total phenolic, anthocyanin, and flavonoid contents with 44.42±1.22 mg gallic acid equivalents/g dry weight, 8.46±0.49 mg cyanidin-3-glucoside equivalents/g dry weight, and 9.22±0.92 mg quercetin equivalents/g dry weight, respectively. The antioxidant activities of the extracts followed the order EE>ME>AE. EE and ME inhibited the α-glucosidase enzyme, with IC50 values of 0.301±0.002 mg/mL and 0.477±0.003 mg/mL, respectively. In addition, EE and ME were determined to be noncompetitive inhibitors with inhibition constant (Ki) values of 0.48±0.02 mg/mL and 0.46±0.01 mg/mL, respectively. EE at doses of 100 and 300 mg/kg caused a significant reduction in formalin-induced edema in mice, demonstrating the anti-inflammatory effect of EE. In DNA protective studies, all of the extracts protected supercoiled plasmid pBR322 DNA against damage caused by Fenton's reagent, owing to their radical scavenging activities. Conclusion: Our results demonstrated that the EE of V. arctostaphylos L. had strong antioxidant, anti-inflammatory, α-glucosidase inhibitory, and DNA protective effects, suggesting that it might be an effective medicinal plant to prevent or treat diseases associated with oxidative damage and inflammation. INTRODUCTION Medicinal plants containing secondary metabolites such as phenolic, anthocyanin, and flavonoid compounds have been used as alternative therapeutic tools to treat many diseases throughout medical history. 1 Many plants are considered able to scavenge and hinder free radicals, including reactive oxygen species (ROS) such as the hydroxyl radical (OH·), hydrogen peroxide (H2O2), and the superoxide anion radical (O2·−), which induce oxidative damage in biomolecules, because their secondary metabolites possess antioxidant activity. 2 In addition, plant-based natural antioxidants are preferred to synthetic ones due to their good safety profiles. 3 Therefore, there is growing interest in finding natural compounds that could prevent the oxidative damage underlying the pathogenesis of many diseases. The genus Vaccinium belongs to the family Ericaceae; it includes approximately 450 species distributed in the Northern Hemisphere and the tropical mountains of America and Asia. 4,5 Numerous studies have reported that Vaccinium possesses several biological and pharmacological activities, making it an attractive medicinal plant. 6 Previous studies reported that Vaccinium species have been used for memory improvement, eyesight protection, and cardiovascular protection, and for their antioxidant, antidiabetic, and anticancer activities.
[7][8][9][10] Vaccinium arctostaphylos L., commonly named the Caucasian whortleberry, is a member of the genus Vaccinium and is widely used as an antidiabetic and antihypertensive agent.11,12 To date, this plant has been reported to contain phenolic compounds such as anthocyanins, flavanols, and procyanidins that are responsible for numerous biological activities, such as reducing serum glucose concentration and improving the lipid profile, as well as antioxidant and urinary antiseptic activities.12,13 Ayaz et al. reported that delphinidin, petunidin, and malvidin were the most predominant anthocyanins of V. arctostaphylos L. fruits, while caffeic acid and p-coumaric acid were the major phenolic compounds.14,15

Diabetes mellitus (DM) is one of the most prevalent metabolic disorders, characterized by hyperglycemia triggered by inherited or acquired defects in insulin formation or by insulin resistance.16,17 According to the International Diabetes Federation, 425 million people are living with DM; this number is expected to increase to approximately 629 million by 2045. In addition, 352 million adults are at risk of developing DM.18 α-Glucosidase (EC 3.2.1.20) catalyzes the cleavage of glycosidic bonds in oligosaccharides, releasing α-glucose and resulting in postprandial hyperglycemia.19 Thus, an α-glucosidase inhibitor could be useful to treat obesity and DM. Commercial α-glucosidase inhibitors such as acarbose, voglibose, and miglitol are currently used against DM, but many adverse effects have been observed, such as abdominal pain, renal tumors, hepatic injury, diarrhea, and flatulence.20 Therefore, scientists seek novel natural α-glucosidase inhibitors against DM. To the best of our knowledge, there is no report on the kinetics of α-glucosidase inhibition or on the anti-inflammatory and DNA protective properties of V. arctostaphylos. The goal of the present study was to evaluate the antioxidant, anti-inflammatory, α-glucosidase inhibitory, and DNA protective properties of ethanol extract (EE), methanol extract (ME), and aqueous extract (AE) of V. arctostaphylos L. from Turkey.

EXPERIMENTAL

Plant material and sample preparation

V. arctostaphylos fruits were collected from Uzungöl, Trabzon, Turkey, in August 2013 and identified by Prof. Kamil Coşkunçelebi. The fruits were dried at room temperature for 2 weeks and the dried samples were pulverized using an automatic herbal grinder. The pulverized fruits were then extracted with solvent (ethanol, methanol, or water) in a shaker for 6 h, three times. After shaking, the mixtures were filtered with Whatman filter paper No. 1. The solvent was evaporated under reduced pressure using a Heidolph Hei-VAP rotary evaporator. The extracts were kept at +4°C until further use.21

Total phenolic content

The total phenolic content of the extracts was evaluated using the Folin-Ciocalteu reagent method described by Keser. The calibration curve was obtained with gallic acid (GA) and the results were expressed as mg gallic acid equivalents (GAE) per g dry weight of the sample.22

Total anthocyanin content

The total anthocyanin content of the extracts was determined with the pH differential absorbance method, as described by Cheng and Breen, and expressed as mg cyanidin-3-glucoside equivalents (CGE) per g dry weight of the fruit.23

Total flavonoid content

The total flavonoid content of the extracts was investigated using an Al(NO3)3 assay and expressed as mg quercetin equivalents (QEE) per g dry weight of the sample.24
Antioxidant activities

2,2-Diphenyl-1-picrylhydrazyl radical scavenging assay

The 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activities of the extracts were investigated using the method described by Blois, and the inhibition percentage was calculated using Formula 1:25

Inhibition (%) = [(A_control − A_extract) / A_control] × 100 (Formula 1)

where A_control is the absorbance of the reaction without extract and A_extract is the absorbance with extract at various concentrations. SC50 values represent the concentration of extract that caused 50% inhibition of radical formation. GA was used as a positive control.

Ferrous ion-chelating assay

The ferrous ion-chelating activity of the extracts was investigated using Chua et al.'s26 method, and the ferrous ion-chelating capacities were calculated using Formula 1.

Ferric reducing antioxidant power assay

The ferric reducing antioxidant power (FRAP) effects of the extracts were evaluated using the method described by Oyaizu and expressed as butylated hydroxyanisole equivalents (BHAE) per g dry weight of the sample.27

α-Glucosidase inhibition assay

The α-glucosidase inhibitory properties were examined according to a previous study with a slight modification.28 In the present study, the extracts and 0.5 U/mL α-glucosidase enzyme were mixed in a 96-well microplate and left to react for 10 min. After that, 5 mM 4-pNPG was added and the reaction mixture was incubated for 10 min. The absorbance was measured at 405 nm using a 96-well microplate reader. Acarbose was used as a standard reference. The percentage of α-glucosidase inhibition was calculated in the same relative form as Formula 1, where A_control is the activity of the enzyme without extract and A_extract is the activity of the enzyme with extract at various concentrations.

Kinetic analysis of α-glucosidase inhibition

In order to investigate the inhibition type and inhibition constant (Ki) values of the extracts, Lineweaver-Burk and Dixon plots were used for the α-glucosidase enzyme.29 The kinetic analysis was conducted with various 4-pNPG concentrations in the absence and presence of extracts.30
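Since Formula 1 only defines a relative absorbance ratio, a short numeric sketch may help. The following Python snippet, using hypothetical absorbance readings rather than data from this study, computes the inhibition percentage and estimates an SC50 by linear interpolation, which is one plausible way to derive the SC50 values reported below.

```python
import numpy as np

def inhibition_percent(a_control: float, a_extract: float) -> float:
    """Percent inhibition relative to the control absorbance (Formula 1)."""
    return (a_control - a_extract) / a_control * 100.0

def sc50(concentrations, inhibitions):
    """Estimate the SC50 (concentration giving 50% inhibition) by linear
    interpolation between the measured points that bracket 50%."""
    c = np.asarray(concentrations, dtype=float)
    i = np.asarray(inhibitions, dtype=float)
    order = np.argsort(i)            # np.interp requires ascending x values
    return float(np.interp(50.0, i[order], c[order]))

# Hypothetical DPPH readings for one extract at four concentrations (mg/mL)
conc = [0.05, 0.1, 0.2, 0.4]
inh = [inhibition_percent(0.90, a) for a in (0.72, 0.52, 0.31, 0.12)]
print(f"SC50 ~= {sc50(conc, inh):.3f} mg/mL")
```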
DNA protective properties

The DNA protective properties of extracts of V. arctostaphylos fruits against oxidative damage caused by •OH were monitored by conversion of supercoiled plasmid pBR322 DNA to the open circular form, as described by Yeung et al.31 In the present study, the total volume of the mixture was 10 μL, containing Tris-HCl buffer (pH 7.0), supercoiled plasmid pBR322 DNA, 1 mM FeSO4, 2% H2O2, and various concentrations of extracts (0.125, 0.25, and 0.5 mg/mL). The mixtures were incubated at 37°C for 1 h. After incubation, loading buffer (bromophenol blue, glycerol, SDS, and xylene cyanol) was added to the mixture. The mixtures were loaded on agarose gel and electrophoresis was performed at 100 V for 90 min using the wide Mini-Sub Cell GT system from Bio-Rad. The results were visualized with the Bio-Rad Gel Doc XR system.32

In vivo anti-inflammatory activity

Animals

The male Balb/c mice (25-35 g; n=24) used in this study were kept in temperature-controlled (24±1°C) rooms with food and water given ad libitum. They were allowed to acclimatize to the laboratory conditions for 1 week. The experiments were carried out between 9 am and 4 pm. The experimental protocol was approved by the Institutional Animal Ethical Committee of Karadeniz Technical University (2017/45).

Formalin-induced hind paw edema

The anti-inflammatory activity of EE was evaluated by formalin-induced edema. The mice were divided into the following 4 groups with 6 mice in each group: 1) control (saline, 10 mL/kg p.o.), 2) diclofenac (10 mg/kg, i.p.), 3) EE 100 mg/kg p.o., and 4) EE 300 mg/kg p.o. Extract was administered orally to the mice for three consecutive days. Then, 60 min after the last dose of extract and 30 min after administration of diclofenac and saline, 20 μL of 1% formalin (in 0.9% saline) solution was injected into the dorsal surface of the right hind paws of the animals to form edema. Edema was expressed as the increment in paw thickness and was measured with a micrometer caliper 30 min before and 30, 60, and 120 min after the formalin injection.33

Statistical analysis

The data were analyzed using GraphPad Prism 5.0 and Microsoft Excel on Windows 10. In vitro tests were performed in triplicate and the data were expressed as the mean ± standard deviation. Statistical analysis was performed with two-way analysis of variance followed by Bonferroni tests. P<0.05 was considered statistically significant.34
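To make the statistical workflow concrete, here is a minimal Python sketch of a two-way ANOVA followed by Bonferroni-corrected comparisons, mirroring the analysis just described. The group means, variability, and all helper names are illustrative assumptions, not the study's data (the authors used GraphPad Prism).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group_means = {"control": 0.80, "diclofenac": 0.40, "EE100": 0.55, "EE300": 0.50}
rows = [{"group": g, "time_min": t,
         # hypothetical paw-thickness increment (mm); control swells the most
         "thickness": rng.normal(mu, 0.05)}
        for g, mu in group_means.items()
        for t in (30, 60, 120)
        for _ in range(6)]                       # 6 mice per group, as above
df = pd.DataFrame(rows)

# Two-way ANOVA: treatment group x measurement time
model = ols("thickness ~ C(group) * C(time_min)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-corrected pairwise t-tests of each treatment against control
alpha_corrected = 0.05 / 3
control = df.loc[df.group == "control", "thickness"]
for g in ("diclofenac", "EE100", "EE300"):
    t_stat, p = ttest_ind(control, df.loc[df.group == g, "thickness"])
    print(f"{g}: p={p:.2e} ({'significant' if p < alpha_corrected else 'n.s.'})")
```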
RESULTS

Determination of total phenolic, anthocyanin, and flavonoid contents

The total phenolic, total anthocyanin, and total flavonoid contents of the extracts are shown in Table 1. EE had the highest total phenolic, anthocyanin, and flavonoid contents, with 44.42±1.22 mg GAE/g dry weight, 8.46±0.49 mg CGE/g dry weight, and 9.22±0.92 mg QEE/g dry weight, respectively. In addition, ME had higher total phenolic, anthocyanin, and flavonoid contents than AE, by about 1.63-, 1.40-, and 5.57-fold, respectively.

Evaluation of antioxidant activity

The SC50 values of the DPPH radical scavenging and metal chelating activities of the extracts are presented in Table 2. All extracts demonstrated scavenging activities against the DPPH radical in a concentration-dependent manner. The DPPH radical scavenging assay showed that EE had significant antioxidant activity, with an SC50 value of 0.141±0.009 mg/mL.

Enzyme inhibition and kinetic analysis of α-glucosidase inhibition

The α-glucosidase inhibitory effects of the extracts were evaluated using the da Silva Pinto method, with acarbose as a standard reference. The results obtained in the present study were expressed as IC50 values and are presented in Table 3. The extracts demonstrated an inhibitory effect against α-glucosidase, with IC50 values ranging from 0.301±0.003 mg/mL to 0.591±0.007 mg/mL. EE exhibited the most potent inhibitory activity against α-glucosidase, with an IC50 value of 0.301±0.003 mg/mL. The kinetic analysis of the extracts was carried out using Lineweaver-Burk and Dixon plots and is presented in Table 3 and Figures 1 and 2. The data were plotted as 1/activity (1/V) against 1/substrate concentration (1/[S]) for the Lineweaver-Burk plots. These results revealed that the inhibition type of EE and ME was noncompetitive, while that of AE was competitive. For the Ki values, Dixon plots of 1/enzyme velocity versus inhibitor concentration at varying concentrations of the substrate were used. The Ki values of EE, ME, and AE were 0.48±0.02 mg/mL, 0.46±0.01 mg/mL, and 0.58±0.04 mg/mL, respectively.

In vivo anti-inflammatory activity

The in vivo anti-inflammatory activity of EE was evaluated because of its higher antioxidant activity compared to the other extracts. As presented in Figure 3, the intraplantar injection of formalin solution induced significant edema in the control group, with a peak at 60 min. Pretreatment with 100 and 300 mg/kg doses of EE significantly reduced the edematogenic response at 60 and 120 min compared to the control group (p<0.001). As expected, diclofenac treatment markedly reduced edema thickness at 30, 60, and 120 min compared to the control group (p<0.05; p<0.001). However, there was no statistically significant difference in the anti-edematogenic response between the extract doses, or between the extract doses and the diclofenac group.

DNA protective properties

The DNA protective properties of the extracts were investigated using supercoiled pBR322 plasmid DNA against damage caused by hydroxyl (•OH) radicals, and the results are shown in Figure 4. When supercoiled pBR322 plasmid DNA (form I) was exposed to Fenton's reagent (FeSO4 and H2O2), form I was converted to nicked pBR322 plasmid DNA (form II) by single-strand breaks, as shown in lane 2 in Figure 4. With increasing concentrations of the extracts treated with pBR322 DNA, form II decreased and form I increased in a concentration-dependent manner. At 500 μg/mL, EE almost completely converted form II to form I, and thereby had the highest protective effect among the extracts.

DISCUSSION

Phenolic compounds, acting as hydrogen donors, ROS scavengers, and reducing agents, are responsible for many biological activities such as hepatoprotective, anti-allergic, anticancer, anti-inflammatory, antimutagenic, antioxidant, and antidiabetic effects.35 In the present work, EE had the highest total phenolic content, with 44.42±1.22 mg GAE/g dry weight. According to the literature, Ayaz et al.14 reported that 13 phenolic compounds were identified in V. arctostaphylos fruits from Turkey, including gallic, protocatechuic, p-hydroxybenzoic, m-hydroxybenzoic, gentisic, sinapic, chlorogenic, p-coumaric, and caffeic acids. Saral et al.36 reported that the total phenolic content of ME of V. arctostaphylos fruits from different regions was 20.74±0.24 mg GAE/g weight of sample. Hasanloo et al.37 reported that an acidic ME of the plant contained 9.48 mg GAE/g dry weight. A higher total phenolic content of 42.73 mg GAE/g dry weight was determined in Iran, with the highest phenolic content measured in May. Anthocyanins, which are responsible for colors ranging from red to blue in most vegetables, flowers, and fruits, are water-soluble pigments that are extensively spread throughout the plant kingdom. These compounds have been reported to have anti-inflammatory and protective effects against chronic disorders such as hypertension, DM, and metabolic syndromes.38 Latti et al.15 identified delphinidin, petunidin, and malvidin as the most predominant anthocyanidins in V. arctostaphylos fruits from Turkey, using high performance liquid chromatography (HPLC) with diode array detection and HPLC-electrospray ionization-mass spectrometry. In the present study, EE had the highest total anthocyanin content among the extracts tested, with 8.46±0.49 mg CGE/g dry weight. Similar to our findings, Saral et al.36 reported a total anthocyanin content for ME of V. arctostaphylos of 6.14±0.01 mg CGE/g dry weight. The results obtained in the present study demonstrated that V. arctostaphylos is a rich source of secondary metabolites. Flavonoid compounds, which are secondary metabolites, are crucial constituents due to their active hydroxyl groups.39 In the present study, the total flavonoid contents were found to range from 9.22±0.92 mg QEE/g dry weight to 1.40±0.02 mg QEE/g dry weight. According to the results of Mohaddese et al.'s11 study,
the total flavonoid contents of AE, EE, and ME of V. arctostaphylos fruits were 5.4, 7.2, and 5.5 mg QEE/g dry weight, respectively, while Saral et al.36 reported that the content of ME ranged from 1.93±0.10 to 2.16±0.46 mg QEE/g dry weight. In the present work, we determined the antioxidant activities of EE, ME, and AE of V. arctostaphylos fruits on the basis of DPPH and metal chelating radical scavenging and reducing power. DPPH, a stable nitrogen free radical, is generally used to determine the scavenging activities of compounds, which eliminate this radical by electron donation or hydrogen atom transfer.40 EE showed the highest DPPH scavenging activity, which was positively correlated with total phenolic content. The correlations of total phenolic, total anthocyanin, and total flavonoid contents with DPPH activity were determined using GraphPad Prism 5.0. The Pearson's correlation coefficient (r) and coefficient of determination (R²) for total phenolic, total anthocyanin, and total flavonoid contents with DPPH activity were r=0.996 and R²=0.992, r=0.830 and R²=0.689, and r=0.990 and R²=0.980, respectively. In addition, there was a correlation between total anthocyanin content and the metal chelating effect, with r=0.972 and R²=0.945. Mohaddese et al.11 reported that the SC50 values of DPPH radical scavenging of AE, EE, and ME were 75, 45, and 35 μg/mL, respectively. In addition, Jooyandeh et al.13 prepared an ultrasound-assisted extract and reported that V. arctostaphylos fruits scavenged the radical at a rate of 32.21% at 1 mg/mL. The FRAP assay is an antioxidant method to determine the reducing capacity of samples in vitro. In the present study, the FRAP of the extracts followed the order EE>ME>AE. Güder et al.12 reported that V. arctostaphylos fruits have remarkable reducing activities at different temperatures. The correlations of FRAP with total anthocyanin and total phenolic contents were r=0.950 (R²=0.903) and r=0.933 (R²=0.870), respectively. There are many reports suggesting that the phenolic, anthocyanin, and flavonoid compounds contained in medicinal herbs are responsible for α-glucosidase inhibition.41,42 According to our results, the α-glucosidase inhibitory effect correlates more strongly with total phenolic and total anthocyanin contents than with total flavonoid content. Feshani et al.43 reported that EE of V. arctostaphylos fruits showed antihyperglycemic activity in diabetic rats. The correlations of the α-glucosidase inhibitory effect with total phenolic, total anthocyanin, and total flavonoid contents were r=0.993 and R²=0.986, r=0.986 and R²=0.972, and r=0.815 and R²=0.665, respectively. The results from the Lineweaver-Burk plots are presented in Table 3 and Figure 1. EE and ME inhibited α-glucosidase in a noncompetitive manner, with Ki values of 0.48±0.02 mg/mL and 0.46±0.01 mg/mL, respectively. Noncompetitive inhibitors decrease Vmax values and do not change Km values; they bind to a site on the enzyme or the enzyme-substrate complex other than the active site. In contrast, AE did not change the Vmax value but increased the Km value, and so it was a competitive inhibitor, with a Ki value of 0.58±0.04 mg/mL.
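The kinetic reasoning above can be reproduced with a short calculation. The sketch below uses assumed synthetic Michaelis-Menten rates rather than the study's measurements; it fits Lineweaver-Burk lines to recover Km and Vmax, shows the noncompetitive signature (Vmax falls, Km is unchanged), and illustrates the Pearson r and R² computation used for the content-activity correlations.

```python
import numpy as np
from scipy.stats import linregress, pearsonr

# Synthetic Michaelis-Menten rates (Vmax=1.5, Km=1.5) and a half-rate
# "noncompetitively inhibited" series; [S] mimics varying 4-pNPG (mM)
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
V_free = 1.5 * S / (1.5 + S)
V_inhibited = V_free / 2.0           # Vmax halved, Km unchanged

def lineweaver_burk(S, V):
    """Regress 1/V on 1/[S]; slope = Km/Vmax, intercept = 1/Vmax."""
    fit = linregress(1.0 / S, 1.0 / V)
    vmax = 1.0 / fit.intercept
    return fit.slope * vmax, vmax    # (Km, Vmax)

print("free:      Km=%.2f Vmax=%.2f" % lineweaver_burk(S, V_free))
print("inhibited: Km=%.2f Vmax=%.2f" % lineweaver_burk(S, V_inhibited))

# Pearson correlation, e.g., total phenolic content vs. a DPPH activity proxy
tpc = np.array([44.42, 29.5, 18.1])  # hypothetical mg GAE/g for EE, ME, AE
dpph = np.array([7.1, 4.6, 2.8])     # hypothetical 1/SC50 activity proxy
r, _ = pearsonr(tpc, dpph)
print(f"r={r:.3f}, R^2={r**2:.3f}")
```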
The formalin-induced paw edema test is widely used to screen new potential anti-inflammatory agents.44 In the present work, we used this model to evaluate the anti-inflammatory effect of EE, and we found a significant reduction in formalin-induced edema for both doses of EE at 60 and 120 min when compared with the control group. This result suggests that EE of V. arctostaphylos could have a significant effect on the prevention of the inflammatory response. In addition, it is well known that free radicals play a major role in several inflammatory diseases. In the present study, we have shown that V. arctostaphylos extracts exhibited potent antioxidant activity due to the diversity of their chemical compounds, such as anthocyanins, phenolics, and flavonoids.45,46 The antioxidant activity of EE might be related to its anti-inflammatory activity. It is well known that Fenton's reagent triggers oxidative damage to the bases of DNA via the formation of hydroxyl radicals. Medicinal plants containing antioxidants prevent hydroxyl radical-induced DNA damage through their scavenging activities.47 According to the literature, several phenolic and flavonoid compounds protect DNA against the toxic and mutagenic effects of H2O2.48 In the present work, increasing concentrations of the extracts prevented the cleavage of supercoiled plasmid DNA when exposed to Fenton's reagent. All of the extracts in our study demonstrated a remarkable reduction in the formation of form II and an increase in the formation of form I. EE was remarkably effective in protecting DNA by inhibiting the formation of form II, and these results may be associated with its antioxidant activities.

CONCLUSIONS

This study presented the antioxidant, α-glucosidase inhibitory, anti-inflammatory, and DNA protective properties of V. arctostaphylos fruit extracts from Turkey. The study data demonstrated that EE had the highest total phenolic, anthocyanin, and flavonoid contents and exhibited significant scavenging and reducing activities compared to the other extracts. In addition, there was a correlation between the antioxidant results and the total phenolic, anthocyanin, and flavonoid contents. The α-glucosidase inhibitory studies revealed that EE and ME inhibited the enzyme with IC50 values of 0.301±0.002 mg/mL and 0.477±0.003 mg/mL and were determined to be noncompetitive inhibitors, while AE was a competitive inhibitor. The α-glucosidase inhibitory properties of the extracts followed the order EE>ME>AE. In the anti-inflammatory experiment, EE produced a significant reduction in formalin-induced edema in mice. In addition, when DNA was exposed to Fenton's reagent, all of the extracts protected the DNA from damage, especially EE, owing to its antioxidant capacity. These results suggest that EE of V. arctostaphylos L. might be promising for the treatment or prevention of many diseases associated with oxidative damage and inflammation. Further studies are required to confirm these biological activities and their mechanisms of action.
Management by the efficiency of the local government

The paper proposes a comprehensive approach to the creation of a management system for a modern municipality. All stages of the automated information system (AIS) under the code name "Municipality" are considered. Exact data for several periods should be accumulated in such a system. It will provide an opportunity to estimate the achieved level of all subsystems of the municipality and to formulate a forecast of the efficiency level for the future. The proposed system is based on mathematical methods that affect the growth of the staffing value. The proposed organizational and technical solutions contribute to improving management efficiency not only in a single municipality, but also in the entire municipal management system as a whole.

Introduction

The development of the economy is directly related to consumer activity, and as a result, the most competitive organizations provide high-quality goods or services. Certain rules and techniques have been developed as part of the quality management system (QMS) to maintain quality. The QMS has various goals and objectives. However, they are usually intended to prevent mistakes that could negatively affect the quality of any product or service. Any activity is characterized by the concept of efficiency. Every system has an unlimited number of properties. However, even when each separate part of a system operates with maximum efficiency, the system as a whole does not necessarily function at the maximum level. The efficiency of a system depends not only on the work of each of its elements, but on their interaction. As a result, in the study of complex systems, special attention is paid to performance indicators that can be used for a comparative estimation of design and development versions. The term "performance criterion" was introduced to determine effectiveness, i.e., a condition on the basis of which an efficiency indicator is found. These criteria are classified as vector (the result of functioning is a set of indicators) and scalar (the result of functioning is a single aggregate of heterogeneous requirements). The activities of a municipality can be described using the terminology of efficiency theory. In order to improve the quality of services provided by municipalities, the QMS is being introduced everywhere. Quality and efficiency are interrelated and should be considered together. As a result, the QMS requires the integration of a performance management system (PMS). In the late 1960s, multiple discriminant analysis (MDA) began to be applied. It forms a general indicator of an organization's activity. The main advantage of the approach is the consideration of a number of interrelated indicators typical for identical companies. Over the next decades, researchers developed MDA, and probabilistic models (Logit and Probit) appeared. Often, such methods evaluate the efficiency of converting one input (spent work) into one output (useful work). If we consider the efficiency estimation of a municipality, the situation changes sharply. It is necessary to analyze many inputs (costs of equipment, capital used, number of employees, etc.) and many outputs (services provided by category, income received by the treasury, other types of income, etc.). Most importantly, one cannot consider the efficiency of the municipality without its connection to the surrounding economic, legal, and political environment, i.e., the environment in which it functions.
For a comprehensive and detailed consideration of the performance management problem, a detailed analysis of its elements (components) is necessary. The core of the system is software. Currently, there are various mathematical methods for estimating the efficiency of a system. One of the best known is "data envelopment analysis" (DEA) [1,2]. DEA is based on the application of linear programming. It was developed in 1978 in the USA, and it is applied almost everywhere [3]. To perform strategic analysis, PEST analysis and the SWOT method are applied [4]. PEST analysis works with factors of the "distant" environment of the enterprise. The SWOT method identifies the strengths and weaknesses of the organization, its capabilities, and potential threats.

The "Data Envelopment Analysis" (DEA) method

The "Data Envelopment Analysis" (DEA) method is widely used in Europe and the USA. It is used for estimating the efficiency of functioning in the fields of economics, healthcare, administrative management, education, etc. [5,6]. In our country, this method has not been applied and it is almost unknown. However, the potential necessity and the effect of its application can be significant, for the following reasons. First, entry into the international market requires financial and industrial companies to work with the same efficiency as other leading western organizations or, using the language of this method, to be at the threshold of efficiency. Secondly, the current financial situation leads to the need for significant savings, and this, as an inevitable consequence, requires companies to work with the same (or greater) return (output) at a lower cost (input). The main advantages of applying DEA are as follows:

- there is no need for the user to set the weights of the input and output parameters;
- there is no requirement to formulate and test hypotheses about functional relationships between input and output parameters;
- it can operate with a large number of input and output parameters;
- one can estimate the efficiency of an object by solving a mathematical programming problem.

DEA belongs to the group of boundary methods based on constructing boundaries of efficiency in a multidimensional space of input and output variables, where the level of efficiency depends on the distance between the object and the boundary of efficiency. Accordingly, inefficiency is the degree of remoteness of an object from the boundary of efficiency. Points that do not lie on the boundary of efficiency correspond to objects that function inefficiently. The efficiency frontier is an estimation of the production function for the case when the output is a vector (based on real data). The DEA method has some further advantages [7]:

- it does not apply a priori designations of weighting indicators (for vector inputs and outputs, it is possible to calculate a single aggregate indicator for each of the objects);
- it can take into account external variables and environmental conditions in the part of the system under consideration, in addition to managers' preferences regarding the priority of certain input or output variables;
- it forms a Pareto-optimal set of points that correspond to effective objects, and it does not impose restrictions on the functional form of the relationship between inputs and outputs.
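To illustrate how DEA reduces to linear programming, the following Python sketch solves the input-oriented CCR envelopment problem with scipy. The municipality inputs and outputs are hypothetical, and a production DEA study would add scale and slack analyses on top of this.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.

    X: (m inputs x n units) matrix, Y: (k outputs x n units) matrix.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, j0],
                             Y @ lam >= Y[:, j0],  lam >= 0.
    """
    m, n = X.shape
    k = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # decision vars: [theta, lam]
    A_in = np.hstack([-X[:, [j0]], X])         # X@lam - theta*x0 <= 0
    A_out = np.hstack([np.zeros((k, 1)), -Y])  # -Y@lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun                             # efficiency score in (0, 1]

# Hypothetical municipalities: 2 inputs (staff, budget), 1 output (services)
X = np.array([[20., 30., 40., 25.],
              [ 5., 10.,  8.,  6.]])
Y = np.array([[100., 150., 140., 110.]])
for j in range(X.shape[1]):
    print(f"unit {j}: efficiency = {dea_ccr_input(X, Y, j):.3f}")
```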
Despite the undoubted advantages of the DEA method, there is a drawback: it yields only the relative efficiency of the subjects. To mitigate this drawback, one should apply the knowledge of experts and use an artificial boundary of efficiency as a reference for estimating real objects [8,9]. With an artificial efficiency boundary, efficiency indicator values exceeding unity are possible, because the object being estimated may lie, in the multidimensional input/output space, "outside" the convex hull of the points that correspond to the reference objects. Briefly, the main ideas of the algorithms are as follows. Groups of experts g = 1, ..., G are considered. Each expert g specifies a reference boundary of efficiency consisting of N_g reference objects. In this case, the matrix of inputs X_g has dimension m × N_g, and the matrix of outputs Y_g has dimension k × N_g. Further, these reference objects are grouped into a single set by combining the matrices X_g and Y_g into matrices X and Y, so that the number of columns of the new matrices is N = N_1 + ... + N_G. The following approach is applied in order to form a generalized boundary of efficiency from the individual expert boundaries: one divides the resulting pool of reference objects specified by the experts into "efficiency layers", choosing one layer from the total mass as the generalized boundary of efficiency.
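A minimal sketch of the pooling step described above, assuming each expert's boundary is already encoded as input/output matrices (the dimensions and values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Each expert g supplies reference matrices: inputs X_g (m x N_g), outputs Y_g (k x N_g)
m, k, sizes = 2, 1, [4, 3, 5]                 # N_1, N_2, N_3 for G = 3 experts
X_g = [rng.random((m, n)) for n in sizes]
Y_g = [rng.random((k, n)) for n in sizes]

# Pool the expert boundaries into single matrices X, Y with N = N_1 + ... + N_G columns
X, Y = np.hstack(X_g), np.hstack(Y_g)
assert X.shape == (m, sum(sizes)) and Y.shape == (k, sum(sizes))
```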
The efficiency of the municipality

Methodological support also includes methods describing the operation of objects with a complex hierarchical structure. A municipality is a complex system with intensive external relations that is in constant close interaction with different territories and public institutions that form its environment [7]. It is possible to form a holistic technology from the existing mathematical and algorithmic tools with the help of diverse techniques. This in turn saves the user (a leader, analyst, etc.) from the manual execution of routine operations. It should be noted that the software operates at the micro level, while the methodological support operates at the macro level. It is quite possible to obtain general performance indicators of a complex system (a municipality), taking into account the efficiency of its subsystems (departments, administrations, territorial units), by applying various methods. This in turn reduces the amount of information provided to an official for decision-making. The main technical tool is a decision support system (DSS) developed using the achievements of efficiency theory; it can be applied as an analytical and consulting tool for conducting comprehensive studies of the efficiency of complex systems [9]. An important problem is the choice of software for the implementation of the DSS. The tools used must ensure portability: the software product (after recompilation of the source code) should operate in a variety of operating system environments. Information support is the initial data that will be used to calculate the indicators and criteria for the efficiency evaluation. One of the main roles can be attributed to the automated information system (AIS) under the code name "Municipality". Exact data for several periods should be accumulated in such a system. It will provide an opportunity to estimate the achieved level of all subsystems of the municipality and to formulate a forecast of the efficiency level for the future.

Organizational support is the development of performance indicators and criteria, and decision making in the field of management to carry out measures to estimate efficiency at all structural levels of the municipality. These measures should be regular in nature; they should provide for an appropriate reporting system and response measures when deviations of performance indicators from a given level are detected. The activities for estimating the efficiency of the municipality will solve the following problems:

- estimating the efficiency of the municipality within the city, the region, and the country as a whole;
- estimating the efficiency and activity level of departments, administrations, and territorial units of the municipality, and comparison with other territorial units of local self-government in Russia;
- estimating the rationality of the organizational structure of the municipality;
- estimating the degree of achievement of the main goals of the municipality;
- estimating the combination of need-based, productive, and cost efficiency (this may be relevant in conditions of a budget deficit at all levels);
- estimating the overall social efficiency (performance of specific units and officials).

Conclusion

The proposed system is based on mathematical methods that affect the growth of the staffing value. The proposed organizational and technical solutions contribute to improving management efficiency not only in a single municipality, but also in the entire municipal management system as a whole.
Trends in a State Pharmaceutical Assistance Program for Low-income Older Adults in Wisconsin

ABSTRACT

Introduction: Many older adults face difficulty affording their prescription drugs, despite having coverage available through Medicare Part D. SeniorCare is Wisconsin's pharmaceutical assistance program that provides comprehensive drug coverage for low-income older adults who are not eligible for full Medicaid benefits. Methods: We analyzed SeniorCare enrollment and pharmacy claims data from 2014 to 2018. Results: Total drug expenditures increased by 19.3%, with the proportion of expenditures paid by SeniorCare and members decreasing while the proportion paid by other payers increased. Specialty drugs accounted for a substantial and growing proportion of total expenditures (20.4% in 2018) despite accounting for <0.2% of all claims. Conclusions: Total drug expenditures in SeniorCare have steadily increased over time, primarily due to rising average expenditures per drug fill and increased use of specialty drugs. However, SeniorCare members have been largely protected from these increases and have paid a decreasing proportion of costs over time.

INTRODUCTION

Prescription drugs are an important component in the management of chronic conditions for older adults. Nearly 9 in 10 adults 65 and older report currently taking a prescription medication, with 54% taking 4 or more.1,2 Although the majority of older adults in the U.S. have prescription drug coverage through Medicare Part D, most older adults (76%) think the cost of prescription drugs is unreasonable.1 Nearly 1 in 4 older adults say they have difficulty affording prescription drugs, with a higher likelihood seen among those with low income despite the availability of means-tested support through both Medicaid and Medicare.1,3 Implemented in 2002, SeniorCare is a unique state prescription drug assistance program for low-income older adults in Wisconsin that provides comprehensive coverage for prescription drugs and over-the-counter insulins.4 To be eligible for SeniorCare, an individual must be a Wisconsin resident, be a U.S. citizen or have qualifying immigrant status, be age 65 or older, and not be receiving full Medicaid benefits. SeniorCare is available to all eligible older adults with costs that vary based on income. However, it is available to low-income older adults with annual income ≤200% of the federal poverty level (FPL) through a Section 1115 demonstration waiver, which provides federal matching funds and grants states flexibility to design and implement programs to promote the health and wellness of vulnerable and low-income individuals.5 The program has a simple cost sharing structure: a $30 annual enrollment fee and copayments of $5 for generic drugs and $15 for brand name drugs. Members with income ≤160% FPL are subject only to the standard copayment amount, while members with income between 160% and 200% FPL are subject to an additional annual deductible of $500. Members with income >200% FPL have additional cost sharing requirements based on their income.
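A small sketch may clarify the cost-sharing rules just described. The hypothetical function below encodes the copayment and deductible logic for the waiver income bands; the exact interaction of copayments with the deductible is an assumption for illustration, not a statement of program policy.

```python
def member_cost(drug_price: float, is_generic: bool,
                income_pct_fpl: float, deductible_remaining: float):
    """Hypothetical out-of-pocket cost for one fill under the SeniorCare
    rules above; returns (member_pays, new_deductible_remaining)."""
    copay = 5.0 if is_generic else 15.0
    if income_pct_fpl <= 160:                 # copay only, no deductible
        return min(copay, drug_price), deductible_remaining
    if income_pct_fpl <= 200:                 # $500 deductible, then copay
        toward_deductible = min(drug_price, deductible_remaining)
        remaining = deductible_remaining - toward_deductible
        if toward_deductible >= drug_price:   # fill absorbed by deductible
            return drug_price, remaining
        return toward_deductible + min(copay, drug_price - toward_deductible), remaining
    raise ValueError("members above 200% FPL have income-based cost sharing")

# Member at 180% FPL with $400 of deductible left, filling a $120 brand drug
print(member_cost(120.0, is_generic=False,
                  income_pct_fpl=180, deductible_remaining=400.0))
```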
SeniorCare is distinct from other state and federal programs in several ways. First, although a number of states have had drug assistance programs for older adults, following the implementation of Medicare Part D in 2006 many of these programs were discontinued.6 States that maintained pharmaceutical assistance programs primarily intend them for use as supplements to Medicare Part D coverage.[6][7][8] They provide assistance with Part D premiums or while an individual is in the Part D coverage gap, and require Part D enrollment to use their programs.[6][7][8] In contrast, SeniorCare is a voluntary program that is considered creditable coverage by Medicare Part D, meaning it can be used as an alternative to Part D coverage. SeniorCare may also be used to supplement prescription drug coverage from Medicare Part D, employer-sponsored insurance, or other private insurance plans. Second, federal assistance is available for eligible low-income Medicare Part D enrollees through the low-income subsidy program (LIS, also known as Extra Help),9 which helps pay premiums, deductibles, and co-payments. However, in addition to meeting certain annual income thresholds, LIS eligibility has an asset test that considers assets such as bank accounts, real estate, and retirement funds. In contrast, SeniorCare does not require any asset testing, so individuals with low income who may be ineligible for the Part D LIS are still eligible for support through the SeniorCare program. The landscape of prescription drug coverage has greatly changed during the life course of the SeniorCare program, with implementation of the Medicare Part D prescription drug insurance benefit in 2006 and major changes to the structure of the Part D drug benefit through the Affordable Care Act and related policies.10 In addition, US health care spending is rapidly increasing, driven in part by rapid growth in Medicare spending.11 Rapid growth in Part D prescription drug spending has led to growing concerns both for Medicare beneficiaries and for the Medicare program as a whole.12 However, no previous study has utilized data from the SeniorCare program, and it is unknown how these issues facing the federal Medicare Part D program have impacted the SeniorCare program or enrolled members. Given the uniqueness of the SeniorCare program in supporting prescription drug use among low-income older adults for nearly two decades and its comparatively generous eligibility criteria compared to Part D LIS support, it is important to understand how changes in drug use and spending have impacted the program and its members. This information can be useful to inform policies and programs to support the affordability of prescription drugs for low-income older adults. Therefore, the objective of this study was to evaluate trends in SeniorCare program enrollment, drug utilization, and expenditures from 2014 to 2018.

METHODS

Data source and study sample

We obtained SeniorCare program enrollment and prescription drug claims data for 2014 to 2018 from the Wisconsin Department of Health Services. The enrollment data included dates of enrollment in the SeniorCare program, as well as demographic information such as member age, gender, race, and ethnicity. It also contained the annual income of each individual or married couple and an indicator of waiver eligibility (i.e., having annual income ≤200% FPL). The SeniorCare prescription drug claims data contained drug name, drug ingredient, fill date, drug type (e.g., brand name or generic), days' supply, total copay, total amount paid by SeniorCare, and total amount paid by other payers.
Our study sample was composed of the full population of SeniorCare members with income ≤200% FPL who were enrolled in the program through the Section 1115 waiver (the waiver population) at any point from January 2014 to December 2018. We excluded the non-waiver population (i.e., members with income >200% FPL), as their demographic characteristics were considerably different from the waiver population, and only about 35-45% of non-waiver enrollees had a claim in each year. This is indicative of structural differences that would make it inappropriate to combine the two populations: either there are differing patterns of drug use, or non-waiver members were unlikely to use SeniorCare as their primary source of insurance coverage, resulting in missing or incomplete information on prescription drug use. Therefore, we focused on the waiver population given the higher likelihood of complete information on prescription drug use through the SeniorCare program.

Outcome measures

We described trends in the annual number of SeniorCare enrollees and their demographic characteristics. Using the source of payment information contained in the drug claims data, we also identified the proportion of members having SeniorCare as the primary or sole source of drug insurance coverage and those with additional supplemental drug coverage. We examined trends in drug utilization using the annual number of 30-day drug fills. The drug fills were normalized to 30-day fills using days' supply to account for the variability in the number of days dispensed across fills (e.g., a 90-day supply). We also measured the number and proportion of drug fills for brand name and generic drugs in each year, and those for specialty and non-specialty drugs. Brand name and generic drugs were identified using the brand/generic indicator in the drug claims, and specialty drugs were identified using the state's specialty pharmacy drug classification, which defines specialty drugs as those requiring comprehensive patient care services, clinical management, and product support services.13 We also examined trends in drug expenditures by measuring total annual expenditures, the proportion of annual drug costs paid by each source of payment, and average expenditures per 30-day fill and per member. Total expenditures were defined as the sum of all payments for a drug from any source, including SeniorCare, members, and other third-party payers (such as Medicare Part D, private insurance, or other sources of coverage). SeniorCare costs were defined as the amount paid by the SeniorCare program, excluding any amounts paid by other payers. Member costs included all out-of-pocket costs paid by a member, including copayments and any applicable deductible amount. As with the utilization outcomes, we assessed expenditures by drug type (i.e., brand name or generic drugs, specialty or non-specialty drugs).

Analysis

All outcomes were analyzed using descriptive statistics in each calendar year from 2014 to 2018. The outcomes were analyzed separately for each year to examine annual changes and trends across the entire study period. The mean outcomes in 2014 and 2018 were compared using independent t-tests. Statistical significance was set a priori at an α of <0.05. All analyses were conducted using Stata/SE, version 16.0.
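As an illustration of the normalization and comparison steps, the following pandas sketch converts days' supply into 30-day fills, computes payer shares, and runs an independent t-test on per-fill expenditures. The four claims rows are invented, and the study's actual analyses were run in Stata.

```python
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical claims extract mirroring the fields described above
claims = pd.DataFrame({
    "year":        [2014, 2014, 2018, 2018],
    "days_supply": [30, 90, 30, 60],
    "member_paid": [5.0, 15.0, 5.0, 10.0],
    "plan_paid":   [40.0, 120.0, 55.0, 130.0],   # SeniorCare payments
    "other_paid":  [0.0, 20.0, 30.0, 60.0],      # Part D / private payers
})

claims["fills_30d"] = claims["days_supply"] / 30          # normalize fills
claims["total"] = claims[["member_paid", "plan_paid", "other_paid"]].sum(axis=1)

by_year = claims.groupby("year").agg(
    fills_30d=("fills_30d", "sum"), total=("total", "sum"),
    member=("member_paid", "sum"), other=("other_paid", "sum"))
by_year["per_fill"] = by_year["total"] / by_year["fills_30d"]
by_year["member_share"] = by_year["member"] / by_year["total"]
print(by_year)

# Independent t-test comparing per-fill expenditures in 2014 vs. 2018
a = claims.loc[claims.year == 2014, "total"] / claims.loc[claims.year == 2014, "fills_30d"]
b = claims.loc[claims.year == 2018, "total"] / claims.loc[claims.year == 2018, "fills_30d"]
print(ttest_ind(a, b))
```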
RESULTS

Member demographics and drug use

A detailed breakdown of the characteristics of the study sample is presented in Table 1. The number of members enrolled in the SeniorCare program declined over time by 11.3%, decreasing from 57,827 members in 2014 to 51,276 members in 2018. The average member age was approximately 80 years but shifted over time towards a higher proportion of members age 65-74 years. Nearly three-quarters were female, although this proportion declined slightly over time. The majority of members were non-Hispanic white. The mean annual couple income was approximately $19,000, which is consistent with the eligibility requirements for this group. Over 80% of SeniorCare members had one or more drug claims in each year, a share that declined slightly over time. Similarly, the mean annual number of 30-day drug fills per member decreased from 39.7 to 37.4 fills over this same time period. The proportion of SeniorCare members with additional supplemental drug coverage increased; however, approximately 70% of members had SeniorCare as their only source of drug insurance coverage over the study period, indicating high use of the SeniorCare benefit as their primary or sole source of drug insurance coverage.

SeniorCare drug utilization and expenditures

From 2014 to 2018, the total annual number of 30-day drug fills in the member population decreased by 16.5%, from 2,295,818 fills in 2014 to 1,916,660 fills in 2018 (Table 2). Over this same time period, total annual expenditures increased by 19.3%. The average expenditures per 30-day drug fill increased by 40.5% (P<0.001), and average expenditures per member also increased by 39.4% (P<0.001). Examining program expenditures by source of payment showed that the proportions of total annual expenditures paid by members and by SeniorCare decreased by 4.2 and 2.3 percentage points, respectively, whereas the share of costs paid by other payers increased by 6.6 percentage points. Given the increasing trend in total expenditures over the five years, in dollar terms, annual member costs decreased by 17.6%, SeniorCare costs increased by 15.6%, and the costs to other payers nearly doubled. The number of 30-day drug fills decreased for both brand and generic drugs from 2014 to 2018, although the decrease was considerably larger for brand name drugs (Table 2). Approximately 85.8% of all 30-day fills were for generic drugs in 2014, which increased to 89.4% in 2018. Yet generic drugs accounted for only 22.2% of total expenditures in 2014, which decreased to 18.4% in 2018. Despite decreasing brand drug use over time, total expenditures for these agents increased by 25.1%, which led to a near doubling of average expenditures per 30-day fill, from $295 to $532 (P<0.001). The proportion of total annual expenditures paid by SeniorCare for brand name drugs decreased slightly (-4.1 percentage points); instead, the cost burden for these drugs largely shifted to other payers, increasing by 7.4 percentage points between 2014 and 2018. In contrast, the proportion of expenditures by source of payment for generic drugs remained relatively unchanged over this same time period. Specialty drugs were significantly more expensive than non-specialty drugs; average expenditures per 30-day fill in 2018 were $7,006 for specialty drugs and $66 for non-specialty drugs (P<0.001, Table 2). Although specialty drugs accounted for <0.2% of all SeniorCare claims, their use increased by 74% between 2014 and 2018. Moreover, they accounted for a substantial and growing proportion of costs; the proportion of total expenditures for specialty drugs increased from 9.2% in 2014 to 20.4% in 2018.
Over this time period, total expenditures for specialty drugs increased by 164.7%, which far exceeded the increase in total expenditures (4.5%) for non-specialty drugs. The proportion of total expenditures for specialty drugs paid by members was very low, at approximately 0.5% in each year. Similar to the trends seen for brand name drugs, the source of payments for specialty drugs slowly shifted from the SeniorCare program to other payers, with the proportion paid by SeniorCare decreasing by 5 percentage points, from 87.1% in 2014 to 82.1% in 2018.

DISCUSSION

We evaluated patterns of drug utilization and expenditures in the Wisconsin SeniorCare prescription drug assistance program for low-income older adults by analyzing enrollment and claims data from 2014 to 2018. Despite decreases in the number of members and the total number of 30-day fills per member, total drug expenditures increased over time, particularly for brand name and specialty drugs. These trends are similar to those seen in Medicare Part D, where costs for single-source brand name drugs and biologics are increasing faster than the cost savings from generic use can offset.14 The generic utilization rate in the SeniorCare population was nearly 90%, which is similar to that seen among Medicare Part D enrollees.14 Contrary to trends seen in Medicare Part D, SeniorCare member costs decreased during the study period, in part due to decreased drug utilization and switching to less expensive generic drugs. However, because member copayments were flat and did not change during this time period, the SeniorCare program and other payers have taken on a greater share of the increasing drug expenditures. As with other Medicaid programs, SeniorCare is the "payer of last resort", such that all other insurers must pay for prescription drug costs incurred by a beneficiary before the SeniorCare program will make any payments; thus, pharmacy providers are required to bill Medicare Part D and any other payers (e.g., private insurance coverage) prior to SeniorCare.15 The costs paid by other payers nearly doubled over the study period, indicating an increasing use of SeniorCare as supplemental coverage for other sources of drug coverage such as Medicare Part D. Similar to Medicare Part D, substantial growth was seen in the use of and expenditures for specialty drugs. In Medicare, specialty drugs are defined solely based on drug costs, and when the same criteria were used in SeniorCare, the use of specialty drugs was consistent with that seen in the Medicare Part D program, at approximately 1% of all drug claims.14 Specialty drugs were on average far more expensive than non-specialty drugs, and their costs per member increased at a steeper rate. In addition, the flat copayment structure of SeniorCare contributed to the rapid growth in the cost burden of specialty drugs on the SeniorCare program and other payers. Although the SeniorCare benefit structure protects its members from the rising costs of specialty drugs, many private and public payers, including Medicare Part D, have adopted approaches such as the inclusion of an additional specialty tier with a higher copayment or coinsurance amount to control the use of specialty drugs.
16 Further evaluation is needed to assess the appropriateness of specialty drug use in the SeniorCare program and the need for additional cost-containment strategies when more cost-effective options may be available (such as generics, non-specialty brand name drugs, or biosimilars). Previous research has identified a positive association between prescription drug insurance coverage and the use of other health care services, which has had a positive impact on patient health outcomes.17 Prescription drug use has also been shown to offset medical costs, such that a 1% increase in the number of prescriptions filled by beneficiaries would cause Medicare's spending on medical services to fall by 0.2%.18 However, our data did not contain information on the utilization of and expenditures for other health care services, given that the SeniorCare population is composed of adults 65 and older who have Medicare for their health insurance coverage. Future research will combine SeniorCare data with Medicare claims data for Parts A, B, and D to provide a more comprehensive picture of prescription drug and health care utilization and spending among SeniorCare members. In addition, this will allow direct comparisons between SeniorCare members and Part D beneficiaries in terms of enrollee characteristics, prescription drug use, medical services use, and overall health care expenditures.

Limitations

The following limitations of this study should be noted. First, this study only used Wisconsin SeniorCare enrollment and claims data, and was not able to capture drug use and spending through other drug insurance and subsidy programs in which SeniorCare members might have been enrolled. Second, information on other payers was limited to a payment amount and did not contain any information on the identity of the other payer. Future research will link SeniorCare and Medicare data to examine the impact of the SeniorCare program on the Medicare program. Finally, SeniorCare is a unique program in one state, and the results of this study may not be generalizable to other populations.

CONCLUSION

The Wisconsin SeniorCare program has served as an important source of drug coverage for low-income older adults in Wisconsin. Despite growing program expenditures over time, SeniorCare members have been largely protected from these increases and have paid a decreasing proportion of costs over time. However, the growing share of drug costs paid by other payers suggests an increasing use of SeniorCare as a supplementary drug benefit to other drug coverage such as Medicare Part D, and expenditures have increased over time primarily due to rising drug costs for expensive brand name and specialty drugs. State-level policies and programs such as SeniorCare may be increasingly important to support the affordability of prescription drugs for low-income older adults.
Optimization of IPv6 Protocol Independent Multicast-Sparse Mode Multicast Routing Protocol based on Greedy Rendezvous Point Selection Algorithm

Forming the multicast tree with the best root is considered a center selection problem (typically classified as NP-complete). The center, alternatively called the Rendezvous Point (RP), has a direct impact on the performance of the multicast routing protocol. This research article introduces a new compound solution for multicast RP selection, called the Greedy-based RP Selection Algorithm (GRPSA), to select the best RP for the PIM-SM multicast routing protocol in an IPv6 multicast domain based on fitness (cost) criteria supported by the Dijkstra algorithm. The work passes through two phases. First, a MATLAB phase is used for the GRPSA implementation, assisted by fitness calculation, to select the best RP, called the Native-RP. The second phase investigates the performance of GRPSA using QoS metrics compared to other candidate RPs. The approach is validated using the GNS3 emulator for the core IPv6 multicast network and realized using UDP streaming data sourced from the Jperf traffic generator via virtual machines at the network edges. Multicast technology implements very high-efficiency point-to-multipoint data transmission over IP networks (IPv4 and IPv6). The results show that the GRPSA-selected RP performs better than other possible RPs by 25.2%, 25.3%, 46.2%, and 62.9% on average in terms of data received, bandwidth, jitter, and loss, respectively.

INTRODUCTION

The rapid growth of Internet communications continues to create new services and network applications. Meanwhile, the massive growth in the number of concurrent users who want to simultaneously access shared data in corporate intranets at competitive cost drives the global Internet to provide more shared services. In addition, many real-time applications have appeared, such as video conferencing, audio, collaborative environments, and IPTV (Lloret et al., 2011). Most multicast applications involve a source sending messages to a selected group of receivers, but broadcast and unicast network communication are not optimal for this kind of application. Thus a technology called IP multicast appeared (Bartczak and Zwierzykowski, 2012; Joseph and Mulugu, 2011). Multicast utilizes the network infrastructure efficiently by requiring the server or source to send out a stream of packets only once, to the multicast group's address; the nodes in the network take care of replicating the packet to reach multiple receivers only where necessary (Taqiyuddi et al., 2008). Moreover, multicast scales to a larger receiver population by not requiring prior knowledge of who the receivers are or how many there are. In addition, multicasting preserves bandwidth on the network and eliminates traffic redundancy. IP multicast is available for both versions of the Internet Protocol (IPv4 multicast and IPv6 multicast), but the low address space of IPv4 cannot provide the necessary support for multicast communication (Bartczak and Zwierzykowski, 2012; Joseph and Mulugu, 2011). It may happen that multicast will be the main driving force behind the widespread use of the IPv6 protocol (Bilicki, 2006). Multicasting also provides enhanced efficiency by controlling the traffic on your network and reducing the load on network devices. The clients on your network are able to decide whether to listen to a multicast address, so packets are only sent to where they are required. In addition, multicasting is scalable across different sized networks but is particularly suited to WAN environments.
It enables users at different locations to access streaming data files, such as a video, film or live presentation, without taking up excessive bandwidth or broadcasting the data to all users on the network. Multicast communication uses a multicast distribution tree for data routing, typically defined as either a source-based or a shared tree. A source-based tree creates a separate multicast routing tree for each source, while a shared multicast tree creates one tree for the whole group, shared among all sources. A shared tree has an advantage over a source tree because only one routing table is needed for the group. Shared multicast trees require the selection of a central router, called the "Core Point" in the case of the CBT multicast protocol (Ballardie, 1997) and the "Rendezvous Point" (RP) in the case of PIM-SM (Fenner et al., 2006).

The current paper focuses on the shared-tree type using PIM-SM, in which the right selection of the RP router is very important and is considered an NP-complete problem (Wang et al., 2010; Zappala et al., 2002), which is advisedly resolved with a heuristic algorithm. An optimized Greedy-based RP Selection Algorithm (GRPSA) is proposed and implemented to achieve the research contribution. It presents an adaptive approach to evaluating the defects and features of the multicast tree by considering both cost and QoS factors, realizing RP selection with a local search algorithm.

Bartczak and Zwierzykowski (2009) described a comparison between multicast routing protocols of different approaches, focusing on similarities and differences between the PIM-SM protocol, which uses a shared tree, and the PIM-DM protocol, which builds source-based trees. The research covered IPv4 multicast only. Wang et al. (2010) suggested a tabu search algorithm for PIM-SM multicast routing to select the multicast RP, because PIM-SM uses a shared tree and the main problem is how to determine the position of the RP. The algorithm selects the multicast RP by considering both cost and delay. The outcome of Wang's proposed algorithm indicates good performance in multicast cost and end-to-end (ETE) delay, with good scalability and practical feasibility. However, the paper does not consider RP reselection after the dynamic join and leave of group members (Wang et al., 2010).

Baddi and El Kettani (2012) introduced D2V-VNS-RPS, a delay and delay-variation constrained algorithm based on the Variable Neighborhood Search (VNS) algorithm for the RP selection problem in the PIM-SM protocol. This algorithm selects the RP router by considering tree cost, delay and delay variation. The main motivation behind the use of the VNS search algorithm was to solve the core selection problem by systematically exploring several different neighborhood structures. Simulation results show that D2V-VNS-RPS achieved better average delay than other tested algorithms such as TRPS, DDVCA and random selection, and the lowest cost among the tested algorithms (Baddi and El Kettani, 2012). Still, the experiments require further validation using emulators beyond simulators for deeper QoS investigation, such as throughput and available bandwidth.
Baddi and El Kettani (2013) presented 2DV-GRASP-RP, a delay and delay-variation algorithm based on a parallel GRASP procedure (Greedy Randomized Adaptive Search Procedure) using the PIM-SM multicast routing protocol to select the right RP by considering cost, delay and delay-variation functions. As a result, the algorithm shows good performance in terms of multicast cost, end-to-end delay and other aspects compared to three other algorithms: AKC, DDVCA and the Tabu RP Selection algorithm (TRPS) (Baddi and El Kettani, 2013). It focused on IPv4 multicast only.

Compared to the related works, the current paper further investigates the effect of the right RP selection on the performance of an IPv6 multicast domain using QoS metrics such as throughput, available bandwidth, jitter and loss. Besides, a new algorithm is tested and a real traffic generator is deployed for validation.

MATERIALS AND METHODS

Construction of the IP multicast tree and identification of the right RP selection criteria can be considered the two most significant traffic-engineering factors in PIM-SM multicast performance. To achieve our optimization target, the following steps describe the proposed method.

Multicast PIM-SM problem and motivation: The essential problem in building a multicast routing tree is how to find a low-cost tree covering all group members plus the path from the source. This problem has been attributed to the Steiner tree problem (Mehlhorn, 1988) in mathematics and is considered NP-complete (Wang et al., 2010; Zappala et al., 2002). PIM-SM divides multicast tree construction into two sub-problems: an RP selection problem and a routing selection problem. RP selection using the PIM-SM protocol is classified into two types: static and dynamic. When static selection is active, the IP address of the RP must be defined on all routers. Unlike static selection, dynamic selection relies on several mechanisms, the most important being the bootstrap router (BSR) (Bhaskar et al., 2008). It works by sending the relevant information, comprising the priority and IP address of each candidate-RP, to all routers of the network. This information is obtained from candidate-RPs that are willing to be an RP. All routers use a hash function to select one RP address based on the IP address, priority and hash-mask-length prepared by the BSR. However, these steps do not guarantee the selection of the best RP position. In addition, the static and dynamic mechanisms for RP selection were designed without regard for cost (the distance to multicast group members). These limitations motivate our further research contribution.

Basic greedy local search algorithm: A Local Search (LS) algorithm is an iterative search procedure that begins from an initial feasible solution and improves it progressively through a series of local modifications (or moves). The search transitions to a "neighbor" that is better than the current candidate solution according to an objective function, and halts when it reaches a local optimum with respect to the transformations it considers. The significant restriction of the method is that, unless one is quite lucky, this local optimum is often a mediocre solution. In LS, the quality of the solution obtained, as well as the computing time, is commonly highly dependent upon the "richness" of the set of transformations (moves) considered at each iteration of the heuristic (Gendreau and Potvin, 2010). The basic LS algorithm (Eiben and Smith, 2015) is described in Fig. 1 (pseudo code for basic greedy local search).
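The basic greedy local search of Fig. 1 can be summarized in a short sketch. The following Python snippet is illustrative rather than the paper's implementation; the neighbors and cost callables are placeholders for the move set and objective function discussed above.

```python
def greedy_local_search(initial, neighbors, cost, max_iters=1000):
    """Generic greedy local search: repeatedly move to the best
    neighbor until no neighbor improves the current solution."""
    current = initial
    for _ in range(max_iters):
        candidates = neighbors(current)
        if not candidates:
            break
        best = min(candidates, key=cost)   # greedy choice among moves
        if cost(best) >= cost(current):    # local optimum reached
            break
        current = best
    return current
```

As noted above, the quality of the returned local optimum depends strongly on how rich the neighbors move set is.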
The proposed algorithm GRPSA for RP selection: The main goal of the proposed algorithm GRPSA is to solve and optimize the RP selection problem in an IP multicast domain. The design, implementation and evaluation of GRPSA are achieved by dividing the research work into two phases: a MATLAB phase for computational selection of the RP with the best tree route, and a performance evaluation phase using GNS3 and Jperf for testing and validation in terms of QoS metrics such as jitter, loss and data received (total throughput), with consideration of available bandwidth.

The rest of this section discusses the MATLAB implementation phase of GRPSA. Many transitions are followed to reach the best RP selection, guided by a greedy approach based on the fitness function. The formulation of the fitness function depends on assigning two weights: one weight signifies the impact of the distance from the source node to the selected RP, while the second weight determines the importance of the distances between the RP and the destination nodes. The designed fitness function combines these two weights to find the fitness value, Eq. (1). If the calculated fitness of the child-RP is smaller than the corresponding value of the parent-RP, the child-RP is selected as the new parent-RP; otherwise the parent-RP is kept unchanged:

Fitness(RP) = w1 × dist(source, RP) + w2 × Σ dist(RP, d), summed over all destination nodes d   (1)

where:
w1: the weight associated with the impact of the distance between the source node and the RP
w2: the weight associated with the impact of the distance between the RP and a destination node
dist(n1, n2): shortest-path distance between nodes n1 and n2.

The following outlines the activities of the GRPSA algorithm, which are detailed next (a code sketch is given below):

1. Set the multicast topology (including source, receivers and links).
2. Find the adjacency matrix of the network.
3. Compute the shortest path (using the Dijkstra algorithm) between every pair of nodes in the network.
4. Randomly select an initial RP node (parent-RP) from all network nodes for the first round of the algorithm. The selected RP node should not belong to the source or destination nodes.
5. Calculate the fitness value for the selected RP using Eq. (1).
6. RP mutation: generate a child-RP from the parent-RP. The mutation operator depends on the proposed fitness function.
7. Calculate the fitness of the child-RP.
8. Compare the parent-RP with the child-RP and select the better one according to the fitness values.
9. Iteration = iteration + 1.
10. If (iteration < max iteration) go to step 6, else end.

In summary of the MATLAB phase, GRPSA produces the best shared-tree root (native-RP), which optimizes the routes along the paths from the source to the destinations via the selected native-RP, providing fairness as well as maximizing the advantages of multicast among the receivers. The design of GRPSA assumes that the expected right RP of the multicast tree is found close to the middle distance (cost) between the source and the receivers; thus, each distance weight in the fitness function is set to 0.5.
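As a rough illustration of the MATLAB phase, the sketch below re-expresses Eq. (1) and the greedy parent/child-RP loop in Python, using the networkx library for Dijkstra shortest paths. It is a minimal sketch under stated assumptions (random mutation of the candidate RP, summed destination distances), not the authors' code.

```python
import random
import networkx as nx

def fitness(G, rp, source, dests, w1=0.5, w2=0.5):
    """Eq. (1): weighted sum of the source->RP distance and the
    RP->destination distances, all via Dijkstra shortest paths."""
    d_src = nx.dijkstra_path_length(G, source, rp)
    d_dst = sum(nx.dijkstra_path_length(G, rp, d) for d in dests)
    return w1 * d_src + w2 * d_dst

def grpsa(G, source, dests, max_iters=100, seed=0):
    """Greedy RP selection: start from a random candidate RP and
    keep any mutated child-RP whose fitness is lower (steps 4-10)."""
    rng = random.Random(seed)
    candidates = [n for n in G if n != source and n not in dests]
    parent = rng.choice(candidates)            # initial parent-RP
    for _ in range(max_iters):
        child = rng.choice(candidates)         # RP mutation (step 6)
        if fitness(G, child, source, dests) < fitness(G, parent, source, dests):
            parent = child                     # keep the better RP
    return parent                              # native-RP
```

In the worked trace below, this loop is what moves the parent-RP from node 19 to node 6, then to node 16, and finally to node 1.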
Figures 2 to 7 depict the running process of the GRPSA algorithm. It starts by generating a random network topology with 20 nodes (Fig. 2). The symbolic representation of the graphs is as follows: nodes denote routers, whereas the directed edges stand for directed links. The initial weights between source and RP and between RP and destination are set by two parameters, wSrc2RP = 0.5 and wRP2Dest = 0.5, respectively. Node 11 represents the source node (or multicast server), marked with a solid square; nodes 2, 4, 13, 14, 18 and 20 denote destination nodes, marked with solid triangles; candidate RP nodes are marked with solid black circles; and the current child-RP carries its own marker. In Fig. 3, the trace of the GRPSA implementation shows that node 19 is selected as parent-RP and then node 6 as child-RP, initially and randomly (the first two rounds). Their calculated fitness values are 10 and 8.5, respectively, using Eq. (1). Preferring the minimum fitness value, node 6 replaces the current parent-RP (node 19) and the next search starts, which leads to promoting node 3 as a new child-RP, as illustrated in Fig. 4. However, the calculated fitness of node 3 is 12, which is greater than node 6's fitness (8.5), so node 3 is discarded; as a result, node 6 stays as parent-RP (Fig. 5). Next, GRPSA searches for the next child-RP node; node 16 is selected, with a calculated fitness value of 8. By fitness comparison, node 16 takes the parent-RP position temporarily (8 is less than 8.5), whereas node 9 is promoted as the new child-RP, as shown in Fig. 6. The GRPSA greedy algorithm then continues discovering all possible parent-RPs of the topology. Finally, node 1 is selected by GRPSA as the native-RP, since it has the minimum fitness value (7), as shown in Fig. 7. Table 1 traces the GRPSA rounds that select the best RP based on fitness using Eq. (1) for the multicast topology of Fig. 2 to 7.

GRPSA PERFORMANCE EVALUATION USING GNS3 AND JPERF (QOS VALIDATION)

This section introduces the performance evaluation phase using GNS3 and Jperf. The environment for the more complex tested network topology comprises 20 virtual Cisco 7200 routers interconnected via serial links, as shown in Fig. 8 (IPv6 multicast network topology with one source and six receivers, using UDP streaming over GNS3 and Jperf). Six virtual computers are realized as VMware virtual machines with 1 GB RAM and 10 GB HDD per virtual machine, running Windows 7. The end-to-end connection is realized using the server as a source for UDP media streaming, received by the clients over the IPv6 multicast network in GNS3. Multicast forwarding relies on the reverse path forwarding (RPF) check, which identifies the closest interface of the multicast router to the source; thus, the OSPF unicast protocol is used in the tested topology. The GNS3 setting and configuration steps for the tested IPv6-multicast network topology include enabling IPv6 unicast routing and enabling IPv6 multicast routing; the configuration commands (fragment) for IPv6 addressing, OSPF and clock rate on the Router 1 interfaces (serial and Ethernet) follow standard Cisco IOS syntax.

CONCLUSION

This study introduced a new deployment of an IPv6 greedy algorithm called GRPSA, based on a fitness criterion, to solve the RP-selection problem for the IPv6 multicast domain. This minimization problem is considered NP-complete and requires further research investigation. The MATLAB implementation test of the proposed algorithm (GRPSA) depicts the behavior and calculations for finding the best, or native-RP, choice among other possible RPs. This choice was validated using GNS3 supported with Jperf, based on QoS metrics. It is found that the right selection of the RP router is very significant due to its direct impact on the tree structure rooted at the RP. Furthermore, it affects the performance of the multicast routing protocol. Consequently, the received quality and quantity of multicast streaming traffic show variations in data received (total throughput), bandwidth (average), jitter and datagram loss, with the distinguished result achieved by the GRPSA-selected RP. Finally, to save on cost calculations, future work could combine the GRPSA objective of selecting the best RP for IPv6 multicast with an existing routing protocol such as OSPF.
Human forebrain organoids-based multi-omics analyses reveal PCCB's regulation on GABAergic system contributing to schizophrenia

Identifying genes whose expression is associated with schizophrenia (SCZ) risk by transcriptome-wide association studies (TWAS) facilitates downstream experimental studies. Here, we integrated multiple published datasets of TWAS (including FUSION, PrediXcan, summary-data-based Mendelian randomization (SMR), and the joint-tissue imputation approach with Mendelian randomization (MR-JTI)), gene coexpression, and differential gene expression analysis to prioritize SCZ candidate genes for functional study. Convergent evidence prioritized Propionyl-CoA Carboxylase Subunit Beta (PCCB), a nuclear-encoded mitochondrial gene, as an SCZ risk gene. However, PCCB's contribution to SCZ risk has not been investigated before. Using a dual luciferase reporter assay, we identified that SCZ-associated SNP rs35874192, an eQTL SNP for PCCB, showed differential allelic effects on transcriptional activities. PCCB knockdown in human forebrain organoids (hFOs) followed by RNA-seq revealed dysregulation of genes enriched in multiple neuronal functions including the gamma-aminobutyric acid (GABA)-ergic synapse, as well as genes dysregulated in postmortem brains of SCZ patients or in cerebral organoids derived from SCZ patients. Metabolomic and mitochondrial function analyses confirmed that the decreased GABA levels resulted from a reduced tricarboxylic acid cycle in PCCB knockdown hFOs. Multielectrode array recording analysis showed that PCCB knockdown in hFOs resulted in SCZ-related phenotypes including hyper-neuroactivity and decreased synchronization of the neural network. In summary, this study utilized hFOs-based multi-omics data and revealed that PCCB downregulation may contribute to SCZ risk through regulating the GABAergic system, highlighting the mitochondrial function in SCZ.

Introduction

Schizophrenia (SCZ) is a complex polygenic psychiatric disorder with risk contributed by environmental and genetic factors 1. Genetic studies such as genome-wide association studies (GWAS) have identified hundreds of common single nucleotide polymorphisms (SNPs) associated with SCZ 2,3. Most of the SCZ-associated SNPs are non-coding variants located in regulatory DNA elements 4-6, suggesting that gene expression mediates the connection between genetic variants and SCZ phenotypes 7. Identifying genes whose expression is associated with SCZ phenotypes facilitates discovering SCZ risk genes for downstream functional studies. By integrating SCZ GWAS and brain expression quantitative trait loci (eQTL) data, several approaches collectively described as transcriptome-wide association studies (TWAS) have been used to identify SCZ risk genes. These TWAS approaches, including FUSION 8-10, PrediXcan 11, summary-data-based Mendelian randomization (SMR) 2,10,12,13, and the joint-tissue imputation approach with Mendelian randomization (MR-JTI) 14, aim to identify the association between predicted gene expression and SCZ risk. Though MR-JTI can improve gene expression prediction performance in TWAS and provide a causal inference framework 15, experimental validation is still needed. Here we integrated results from MR-JTI 14 and other published SCZ TWAS datasets 2,8-13 to prioritize SCZ risk genes for functional study.
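The evidence-integration step can be pictured with a small sketch: count, for each gene, how many independent lines of evidence support it, and rank genes by that count. The gene lists below are hypothetical placeholders, not the actual TWAS outputs.

```python
from collections import Counter

# Hypothetical gene sets from each line of evidence; in the study these
# come from FUSION, PrediXcan, SMR, MR-JTI, coexpression modules, and
# differential expression in postmortem brains.
evidence = {
    "FUSION":       {"PCCB", "GATAD2A", "GNL3", "FURIN"},
    "PrediXcan":    {"PCCB", "GNL3", "SNX19"},
    "SMR":          {"PCCB", "GATAD2A"},
    "MR-JTI":       {"PCCB", "GATAD2A", "GNL3"},
    "Coexpression": {"PCCB", "GNL3"},
    "DEG":          {"PCCB", "GATAD2A"},
}

# Count supporting lines of evidence per gene and rank.
support = Counter(g for genes in evidence.values() for g in genes)
for gene, n in support.most_common(5):
    print(f"{gene}: supported by {n}/{len(evidence)} lines of evidence")
```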
Through these procedures, we identified Propionyl-CoA Carboxylase Subunit Beta (PCCB), a protein-coding gene that plays important roles in mitochondrial metabolism 16,17, as the SCZ risk gene with the most supporting evidence in our analysis. However, PCCB's contribution to SCZ risk has not been investigated before. Using human forebrain organoids (hFOs), three-dimensional cell cultures that capture key aspects of the human brain 18, we found that PCCB knockdown in hFOs resulted in SCZ pathology-related cellular phenotypes. We also identified that the SCZ-associated SNP rs35874192 may regulate PCCB expression, supporting that SCZ-associated common genetic variants may regulate PCCB expression, which mediates the genetic effects on SCZ risk.

Results

PCCB is prioritized as a promising SCZ risk gene

To obtain reliable SCZ risk genes for downstream functional study, we integrated multiple published TWAS datasets (Table S1) to prioritize genes with sufficient supporting evidence. We also checked whether the prioritized genes are located in SCZ risk-associated gene coexpression modules or dysregulated in postmortem SCZ brains. These analyses prioritized PCCB, GATAD2A, and GNL3 as the top three SCZ risk genes (Table 1). Notably, PCCB was also identified as an SCZ risk gene in the gene-based MAGMA analysis 19. Moreover, PCCB is located in the gene coexpression module (M2) that is downregulated in SCZ based on the PsychENCODE data 10. PCCB was also found to be nominally downregulated in postmortem SCZ brains (P = 0.01, FDR = 0.14) in the CommonMind data 20. These lines of evidence suggested that PCCB expression mediates the genetic effects on SCZ risk. Therefore, we focused on studying how PCCB contributes to SCZ risk in this study.

Since PCCB expression is genetically associated with SCZ, we investigated the functional impacts of SCZ-associated SNPs on PCCB expression. Based on the TWAS results used in this study, we retrieved the top SNPs (rs7432375, rs7427564, rs527888, rs66691851) and their linkage disequilibrium (LD) SNPs that were associated with PCCB expression. To narrow down to the putatively causal variants, we focused on those eQTL SNPs (eSNPs) that are likely to affect PCCB expression in the brain. Since opening chromatin facilitates gene expression activation, we used brain ATAC-seq data from the PsychENCODE consortium 21 to identify eSNPs located in active transcription regions. By integrating PsychENCODE ATAC-seq data and SNP annotation information from the Roadmap Epigenetics Consortium 22, we prioritized three eSNPs (rs35874192, rs900818, rs7349597) (Table 2) that are located in genomic regions strongly suggested as enhancers or promoters in human brain tissues or neural cell cultures. We then performed a dual luciferase reporter assay (DLRA) in both hNPC and SH-SY5Y cell lines to validate the regulatory effects of the three eSNP-containing DNA elements. For each eSNP, a 50 base pair (bp) eSNP-containing DNA fragment was synthesized and cloned upstream of the PCCB promoter in the PGL3-basic luciferase reporter vector (Fig. S1). In the DLRA, the eSNP rs35874192 (G/C) showed allelic effects on transcriptional activities in both hNPC and SH-SY5Y cell lines, with the SCZ-associated allele C corresponding to lower gene expression (Fig. 1A). Notably, the directions of the allelic effects of rs35874192 were consistent with eQTL patterns detected in brain tissues from the GTEx 23 and BrainSeq consortium 24 (Fig. 1A).
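A minimal sketch of the DLRA comparison is shown below: firefly luminescence is normalized to the Renilla internal control, and the two alleles are compared with an unpaired two-tailed t-test. The numeric readings are illustrative placeholders, not measured values.

```python
import numpy as np
from scipy import stats

# Illustrative firefly and Renilla luminescence readings (three
# biological replicates per allele); real values come from the reader.
allele_G = np.array([1.92, 2.05, 1.98]) / np.array([1.01, 0.99, 1.00])
allele_C = np.array([1.41, 1.35, 1.50]) / np.array([1.02, 0.98, 1.00])

# The ratios above are firefly activity normalized to the Renilla
# internal control; compare alleles with an unpaired two-tailed t-test.
t, p = stats.ttest_ind(allele_G, allele_C)
print(f"t = {t:.2f}, p = {p:.4f}")
```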
In contrast, the eSNPs rs900818 and rs7349597 had no differential allelic effects on transcriptional activities (Fig. 1B, C).

PCCB knockdown in hFOs affects expression of genes enriched in GABAergic synapse

According to the TWAS and DLRA results, lower PCCB expression is associated with increased SCZ risk. We established PCCB knockdown and control human induced pluripotent stem cells (hiPSCs, U2F) using CRISPR interference (CRISPRi). In CRISPRi, one guide RNA (gRNA) sequence targeting PCCB (PCCB-G1) and one non-targeting control gRNA were designed. The established PCCB knockdown and control hiPSCs were then used to generate hFOs (Fig. 2A, B) to investigate the functional impacts of PCCB knockdown. On day 60 of organoid culture (Fig. 2A, C, D), PCCB knockdown and control hFOs were used for RNA-sequencing (RNA-seq). Differential gene expression analysis identified 2326 differentially expressed genes (DEGs) [false discovery rate (FDR) < 0.05] between the PCCB knockdown and control hFOs (Table S2). Among the 2326 DEGs, 1099 genes were upregulated and 1227 genes were downregulated in PCCB knockdown hFOs (Fig. 2E, F).

PCCB-induced DEGs in hFOs are enriched with SCZ-related genes

To explore PCCB's connection to SCZ, we evaluated the enrichment of 1079 PCCB-induced DEGs in hFOs against SCZ-related gene sets. The first SCZ gene set comprised 4096 differentially expressed protein-coding genes between postmortem brains of 559 SCZ patients and 936 controls from the PsychENCODE consortium 10. The second SCZ gene set comprised 2809 DEGs between cerebral organoids (6 months) derived from eight SCZ patients and eight controls from the Kathuria et al. study 26. We found that the 1079 PCCB-induced DEGs were significantly overlapped with genes dysregulated in PsychENCODE SCZ brains (overlapped genes = 282, P = 5.84E-04) and SCZ patient-derived cerebral organoids (overlapped genes = 255, P = 1.92E-14) (Fig. S3A). We also found that the 1079 PCCB-induced DEGs were significantly (P adjust = 7.09E-3) overlapped with genes reported in SCZ GWAS from the FUMA analysis 27 (Fig. S3B). These results suggested that PCCB may contribute to SCZ risk by affecting the expression of genes related to SCZ (Table S2).
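The significance of such gene-set overlaps is typically assessed with a hypergeometric (one-sided Fisher) test; the sketch below reproduces the style of calculation under an assumed background of roughly 20,000 protein-coding genes, which is not stated in the text.

```python
from scipy.stats import hypergeom

# Overlap between the 1079 PCCB-induced DEGs and the 4096 PsychENCODE
# SCZ DEGs; the ~20,000-gene background is an assumption for this
# sketch, so the exact P value will differ from the paper's.
M, n, N, k = 20000, 4096, 1079, 282   # background, set 1, set 2, overlap
p = hypergeom.sf(k - 1, M, n, N)      # P(overlap >= k)
print(f"expected overlap = {n * N / M:.0f}, observed = {k}, P = {p:.2e}")
```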
PCCB knockdown in hFOs decreases GABA level by reducing tricarboxylic acid cycle and leads to mitochondrial dysfunction

PCCB encodes the β subunit of propionyl-CoA carboxylase, a mitochondrial enzyme involved in the catabolism of propionyl-CoA 28. PCCB mutation has been reported to impair mitochondrial energy metabolism by disrupting the tricarboxylic acid (TCA) cycle 16. We would therefore expect mitochondrial dysfunction caused by PCCB knockdown. Indeed, we found that several mitochondrial genes that function in cellular oxidative phosphorylation, including MT-ND2, MT-ND5, and MT-CYB, were downregulated in both PCCB-G1 and PCCB-G2 hFOs (Table S2), which was further validated by RT-qPCR analysis (Fig. 4A). Adenosine triphosphate (ATP) and reactive oxygen species (ROS) detection assays showed that PCCB knockdown reduced ATP generation and increased ROS levels in hFOs (Fig. 4B, C), indicating mitochondrial dysfunction caused by PCCB knockdown. Since GABA metabolism involves a route from α-ketoglutarate (α-KG) generated by the TCA cycle to succinate via glutamate, GABA, and succinic semialdehyde 29,30, we examined whether PCCB knockdown decreased the GABA level by inhibiting the TCA cycle. We performed an enzyme linked immunosorbent assay (ELISA) and confirmed that PCCB knockdown decreased succinyl-CoA (SCOA) and α-KG (Fig. 4D, E), two key metabolites that connect the GABA shunt and TCA cycle 29,30. Considering that α-KG is an upstream metabolite that can be converted to GABA, we added α-KG (10 µg/ml) into the culture media of hFOs and found a restored GABA level (Fig. 4F) in PCCB knockdown hFOs. These results indicated that PCCB knockdown decreased the GABA level by reducing the TCA cycle (Fig. 4G).

PCCB knockdown in hFOs leads to abnormal electrophysiological activities

Since GABA, the major inhibitory neurotransmitter in the brain 31, was decreased in PCCB knockdown hFOs, we used a multielectrode array (MEA) recording assay to test whether PCCB knockdown affected neuroactivities. The PCCB knockdown and control hFOs at day 160 were seeded on a Matrigel-coated 24-well MEA plate (Fig. 5A). After 7 days of culture, the electroactivities of the hFOs were recorded (Fig. 5B, C, D). We found that PCCB knockdown in hFOs led to an increased number of spikes (Fig. 5E) and mean neuron firing rate (Fig. 5F), suggesting hyper-neuroactivity after PCCB knockdown in hFOs. However, PCCB knockdown decreased the synchronization of the neural network (Fig. 5G). Since hyper-neuroactivity and decreased synchronization of the neural network in SCZ brains have been reported by electroencephalography and magnetoencephalography 32, these results supported that PCCB knockdown led to abnormal electrophysiological activities that link to SCZ phenotypes.

Discussion

SCZ is a polygenic psychiatric disorder with risk contributed by multiple genes. Identifying genes whose expression is associated with SCZ risk by TWAS is a powerful approach to prioritize SCZ risk genes. By integrating multiple published datasets from TWAS, gene coexpression, and differential gene expression analysis, we prioritized PCCB as a reliable SCZ risk gene. PCCB encodes the β subunit of the propionyl-CoA carboxylase enzyme 28, a defect of which has been reported as a cause of propionic acidemia 33.

To investigate PCCB's contribution to SCZ risk, we performed RNA-seq analysis and identified that PCCB knockdown in U2F hFOs affected the expression of genes related to multiple neuronal functions and the GABAergic synapse pathway. To confirm the RNA-seq results, we generated hFOs using another hiPSC line (ACS-1011) (Fig. S4A, B), finding that PCCB knockdown also led to decreased expression of GABA receptor genes, including GABRA1, GABRA2, GABRB2, and GABRB3 (Fig. S4C), as detected in U2F hFOs. The downregulated GABAergic synapse pathway caused by PCCB knockdown attracted our attention, since GABAergic system dysfunction plays important roles in SCZ etiology 37,38. We performed the metabolomic analysis and confirmed the decreased GABA levels in PCCB knockdown hFOs. The following electrophysiological analysis showed that PCCB knockdown led to hyper-neuroactivity and decreased synchronization of neural network activities, cellular phenotypes reported to be associated with SCZ risk 32,39. Through the hFOs-based multi-omics analyses, we revealed the impacts of PCCB on neural functions and its connection to SCZ etiology, highlighting that PCCB may contribute to SCZ etiology through regulating the GABAergic system. Since the GABA shunt connects the GABA metabolism pathway and the TCA cycle 29,30, we expected that PCCB regulates the GABAergic system by affecting the TCA cycle. As expected, PCCB knockdown led to reduced production of SCOA and α-KG in the TCA cycle.
The α-KG produced from the TCA cycle can be converted into SCOA or serve as a source for GABA synthesis 40; the decrease of α-KG may therefore be responsible for the reduced production of GABA. On the other hand, PCCB knockdown led to the reduction of SCOA, which may promote the entry of GABA into the TCA cycle through the GABA shunt pathway and further reduce the GABA content in the cytoplasm 40. Overall, PCCB knockdown decreased GABA levels by reducing the content of α-KG and SCOA in the TCA cycle (Fig. 4G). As mitochondrial dysfunction has been reported to be associated with GABA dysfunction 41 and the etiology of SCZ 42,43, our study provides evidence for how mitochondrial dysfunction may contribute to SCZ risk.

In addition to mitochondrial dysfunction, one of the major effects of a PCCB defect is the cellular accumulation of propanoic acid, propionyl carnitine, and other metabolites. Indeed, we did observe a dramatic increase of propanoic acid and propionyl carnitine in PCCB knockdown hFOs (Table S3). Interestingly, exposing hFOs to propanoic acid (3.5 µM) led to significantly decreased expression of GABA receptor genes (GABRA1, GABRA2, GABBR2, and GABBR3) (Fig. S5), consistent with those observed in PCCB knockdown hFOs. These results suggested that the accumulation of propanoic acid may mediate the effects of PCCB knockdown, and partially explain how propionic acidemia can lead to neuropathological symptoms. They also suggest potential effects of short-chain fatty acids on SCZ risk, since short-chain fatty acids including propanoic acid, acetic acid, and butyric acid were found to be upregulated in the serum of SCZ patients 44.

While this study reveals the connection between PCCB and SCZ risk, some limitations exist. First, PCCB knockdown affects multiple types of synapses, including GABAergic, glutamatergic, dopaminergic, and cholinergic synapses, as revealed by the RNA-seq analysis. The cell-type specific effects of PCCB knockdown are unclear; further investigations such as RNA-seq and other omics analyses at the single-cell level are needed. Second, we showed that SCZ-associated SNP rs35874192, an eQTL SNP for PCCB, affected transcriptional activities, but whether SNP rs35874192 affects PCCB expression in vivo remains unclear. In the future, CRISPR-Cas9 gene editing should be used to confirm the regulatory effects of SNP rs35874192 on PCCB expression. In summary, this study used hFOs-based multi-omics analyses and revealed a connection between PCCB and SCZ, highlighting that PCCB may contribute to SCZ etiology through regulating the GABAergic system and mitochondrial function.

Prioritization of SCZ risk genes and SNPs

We combined the published results from TWAS 8-11, MR-JTI 15, and SMR 2,10,12,13 analyses to prioritize SCZ risk genes with sufficient supporting evidence. We also checked whether the prioritized genes are located in SCZ risk-associated gene coexpression modules or are differentially expressed in postmortem brains of SCZ patients. To prioritize SCZ risk SNPs, we first collected the top SNPs in the TWAS analysis. We then retrieved SNPs in LD (r2 ≥ 0.6, European population genome) with the top SNPs. We prioritized candidate causal SNPs that likely affect gene expression in the brain using the following criteria: 1) candidate SNPs are eSNPs for the SCZ risk genes in the brain based on the BrainSeq 24, GTEx 23, or PsychENCODE eQTL data 45. Cell culture media were supplemented with 1% penicillin/streptomycin (Gibco, 10378016).
As described in our previous study 46, hNPCs were induced from U2F hiPSCs using the STEMdiff™ Neural Induction Medium (STEMCELL Technologies, 05835). The hNPCs were cultured in Matrigel-coated plates and maintained in the STEMdiff™ Neural Progenitor Medium (STEMCELL Technologies, 05833). For the DLRA, 1 × 10^5 cells per well were plated into 24-well plates. After 24 hours of culture, 500 ng of recombinant pGL3-basic luciferase reporter vector and 20 ng of PRL-TK Renilla internal control vector per well were co-transfected into the cells using Lipofectamine™ 3000 (Invitrogen, L3000015). 36 hours post transfection, the Firefly and Renilla luciferase activities were measured on the LumiPro luminescence detector (Lu-2021-C001) using the DLRA kit (Vazyme, DL101-01). Experiments were conducted in three biological replicates.

RT-qPCR Analysis

Total RNA was used to generate complementary DNA using HiScript III RT SuperMix for qPCR (+ gDNA wiper) (Vazyme, R323-01). The RT-qPCR assay was performed using ChamQ SYBR qPCR Master Mix (Vazyme, Q711-02) on the Real-Time PCR System (Roche, LightCycler 480 II). GAPDH was used as the internal reference gene. At least three technical replicates were used in the RT-qPCR analysis. RT-qPCR primers are provided in Table S4.

Generation of hFOs

The established PCCB knockdown and control hiPSCs were used to generate hFOs using the STEMdiff Dorsal Forebrain Organoid Differentiation Kit (STEMCELL Technologies, 08620) based on the manufacturer's instructions with some modifications. Briefly, hiPSCs were dissociated into single cells using Accutase solution (Sigma-Aldrich, A6964) on day 0. 1 × 10^4 cells per well were then plated into a 96-well round-bottom ultra-low attachment plate (Corning, 7007) and fed with 50 µL Forebrain Organoid Formation Medium supplemented with 1× Penicillin/Streptomycin and 10 µM Y27632 (Selleck, SCM075). On day 3, 50 µL of fresh Forebrain Organoid Formation Medium without Y27632 was gently added to each well. On day 6, the medium was replaced with the Forebrain Organoid Expansion Medium. On day 25, the Forebrain Organoid Expansion Medium was replaced with the Forebrain Organoid Differentiation Medium. From day 43, organoids were cultured in the Forebrain Organoid Maintenance Medium. hFOs were characterized using immunostaining as described in our previous studies 46,47.

Bulk Organoid RNA-seq and Data Analysis

Two hFOs from the PCCB knockdown or control group were randomly selected and pooled together as one mixed sample (day 60, N = 5 in each group) for total RNA extraction using the miRNeasy Mini Kit (Qiagen, 217004). RNA quality was evaluated on the Agilent 2100 Bioanalyzer system. RNA samples with RNA integrity numbers over 7 were used for RNA-seq (150 bp, paired-end) on the Illumina NovaSeq 6000 system. Raw RNA-seq data were filtered to get clean reads using FastQC (v0.20.0). The clean reads were aligned to the human genome hg38 using STAR (v2.7.9a). Gene expression quantification was conducted using RSEM (v1.3.0) based on Gencode v40 comprehensive gene annotation. The filterByExpr function in the edgeR package (v3.36.0) was used to filter out low-expression genes. The sva function in the SVA package (v3.42.0) was used to estimate batch effects and other artifacts. Differential gene expression analysis between the PCCB knockdown and control groups was performed using the DESeq2 package (v1.34.0). P values were adjusted using the Benjamini-Hochberg method.
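For reference, the Benjamini-Hochberg adjustment applied to the DEG P values can be written in a few lines; this sketch mirrors R's p.adjust(method = "BH") and is not the pipeline code itself.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment of a vector of P values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)   # p_(i) * m / i
    # enforce monotonicity from the largest rank downwards
    adj = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out

print(benjamini_hochberg([0.001, 0.01, 0.02, 0.3, 0.9]))
```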
To annotate the functions of the DEGs, we used the online tool WebGestalt2019 (http://www.webgestalt.org/) to perform GO and KEGG pathway enrichment analyses.

PPI Analysis

The STRING database (v11.5) (http://www.string-db.org/) was used to construct a high-confidence (interaction score > 0.7) PPI network for the PCCB-induced DEGs. Active interaction sources included text-mining, experiments, databases, coexpression, neighborhood, gene fusion, and co-occurrence. The PPI network was visualized using Cytoscape (v3.9.1). CytoHubba 49, a Cytoscape plugin, was used to explore hub nodes in the PPI network.

Metabolomic Analysis of Bulk hFOs

For metabolomic analysis, three hFOs (day 60) in the PCCB knockdown or control group were randomly selected and pooled together as one mixed sample. Five mixed samples in each group were then used for HM400 metabolomic analysis (Beijing Genomics Institute, China). Briefly, hFOs or quality control samples were lysed in 140 µL of 50% water/methanol solution. The lysate was centrifuged (18000 g, 4 °C, 20 min) to obtain the supernatant, which was used for the derivatization reaction and then centrifuged at 4000 g, 4 °C for 10 min. The resulting supernatant was used for high performance liquid chromatography tandem mass spectrometry (LC-MS/MS) analysis on the SCIEX QTRAP 6500+ LC-MS/MS system. The liquid chromatography column was a BEH C18 (2.1 mm × 10 cm, 1.7 µm, Waters). The mass spectrometry mode was ESI+/ESI−. The content of metabolites (µmol/g) was quantified using the HMQuant software based on the formula C × 0.14/m, where C represents the calculated concentration (µmol/L) and m represents the sample weight (mg). A two-tailed t-test was used to identify differential metabolites between PCCB knockdown and control hFOs. Functional annotation of the differentially expressed metabolites was performed using the online tool MetaboAnalyst (https://www.metaboanalyst.ca/).

Figure 1 A, B, and C DLRA results for PCCB eSNPs rs35874192 (A), rs900818 (B), and rs7349597 (C) in hNPCs and SH-SY5Y. The PRL-TK Renilla vector was used as the internal control. Data are shown as Mean ± SEM. An unpaired two-tailed t-test was used for comparison between two groups. **P < 0.01, ***P < 0.001. Reference eQTL plots in this figure were downloaded from the GTEx portal (https://www.gtexportal.org/home/) and the BrainSeq phase I eQTL data (http://eqtl.brainseq.org/phase1/eqtl/).

Figure 2 Functional effects of PCCB knockdown in hFOs. A Workflow of hFO culture in this study. Scale bar, 200 μm and 300 μm. B, C RT-qPCR analysis for PCCB expression in hiPSCs (B) and hFOs (C). D Immunostaining characterization of hFOs. Cortical plate marker, MAP2; intermediate zone marker, TBR2; ventricular zone markers, SOX2 and Ki67; forebrain-specific markers, FOXG1 and PAX6. Scale bar, 50 μm. E PCA plot for the PCCB knockdown and control hFOs. PCCB-NC is shown with green dots and PCCB-G1 with red dots. F Volcano plot of DEGs between the PCCB knockdown and control hFOs. Upregulated genes are shown with red dots and downregulated genes with blue dots. G, H GO and KEGG analysis for PCCB-induced upregulated (G) and downregulated (H) DEGs, respectively. GO terms are shown with red bars, and KEGG terms with blue bars. I PPI network analysis for the 350 shared PCCB-induced downregulated genes between PCCB-G1 and PCCB-G2 hFOs. The top 10 hub nodes are shown in orange. J RT-qPCR analysis for GABRA1, GABRA2, GABRB2, and GABRB3 (the hub nodes in the PPI network).
The two-tailed Student's t-test was used to assess differences between the PCCB-NC and PCCB-G1 or PCCB-G2 groups. **P < 0.01, ***P < 0.001, ****P < 0.0001.

PCCB knockdown leads to mitochondrial dysfunction in hFOs. A Bright fields of hFOs cultured in a 24-well MEA plate. Scale bar, 100 μm. B Representative burst traces for individual electrodes recorded from hFOs. C Schematic diagram of a single unit event. D Raster plot diagram of synchronized burst activity. Each pink box represents a synchronized burst. E, F, and G PCCB knockdown increased the number of spikes (E) and mean neuron firing rate (F), but reduced synchronized burst activity in hFOs (G). Data are shown as Mean ± SEM (averaged organoids N = 8). The two-tailed Student's t-test was used to assess differences between the PCCB-NC and PCCB-G1 or PCCB-G2 groups. *P < 0.05, ****P < 0.0001.
Influence of external load on forced vibration of rectangular plate

The rectangular plate is a typical structure with wide application in engineering, and its forced vibration is the basis for the analysis of the sound radiation of plates. In this work, numerical simulation is conducted for the forced vibration of a rectangular plate, and the influence of the external load is studied. Numerical results show that the rectangular plate resonates under external load at certain driving frequencies, which are strictly related to the structural modal frequencies. The structural vibration response changes markedly with the location of the external load. When the location is fixed, with increasing action region of the external load, the acceleration of the plate decreases and some new resonance frequencies are observed. The conclusions here lay a foundation for further work on the sound radiation of plates.

Introduction

The rectangular plate is a typical structure widely applied in aerospace, ship structures, marine structures, and so on. The vibration characteristics of rectangular plates have received great attention from researchers, since they are the foundation for the sound radiation of plates. Bardell et al. [1] applied the Ritz method to analyze the in-plane vibration of rectangular plates with different boundary conditions. Gorman [2-6] put forward a new superposition method for solving the vibration of rectangular plates with free, simply-supported and clamped boundary conditions. Based on the differential quadrature method (DQM), Xing et al. [7] put forward a new method with high accuracy for the in-plane vibration of rectangular plates. Du et al. [8,9] used Fourier series to analyze the vibration of rectangular plates with elastic boundary conditions. However, it is also important to clarify the influence of the external load on structural vibration. In this work, numerical analysis of the forced vibration of a rectangular plate is carried out, and the influence of the external load (i.e., its location and action region) on structural vibration is studied.

Numerical simulation of forced vibration

2.1 Structural model

The finite element model of the rectangular plate is shown in Fig. 1; the plate is of length 2640 mm, width 800 mm and thickness 10 mm. Both long edges are clamped. The material of the plate is 316L steel.

Forced vibration

Harmonic vibration of the plate is computed here by applying an external unit point force at the center of the plate in the normal direction. The frequency range for the numerical analysis is 10 Hz-3000 Hz. The mean acceleration of the plate is plotted in Fig. 2. Note that the plate resonates at certain excitation frequencies, i.e., 80 Hz, 430 Hz, 1120 Hz, and so on. Because the frequency sweep uses a step of 10 Hz in the numerical simulation, a small difference is noted between the structural modal frequencies and the resonance frequencies.

Location of point excitation

To analyze the influence of load location on the structural vibration response, three cases with the load at the center, at L/4 and at the short edge are calculated, and the mean acceleration of the plate is plotted in Fig. 3. Note that the change of load location affects the amplitude of the vibration acceleration: the main resonance frequencies remain the same, while some new resonance frequencies are observed. Since the structural modal frequencies do not change, the change of load location leads to different contributions of the structural modes to the vibration response, and finally results in different structural responses.
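The location effect can be made concrete with a minimal modal-superposition sketch: each mode contributes to the acceleration response in proportion to its mode-shape value at the drive point, so a load placed on a nodal line of a mode suppresses that resonance. All numerical values below are illustrative (unit modal masses, assumed damping), not taken from the FE model.

```python
import numpy as np

# Illustrative modal data; the modal frequencies echo the resonances
# reported in the text, everything else is assumed for the sketch.
f_n   = np.array([80.0, 430.0, 1120.0])   # modal frequencies, Hz
zeta  = 0.02                              # assumed modal damping ratio
phi_d = np.array([1.0, 0.0, 1.0])         # mode shapes at the drive point
phi_r = np.array([0.8, 0.9, 0.7])         # mode shapes at the response point

def accel_frf(f):
    """Acceleration FRF for a unit harmonic force, summed over modes."""
    w, wn = 2 * np.pi * f, 2 * np.pi * f_n
    H = (phi_d * phi_r) / (wn**2 - w**2 + 2j * zeta * wn * w)
    return -w**2 * np.sum(H)              # acceleration = -w^2 x displacement

# Mode 2 has phi_d = 0 here (drive point on its nodal line), so the
# 430 Hz resonance vanishes; moving the load restores it.
print(abs(accel_frf(430.0)))
```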
Action region of external load

A point load is an idealization; the finite action region of a load should be considered in engineering. Therefore, the effect of the action region on the structural vibration response is studied in this section. Three cases of action region, 120 mm × 70 mm, 120 mm × 100 mm and 340 mm × 180 mm, were calculated and compared. Numerical results are shown in Fig. 4, and the total acceleration levels in 10 Hz-3000 Hz are listed in Table 1. It is noted that: 1) the general distributions of acceleration versus driving frequency in the three cases are similar, and the main resonance frequencies are the same since the structural modal frequencies do not change; 2) with increasing action region, a few new resonance frequencies are noted, since the change of action region changes the contributions of the structural modes to the vibration response, and the contributions of a few modes increase and become dominant; 3) with increasing action region, the structural acceleration level decreases.

Conclusion

In this work, numerical simulation is conducted for the forced vibration of a rectangular plate, and the influence of the external load is studied. Numerical results show that the rectangular plate resonates under external load at certain driving frequencies, which are strictly related to the structural modal frequencies. The structural vibration response changes markedly with the location of the external load. When the location is fixed, with increasing action region of the external load, the acceleration of the plate decreases and some new resonance frequencies are observed. The conclusions here help to give a better understanding of the sound radiation of rectangular plates.
Extending the energy-power balance of Li-ion batteries using graded electrodes with precise spatial control of local composition

Graded electrodes, fabricated by a layer-by-layer deposition technique with precise spatial control of local composition, are compared with conventional electrodes: at an energy density of 500 Wh L−1 the best graded electrode design increased power density from approximately 100 W L−1 to 630 W L−1, while at a power density of 300 W L−1, the energy density increased from approximately 420 Wh L−1 to 600 Wh L−1. The results highlight the potential for new manufacturing approaches and electrode designs to provide performance enhancements for existing and future Li-ion battery chemistries.

• Energy-power trade-off in Li-ion cells is mitigated by composition graded electrodes.
• A composition gradient across an electrode thickness is realized by spray deposition.
• A heterogeneous electrode structure increases active materials utilization.
• A trapezoidal graded distribution increases the power performance of electrodes.
• C-rate response and cycling behaviour are enhanced by composition graded electrodes.

Introduction

To meet the performance demands of energy storage applications in electric vehicles (EVs) there is a need for improved Li-ion batteries (LIBs) that combine high energy density with improved power density, slower capacity degradation and reduced cost [1-3]. Existing commercial LIBs trade energy density for power density (and vice versa), and the performance window for both high power and high energy density is comparatively restricted [4]. Most effort on widening LIB energy-power performance has concerned the formulation of novel battery chemistries with intrinsically fast Li-ion diffusion and high energy density [5,6]. While steady progress has been made in LIB energy density (and huge progress in cost reduction), a step-change in LIB power-energy performance via a new electrochemically active material may be achievable only at increased cost [7]. Meanwhile, time is running out to meet carbon neutral commitments, e.g. 2050 in the UK and the EU, and advances in energy storage performance must be accelerated [8].

The energy-power density trade-off in LIBs applies to all negative and positive electrodes and their combination because it is a consequence of the way batteries are constructed, in particular the electrode microstructure [9]. Electrochemical reaction kinetics are generally faster than the mass transport of Li-ions in either the solid active particles or the liquid electrolyte [10,11]. Consequently, at all but the very slowest charge/discharge rates, diffusion control of electrochemical processes leads to a heterogeneous spatial distribution of the Li-ion concentration and state-of-charge (SOC), most strongly through the electrode thickness [12-16]. Although the movement of Li ions in the electrolyte is generally tortuous because of the complex, inter-connected pore structure, the net ionic flux is principally through-thickness and, for example, typically leads to Li-ion enrichment close to the separator and depletion close to the current collector for a positive electrode during discharging [17,18]. At increased C-rates, i.e. to sustain a high reaction current, regions of low Li-ion concentration lead to a local increase in diffusion overpotential and contribute to the total charge-transfer overpotential in Butler-Volmer reaction kinetics [19,20].
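For orientation, the Butler-Volmer relation mentioned above can be evaluated directly; the sketch below uses assumed, illustrative kinetic parameters rather than values from any electrode studied here.

```python
import numpy as np

# Physical constants and assumed (illustrative) kinetic parameters.
F, R, T = 96485.0, 8.314, 298.15   # C/mol, J/(mol K), K
i0      = 1.0                      # exchange current density, A/m^2 (assumed)
alpha_a = alpha_c = 0.5            # symmetric transfer coefficients (assumed)

def butler_volmer(eta):
    """i(eta) = i0 [exp(alpha_a F eta / RT) - exp(-alpha_c F eta / RT)]."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))

# A larger overpotential is required for the same current wherever local
# Li-ion depletion adds a diffusion overpotential on top of this term.
print(butler_volmer(0.05))   # current density at 50 mV overpotential
```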
As a result, the cut-off voltage for charge or discharge is reached within a much shorter period than at lower C-rates, and local insertion/de-insertion reactions from active materials are incomplete [18,21]. Consequently, only a fraction of the available capacity is achieved at fast rates [12,17,22]. A further consequence of overpotential heterogeneity is that some regions of the electrode are "over-charged", which leads to unnecessary particle pulverisation [13,21] and capacity fade, or even the well-known case of local Li plating in graphite-based negative electrodes [23]. Commercially, electrodes are optimized towards high energy or high power characteristics [24]: high energy cells use thicker electrodes and/or a higher active material weight fraction to reduce the number of current collectors and separators, but sacrifice power density [4]; high power cells use thinner electrodes, and/or smaller sized active particles, higher porosity, and/or more carbon additives, but with a sacrifice in energy density [24,25].

Compared with the exploration of new LIB chemistries, the design of more favourable electrode structures has received only recent attention [26], partly due to a lack of manufacturing flexibility. Currently, almost all LIB electrodes are manufactured by mixing the active material, carbon conductive additives and a polymeric binder in a liquid that suspends the actives and additives and dissolves the binder [9]. The mixture is continuously coated onto metal foil current collectors through a slot-die, followed by evaporation of the carrier/solvent liquid and calendaring [9]. This process has been developed spectacularly to the giga-factory scale (typically 20-40 GWh production per year) [27]. From a microstructural point of view, the objective of manufacture is to produce an identical, essentially random mixture of the constituents through the electrode thickness and within the plane (over many hundreds of m2), uniformly well-adhered to the foil. As installed capacity continues to ramp up, there is inertia to explore other manufacturing approaches that might offer more microstructural control but where the cost-benefits are not clear or sufficiently advantageous. For example, the benefits of aligning electrode porosity in the through-thickness direction have been shown by simulation and experiment for many years, but a competitive, productive route for manufacture remains elusive, and engineered porosity (for example by sacrificial pore templating) usually increases pore fraction and undermines volumetric performance [28-30]. There remains controversy about the benefits, and under what conditions, of graded microstructure electrodes in which local particle size, fractions of materials, or type of material deliberately varies place to place [31-34]. At very slow charge rates, the cell is close to equilibrium conditions, and capacity is controlled only by thermodynamic considerations such as the total amount of electrochemically active material and its intrinsic capacity. Graded electrodes should not offer any benefit under these conditions, and only when dynamic or kinetic effects become important might a microstructural influence become apparent. In general, simulation has been a more convenient approach to explore the dynamics of graded electrodes because processes that control microstructure point-to-point during manufacture have not been available until recently [34-37].
Where grading has been investigated experimentally, there has been difficulty in fabricating electrodes that offer a fair comparison of graded versus conventional electrodes. For example, when active particle diameter is graded through-thickness, the local porosity fraction also tends to vary, so that electrodes of similar weight have different thicknesses. This type of interdependency makes it difficult to unpick experimentally the underlying benefits (or deficiencies) of graded electrodes and microstructural design. Previous experimental work on graded electrodes has included local pore fraction or particle size grading [25,31,32,34,38], pore templating [28,29,39], and layered arrangements using multiple slurry casting/drying/calendaring steps [37,40]. These studies have shown some benefits of grading under some configurational and charge/discharge conditions, as well as some practical challenges such as higher electric resistivity at layer interfaces [41] and inter-layer cracking during long-term cycling [42]. It was demonstrated experimentally for LiFePO4 and Li4Ti5O12 particles that if the local fraction of conducting carbon additive was increased close to the current collector, the C-rate performance was much better than for uniformly distributed, random mixture electrodes [35,36], consistent with similar findings for multi-layered electrodes [43]. The improved performance was attributed to lower impedance and homogenization of local reactivity and overpotential, supported by modelling insights [44]. Overall, the literature suggests that grading can, in some circumstances, be beneficial, but there is no "universal" optimum, and any optimum will depend on the active material involved, electrode thickness and porosity, and the performance metrics of interest. Indeed, for some electrode formulations a random, uniform mixture may be close to optimum, and only the relative fraction of each component is important.

We present the manufacture and comparative performance of various LIB positive electrodes with controlled local fractions of active material (LiFePO4, LFP), conductive carbon and binder through the electrode thickness. We select LFP because electrode performance is known to be sensitive to grading effects [35,36]; it also has a relatively low intrinsic capacity, making optimization to sustain capacity performance valuable. We investigate active materials loadings of 90 wt% that are significantly higher than in previous work [35,36]. The electrodes are additively manufactured by a layer-by-layer spray deposition technique, achieving through-thickness grading with near μm-scale resolution. Critically, we vary local composition while producing a range of electrodes with the same overall approximate weight, porosity, and proportion of materials. This level of compositional control, and the ability to use exactly the same materials used in the ubiquitous slurry casting electrode fabrication route, allows detailed "back-to-back" comparison. While previous work considered only monotonic through-thickness grading arrangements, we explore designs that increase local carbon and binder (CAB) fractions in the region of the electrode lower (electrode/current collector) and upper (electrode/separator) interfaces, and maximize the active fraction in electrode central regions, producing a "trapezoidal" or "flat top mountain" distribution of active material through the thickness (e.g. Fig. 1b).
Higher binder fraction at the current collector may increase adhesive strength and mechanical stability, while more carbon can reduce contact resistance [45]. The benefits of more CAB towards the separator are more speculative, and part of the hypothesis explored here. Benefits might include: (i) reducing the fraction of active material exposed to the highest local Li concentrations in the near-separator region, and so helping to avoid excessive SEI formation or other degradation mechanisms [46,47], and (ii) increasing the local ionic mobility at the electrode-separator interface to facilitate ion transport and so prevent excessively steep gradients in bulk electrolyte concentration forming across the electrode [44]. Another feature of the graded arrangements is that, for a constant overall CAB fraction as used here, relatively high local CAB fractions at the current collector and separator regions must be balanced by a reduction in the CAB fraction in the electrode central region. In turn, this may reduce CAB blocking or shadowing of active particle surfaces in this region, which may help to promote higher achievable electrode capacities.

Graded electrode fabrication

A layer-by-layer, additive manufacture spray deposition route for LIB electrodes has been developed (Fig. 1a) that fabricates A5 (148 mm × 210 mm) area double-sided electrodes for pouch cells [48-51]. The process operates with conventional electrode slurries and compositions but with a greater dilution to enable atomization into a spray that deposits on a current collector foil. During spraying, a peristaltic pump controls the flow rate of suspension to the spray head, where it is atomized using compressed air. The arising cone-shaped spray plume is scanned cyclically under computer control over a foil current collector in a zig-zag pattern using an x-y-z manipulator gantry. The foil is attached to a heated vacuum chuck held at 140 °C to promote near-instant drying of the droplets and evaporation of the solvent on deposition. There is no significant re-suspension of previously deposited and dried layers, so the instantaneous, local fraction of electrode components is "frozen" into the electrode structure. The principal benefit is the ability to change the spray composition over time, or to mix multiple sprays, allowing electrodes with through-thickness (or in-plane) variations in composition, particle size, binder fraction, discrete inter-layers, etc. to be fabricated reproducibly and relatively quickly. Previous work showed the benefits of a simple, monotonically varying active material and carbon distribution in LFP-based electrodes [35], but here we investigate whether any further benefit can arise for more complex trapezoid arrangements, shown schematically in Fig. 1b, i.e. the active fraction starts relatively low at the current collector (and the CAB fraction is correspondingly relatively high), increases with electrode thickness to a maximum plateau fraction, and then reduces again at the separator. Table 1 lists the electrode materials for each arrangement.

Table 1: Materials and formulations used to produce graded, trapezoid-shaped through-thickness compositional variations. The overall weight ratio of LFP active material:carbon:binder was 90:5:5 (wt.%) for Trapezoid 1 to 4 and Uniform-90, and 80:10:10 (wt.%) for the Uniform-80 electrode. To fabricate each electrode, a total of 3 g of solid materials was sprayed onto the current collector under otherwise identical conditions.
All the graded electrodes had overall electrode active:carbon:binder compositions of 90:5:5 (wt.%), along with uniform electrodes of either 90:5:5 or 80:10:10. To ensure similar electrode thicknesses and areal loading, the total electrode mass sprayed and all other parameters (including electrode porosity (see later), moving velocity of the spray nozzle (20 mm s−1), distance between nozzle and substrate (150 mm), pressure of the compressed air (0.4 bar), temperature of the substrate (140 °C), pumping rate of the suspension (see below), etc.) were kept constant for all electrodes during fabrication. To fabricate the trapezoidal active material distributions, three suspensions A, B and C were used (Table 1). Step 1: suspensions A and B were each divided into two equal-volume suspensions, A1 and A2, and B1 and B2. Step 2: suspension A1 was pumped into suspension B1 at a rate of 2.25 mL min−1 while suspension B1 was sprayed at 4.5 mL min−1 onto the heated current collector via a nozzle (ViscoMist, Lechler GmbH, Germany). Because suspension A1 flowed into suspension B1, the fraction of electrode materials deposited gradually changed. Both suspensions were magnetically stirred throughout, until both were exhausted. Step 3: suspension C was sprayed at 4.5 mL min−1 until exhausted. Step 4 was the exact inverse of Step 2: suspension B2 was pumped into A2 at a rate of 2.25 mL min−1 and A2 was simultaneously sprayed at 4.5 mL min−1 until both were exhausted. A MATLAB® code was used to calculate the resulting, nominal through-thickness local weight ratios [35].

Coin cell assembly

The areal materials loading of all the LFP cathodes, irrespective of the materials arrangement, was 15.4 ± 0.7 mg cm−2. Note these areal electrode loadings are significantly higher than in most comparable studies, which use 3-5 mg cm−2 [52]. Higher loadings were chosen to amplify any differences between the arrangements, and because higher loadings are always more desirable so long as performance is not compromised. The LFP cathodes were dried overnight at 60 °C and then calendered to similar porosities (55-56%), and punched disks of 12 mm diameter were obtained. For the 90:5:5 graded and uniform electrodes, the calendered thicknesses were 104 ± 7 μm, electrode densities were 1.48 ± 0.04 g cm−3, and electrode porosities were 54.6 ± 1.4% (averaged over ~50 electrodes, with typically 8 electrodes fabricated for each arrangement). Applying a t-test to these data, at a confidence level of 95%, the confidence interval for electrode thickness was ±2.0 μm, for electrode density ±0.024 g cm−3, and for porosity ±0.74%. For the 80:10:10 uniform electrodes, the calendered thicknesses were 121 ± 2 μm, electrode densities were 1.30 ± 0.02 g cm−3, and electrode porosities were 56.3 ± 0.7%. The electrode porosity p was calculated according to

p = 1 − ρ (f_AM/ρ_AM + f_C/ρ_C + f_B/ρ_B),

where ρ is the electrode density; f_AM, f_C, and f_B are the weight fractions of active material, carbon and binder, respectively; and ρ_AM = 3.6 g cm−3, ρ_C = 1.8 g cm−3, and ρ_B = 1.74 g cm−3 are the densities of the active material, carbon (Super P) and binder (PVDF), respectively. Although the electrodes with f_AM = 80% had a lower calendered density than those with f_AM = 90%, once the relative densities of the differing fractions were accounted for, the electrode porosities were all closely similar (55-56%), with an electrode materials loading of 15.4 ± 0.7 mg cm−2.
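The porosity relation and the quoted statistics are easy to verify. A minimal sketch follows; the only assumption is that the quoted 104 ± 7 μm is mean ± sample standard deviation over the ~50 electrodes mentioned in the text.

import math
from scipy import stats

def porosity(rho, f_am, f_c, f_b, rho_am=3.6, rho_c=1.8, rho_b=1.74):
    """Electrode porosity from the relation in the text:
    p = 1 - rho * (f_AM/rho_AM + f_C/rho_C + f_B/rho_B)."""
    return 1.0 - rho * (f_am / rho_am + f_c / rho_c + f_b / rho_b)

print(round(porosity(1.48, 0.90, 0.05, 0.05), 3))   # ~0.546 -> 54.6% (90:5:5)
print(round(porosity(1.30, 0.80, 0.10, 0.10), 3))   # ~0.564 -> 56.3% (80:10:10)

# 95% t-based confidence half-width for the mean thickness, assuming the
# quoted 104 +/- 7 um is mean +/- sample standard deviation with n = 50:
n, s = 50, 7.0
half_width = stats.t.ppf(0.975, n - 1) * s / math.sqrt(n)
print(round(half_width, 1))                          # ~2.0 um, as quoted

Both quoted porosities and the ±2.0 μm thickness interval are reproduced; the density and porosity intervals depend on the exact per-group sample sizes, which are not fully specified.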
CR2032 half-cells were assembled with the LFP-based positive electrodes working against Li foil, with a Celgard separator and 1 M LiPF6 in ethylene carbonate/dimethyl carbonate (EC/DMC = 50/50 v/v, Sigma-Aldrich, UK) electrolyte. Before assembly, all cell components were stored in a vacuum oven at 70 °C inside an Ar-filled glovebox for more than 5 h and assembled into cells within the same glovebox. As-assembled cells were aged for 6-12 h before testing.

Electrochemical testing

Coin cells were tested using a battery cycler (Arbin Instruments, USA, Models: BT-G-25 and IBT21084LC) in the potential range 2.5-4.2 V vs. Li/Li+ for LFP half-cells, at room temperature and at various C-rates from 0.1 to 7C. Here, 0.1C corresponded to 17.0 mA g−1. Within each cycle, charging and discharging were performed at the same C-rate.

Materials characterization

After electrode calendering, pristine electrode cross-sections were observed in a Carl Zeiss Merlin (Germany) high-resolution field emission scanning electron microscope (FE-SEM) combined with an Oxford Instruments (UK) Xmax 150 energy-dispersive X-ray spectroscopy (EDX) detector. EDX element mapping and line scanning across the electrode thickness were performed to obtain qualitative element distributions. Electrode cross-sections shown in Fig. 2 were prepared by breaking the electrodes with tweezers.

Results and discussion

The plots in Fig. 2a-d show the ideal, designed local fraction variations as a function of distance from the electrode surface (close to the separator) "down" towards the current collector for the electrodes given in Table 1. All the electrodes had the same overall composition ratio of active material LFP:carbon:binder of 90:5:5 (wt.%), and the same overall coating thickness and density, but with different local composition variations through the electrode thickness: a uniform distribution in Fig. 2a (U90 hereafter) and trapezoidal distributions 2, 3 and 4 in Fig. 2b, c and d (T2, T3, T4 hereafter). Superimposed solid symbols in Fig. 2a-d show EDX line scan intensity data from spray-printed electrode cross-sections for Fe Kα1, C, and F Kα1, which depict the distribution of LFP, carbon conductive additive and PVDF binder, respectively. The EDX intensity data cannot be interpreted directly as the local weight fraction of each of the materials. Nonetheless, the EDX traces provided reassurance that the experimental distributions qualitatively matched and differentiated the uniform (Fig. 2a) and the trapezoidal designs (Fig. 2b-d). For example, the increased thickness of the carbon- and binder-rich region at the electrode periphery, from 15% of total thickness in T3 (Fig. 2c) to 25% in T4 (Fig. 2d), was clearly differentiated. As intended, the electrode thicknesses were almost identical. Fig. 2e-h show EDX element maps superimposed on the SEM images (shown in Fig. S1 of the Supplementary Data) of the uniform and graded electrode cross-sections corresponding to the designs in Fig. 2a-d, respectively. The micrographs and maps show there were no step changes in local composition and no internal delamination. For ease of comparison, Fig. 3a shows the electrode materials design for U90 and the graded electrodes T1, T2 and T3 (Table 1) in a single plot. The relative thickness of the central plateau of high LFP fraction was maintained at 70% of the total thickness, but the graded regions differed.
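The statement "0.1C corresponded to 17.0 mA g−1" implies a reference capacity of 170 mAh g−1, the usual nominal value for LFP; that inference is the only assumption in the short conversion sketch below.

def specific_current_ma_per_g(c_rate, q_ref_mah_per_g=170.0):
    """Specific current for a given C-rate, referenced to the nominal LFP
    capacity of 170 mAh/g implied by '0.1C = 17.0 mA/g' (an inference)."""
    return c_rate * q_ref_mah_per_g

def areal_current_ma_per_cm2(c_rate, loading_mg_per_cm2=15.4):
    """Areal current density at the stated areal loading of 15.4 mg/cm^2."""
    return specific_current_ma_per_g(c_rate) * loading_mg_per_cm2 / 1000.0

print(specific_current_ma_per_g(0.1))             # 17.0 mA/g, as stated
print(round(areal_current_ma_per_cm2(1.0), 2))    # ~2.62 mA/cm^2 at 1C
print(round(areal_current_ma_per_cm2(7.0), 1))    # ~18.3 mA/cm^2 at 7C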
For example, in the plateau region, T1 and T2 had the same composition ratio of 93:3.5:3.5 (wt.%) and T3 had a ratio of 95:2.5:2.5, but at the edges of the graded region the ratios were 76:12:12 for T1, 59.8:20.1:20.1 for T2, and 49.4:25.3:25.3 for T3. Fig. 3b shows that at low C-rate (0.1C) the electrodes had similar discharge capacities of ~147 mAh g−1, and microstructural design played no differentiating role. As the C-rate increased up to 3C, all the electrodes had reduced capacity, as should be expected, particularly given the relatively high electrode thicknesses of ~100 μm after calendering. Electrode T3, with the highest CAB fraction and lowest active fraction at the periphery but the highest active fraction in the plateau region, retained the highest capacity, ~3 times that of the uniform counterpart at 1C. All the electrodes had similar degradation rates during cycling at 0.5C (Fig. 3c), but the improved capacity retention of the graded electrodes compared with the uniform electrode was maintained. These data suggest that electrode capacity was sensitive to the micro-arrangement of constituents: the C-rate performance was improved by redistributing CAB to the electrode upper and lower regions, with a consequent increase in active material fraction in central regions.

Galvanostatic charge-discharge curves for U90, T1, T2 and T3 electrodes are plotted in Fig. 4a-d, respectively. The ability to maintain capacity with increasing C-rate from 0.1C to 1C was confirmed as U90 < T1 < T2 < T3, with discharge capacities at 1C of 43, 62, 89 and 120 mAh g−1, respectively. The corresponding first derivative of capacity with respect to voltage (dQ/dE) for each electrode is plotted as a function of voltage in Fig. 4e-h.

In the widely used Newman model of Li-ion batteries [53], the Butler-Volmer equation is used to describe the reaction kinetics at the active particle/electrolyte interface:

i_n = i_0 [exp(α_a F η/(R T)) − exp(−α_c F η/(R T))],   (2a)

i_0 = F k (c_e)^{α_a} (c_{s,max} − c_s)^{α_a} (c_s)^{α_c},   (2b)

where i_n is the current density normal to the surface of the active material due to Li intercalation/deintercalation, i_0 is the exchange current density, η is the overpotential, F is the Faraday constant, R is the gas constant, T is absolute temperature, α_a and α_c are the anodic and cathodic charge-transfer coefficients, k is a reaction rate constant, c_e is the local Li+ concentration in the electrolyte, and c_s and c_{s,max} are the Li concentration at the active particle surface and its maximum value. During discharge of an LFP cathode, Li+ is intercalated first at the active particle surface and is then slowly transported through the particle according to Fick's 2nd law of diffusion [53]:

∂c_s/∂t = (D_s/r²) ∂/∂r (r² ∂c_s/∂r),   (3a)
∂c_s/∂r |_{r=0} = 0,   (3b)
−D_s ∂c_s/∂r |_{r=R_s} = i_n/F,   (3c)

where the active particle was assumed to be spherical with a radius R_s, and D_s is the diffusion coefficient of Li in the active particle. In Eq. (3c), the rate of Li+ transport at the particle surface corresponds to the rate of the Butler-Volmer reaction kinetics. At fast C-rates, the vacancies at the particle surface available to host Li+ may be saturated before Li vacancies beneath the surface are fully occupied, i.e. c_s → c_{s,max} at the particle surface while c_s ≪ c_{s,max} at the particle core. From Eq. (2b), the exchange current density i_0 then tends to zero because c_s → c_{s,max} at the particle surface. From Eq. (2a), in order to maintain the high current output at fast C-rates, the overpotential η has to be elevated to provide a larger driving force for the reaction to take place. As a result, the cell cut-off voltage will be reached within a relatively short time, before vacancies inside the particle are fully occupied by Li+. This situation can be regarded as microscopic heterogeneity. When all the active particles are now considered across the electrode thickness, macroscopic heterogeneity at the electrode level arises due to the inhomogeneous Li+ concentration in the electrolyte, which follows the moderately concentrated electrolyte theory [53,54]:
ε ∂c_e/∂t = ∂/∂x (D_e^eff ∂c_e/∂x) + a (1 − t_+^0) i_n/F,   with   D_e^eff = D_e ε^b,

where x is along the electrode (and cell) thickness, ε is the electrode porosity, a is the specific interfacial area of the electrode, t_+^0 is the transference number of Li+ in the electrolyte, D_e^eff is the effective diffusion coefficient of the electrolyte, D_e is the intrinsic diffusion coefficient of the electrolyte, and b is the Bruggeman coefficient. According to numerical solutions of Newman's model [54], c_e reduces steadily from the electrode/separator interface to the electrode/current collector interface during discharging of a cathode, and this concentration gradient becomes steeper with increasing C-rate. Under these conditions, for particles located close to the current collector of the cathode, the relatively low local c_e leads to reduced i_0 according to Eq. (2b). The local reaction rate is consequently reduced, and there is not enough Li+ in the electrolyte to intercalate fully into the active cathode particles. For particles located closer to the separator, where c_e is relatively high, the particles tend towards being fully intercalated. Therefore, the particles across the electrode thickness as a whole are not homogeneously utilized at the electrode-thickness scale. To address this inhomogeneity in active material utilization, recent simulations of compositionally graded electrodes predicted that the reaction rate could be homogenized by rearranging the electrode materials micro-distribution through the electrode thickness [44], similar to the ideas explored experimentally here.

Here, we assume the voltage difference between matching anodic and cathodic peaks gives an indication of the polarization between Li-ion deintercalation (anodic) and intercalation (cathodic) reactions; these differences are marked by the dashed vertical green lines in Fig. 4e-h. This polarization is approximately twice the overpotential for either the deintercalation or intercalation reaction at a given C-rate [55]. Fig. 4i summarizes the discharge capacity data in Fig. 4a-d; Fig. 4j summarizes the polarization data from Fig. 4e-h as a function of C-rate; and finally, Fig. 4k combines the data to show the strong relationship between discharge capacity and extent of polarization. Graded electrodes with higher capacity retention had lower polarization at each C-rate, i.e. a lower overpotential for redox reactions. Note that electrode U90 had a lower polarization than the T1 electrode (Fig. 4j) but its C-rate performance was nonetheless inferior; U90 also had a faster capacity decay with increasing polarization. As overpotential increases, the rate of side reactions generally also increases, driving the thickening of the SEI layer, which increases internal resistance and further elevates overpotential [35,46]. Fig. 4k shows that, in terms of discharge capacity as a function of polarization, the behaviour of the graded electrodes was similar but distinctly superior to the uniform electrode.

Fig. S2 in the Supplementary Data and Table 1 give details of the design and performance of a further T4 arrangement in which the high-fraction active material plateau was reduced from 70% of the overall electrode thickness to 50% and, consequently, the edge gradient was less steep. In terms of C-rate capacity retention and degradation rate, T4 was inferior to T3, emphasising the importance of a wide plateau region rich in the active material, and of redistributing more CAB to the electrode edges.
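The surface-saturation argument behind Eqs. (2a)-(2b) can be illustrated numerically. The sketch below uses the symmetric Butler-Volmer form (α_a = α_c = 0.5) with illustrative, assumed values for the rate constant and concentrations (none of these numbers are from the paper); it shows i_0 collapsing, and the overpotential needed to sustain a fixed current growing, as the particle surface fills.

import numpy as np

F, R, T = 96485.0, 8.314, 298.0          # Faraday, gas constant, temperature
alpha = 0.5                              # symmetric charge-transfer coefficients

def i0(ce, cs, cs_max, k=1e-10):
    """Exchange current density in the spirit of Eq. (2b); k is an assumed
    rate constant, concentrations in mol m^-3 (illustrative units)."""
    return F * k * ce**alpha * (cs_max - cs)**alpha * cs**alpha

def eta_for_current(i_n, j0):
    """Overpotential needed to drive i_n, inverting the symmetric
    Butler-Volmer form i_n = 2*j0*sinh(alpha*F*eta/(R*T)) of Eq. (2a)."""
    return (R * T / (alpha * F)) * np.arcsinh(i_n / (2.0 * j0))

cs_max, ce = 22800.0, 1000.0             # assumed LFP site density, ~1 M electrolyte
for frac in (0.5, 0.9, 0.99, 0.999):     # surface filling approaching saturation
    j0 = i0(ce, frac * cs_max, cs_max)
    eta = eta_for_current(10.0, j0)      # eta for a fixed 10 A m^-2 demand
    print(f"surface filling {frac}: i0 = {j0:.2f} A m^-2, eta = {1e3*eta:.0f} mV")

As the surface stoichiometry goes from 0.5 to 0.999, i_0 drops by more than an order of magnitude and the required overpotential roughly triples, which is exactly the mechanism by which the cut-off voltage is reached early at fast C-rates.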
In this study the local binder fraction scaled with the local carbon fraction, as it was assumed necessary always to have sufficient binder to embed the carbon into a CAB mixture. However, the binder is electrochemically inert: it wastes mass and volume, does not contribute to electronic conductivity, and may obscure active surface from Li-ions. Although not pursued in this paper, the spray deposition route allows full decoupling of local binder and carbon fractions, even as the active material fraction also changes, and could be investigated as a route to minimise the overall fraction of parasitic binder in the electrode.

To explore the materials distribution effects further, cyclic voltammetry was conducted and the resulting plots for the U90, T2, T3 and T4 electrodes are shown in Fig. 5a-d, respectively. As the scan rate increased from 0.05 to 0.09 mV s−1, both anodic and cathodic peak currents of the uniform electrode were similar while the peak area tended to shrink, indicative of sluggish reaction kinetics and reducing active material utilization, as seen in the earlier C-rate data (Fig. 4). The T2 and T4 electrodes exhibited slightly higher peak currents than U90, and the peak areas expanded over the range 0.05-0.08 mV s−1. For electrode T3, both peak current and peak area increased with scan rate, and the difference between peak potentials widened, consistent with its higher capacity retention. To compare Li-ion mobility in the different electrodes, anodic and cathodic peak currents were plotted as a function of the square root of the scan rate, as shown in Fig. 5e and f. Assuming diffusion-controlled reactions, the slope of a best-fit line to the Randles-Sevcik equation is proportional to the square root of the Li-ion diffusion coefficient [56-58]:

i_p = 0.4463 F A C* (F v D/(R T))^{1/2},

where i_p is the peak current, v is the potential scan rate, F is the Faraday constant, R is the gas constant, T is absolute temperature, C* is the initial Li-ion concentration, A is the electrochemically active surface area, i.e. the sum of the active particle/electrolyte interfacial areas, and D = D_0 ε^b is the effective Li-ion diffusion coefficient for a porous electrode, where b is the Bruggeman coefficient, ε is electrode porosity, and D_0 is the diffusion coefficient without porosity [44]. C* and A were assumed constant for all electrodes. It has been suggested that when the LFP electrode thickness is > 20 μm (or the electrode loading > 4 mg cm−2), the Li-ion diffusion coefficient describes the overall diffusivity of the electrode rather than the diffusivity in the LFP particles [59-61]. Given an electrode thickness of ~100 μm and electrode loading of ~15 mg cm−2, the Li-ion diffusion coefficients estimated here were not the intrinsic diffusivity of Li in LFP particles but reflected the effective Li-ion diffusivity, integrated over both particulate and electrode length-scales. Recent work has shown that for LIBs, Li+ diffusion limitations in the electrolyte are the principal restriction to high energy retention at fast charging [10]. Fig. 5e shows that for electrodes T2 to T4 there was a linear dependence of i_p (anodic) on v^{1/2} at scan rates < 0.07 mV s−1. For electrode U90, there was no best fit because the scan rates were too high for electrodes of this high mass loading and thickness. Note that, due to the sluggish ion diffusion at the fast rates, U90 did not show inflection points (current peaks) at 0.09 mV s−1. Instead, the maximum and minimum currents obtained at the positive and negative potential ends are plotted in Fig. 5e and f to provide some indication of rate response, but no fitting to these data was performed.
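The slope extraction just described reduces to a straight-line fit of i_p against v^{1/2}. A minimal sketch follows: the i_p values below are invented placeholders (only the slopes quoted next, 0.341 and 0.640 A g−1 mV−1/2 s1/2, are from the text), and relative diffusivities follow from slope ratios because C* and A are assumed identical across electrodes.

import numpy as np

# Hypothetical anodic peak currents (A/g) at scan rates in the linear
# regime (< 0.07 mV/s); these numbers are invented for illustration only.
v = np.array([0.05, 0.06, 0.07])                  # mV s^-1
ip = np.array([0.143, 0.157, 0.169])              # A g^-1 (placeholder data)

slope, intercept = np.polyfit(np.sqrt(v), ip, 1)
print(f"fitted slope = {slope:.3f} A g^-1 mV^-1/2 s^1/2")

# With C* and A identical for all electrodes, i_p ~ sqrt(D), so relative
# effective diffusivities follow from the squared slope ratio:
slope_T2, slope_T3 = 0.341, 0.640                 # values quoted in the text
print(f"D_T3 / D_T2 ~ {(slope_T3 / slope_T2) ** 2:.2f}")

On this basis, the quoted T3/T2 slope ratio corresponds to roughly a 3.5-fold difference in effective Li-ion diffusivity.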
As shown in Fig. 5e, the slope (and therefore the Li-ion mobility) increased from T4 (0.205 A g−1 mV−1/2 s1/2) to T2 (0.341 A g−1 mV−1/2 s1/2) and then T3 (0.640 A g−1 mV−1/2 s1/2), similar to the trend for the cathodic peak currents shown in Fig. 5f. Variations in pore fraction or tortuosity can have significant effects on Li-ion diffusivity throughout the electrode but, to a first approximation, the pore fraction was essentially the same for all the electrodes. Consequently, it is interesting to speculate on the reasons for the differing Li-ion diffusivity. Given that the aim of grading is to improve the spatial homogeneity of overpotential and local reaction current, it is consistent to infer that the apparent increase in diffusivity arose from more active material being utilized in the diffusion-limited energy storage reactions.

The best-performing T3 graded electrode, with 5 wt% carbon and 5 wt% binder, was compared with a conventional uniform electrode in which the carbon and binder fractions were doubled to 10 wt% and the active fraction correspondingly decreased to 80 wt% (U80). A comparison of the carbon, binder and active distributions is shown in Fig. 6a. Usually, for electrodes of thickness ~100 μm, increasing the carbon fraction to 10 wt% would be expected to improve the C-rate response [62,63]; indeed, Fig. 6b and c confirm that U80 outperformed U90 in terms of capacity per unit weight of active material. The C-rate response of U80 was now similar to T3 up to 1C, although T3 maintained its superior capacity at 2C and 3C. In terms of capacity degradation during cycling at 0.5C, U80 was more similar to T3 than to U90, although T3 was again superior to both uniform electrodes. Fig. 6e and f show the same gravimetric data replotted using the total electrode materials mass (active + carbon + binder). At 0.1C, the extra carbon in U80 provided little benefit and undermined gravimetric capacity and, thereafter, the C-rate response and degradation rate were markedly inferior to T3. For example, T3 specific capacities were 133 mAh g−1 (0.1C) and 102 mAh g−1 (1C), while U80 capacities were 118 mAh g−1 (0.1C) and 83 mAh g−1 (1C); the T3 average degradation rate was 0.34 mAh g−1 per cycle while for U80 it was 0.50 mAh g−1 per cycle.

Although the response of the trapezoidal graded electrodes in Fig. 6e and f is significantly better than that of the uniform electrodes, the response should also be considered against the linear graded arrangements that were the focus of previous work [35,36]. At a lower, less useful active fraction of 80 wt% LFP, the previous studies showed that linear grading (carbon-rich at the current collector for LFP electrodes) was superior to trapezoidal grading in terms of capacity with increasing C-rate. However, trapezoidal grading gave the slowest capacity degradation rate of all the electrodes studied during long-term cycling. Therefore, combining prior and current work, the following trends can be seen: (i) linear and trapezoidal graded electrodes outperform uniform LFP-based electrodes in all principal figures of merit; (ii) linear grading from a carbon-rich current collector region produces the best dynamic response of electrode capacity to increasing C-rate; and (iii) trapezoidal grading sustains electrode capacity most effectively in long-term cycling, consistent with our original hypothesis.
It should be emphasised that local variations in electrode composition will change the local electronic conductivity, ionic conductivity, diffusivity, etc., which in turn will change the overpotential distribution (and its various ionic and electronic contributions), and therefore local reaction rates. The best-performing electrodes here, for a given C-rate, in effect represent the best balance of both local ionic and electronic limitations on the electrochemical response, integrated over the electrode thickness. These aspects are explored in further detail by impedance simulations elsewhere [44].

Fig. 7 shows the gravimetric/volumetric power density against energy density for LFP-based half-cells with various uniform and graded electrodes. For electrodes with a composition ratio of 90:5:5 (wt.%), the gravimetric power versus energy performance could be ranked as U90 < T1 ~ T4 < T2 < T3 (Fig. 7a). As seen before, at low C-rate all electrodes had similar energy density; with increasing C-rate, the materials distribution played an increasingly differentiating role in power versus energy performance. For example, at the same power density of 250 W kg−1 (~0.5C), the energy density increased from 243 Wh kg−1 for U90 to 473 Wh kg−1 for T3, i.e. an increase of nearly 100%. Among the graded electrodes, power versus energy performance was sensitive to the materials distribution (Fig. S2 further compares T4 and T3). U80 had a power-energy curve that sat between T2 and T3 when only the active material mass was considered in the power/energy density calculation (Fig. 7a, pink line/symbols); however, when the whole electrode mass was considered, the U80 energy density reduced faster with increasing power than all the other electrodes due to the higher inactive material content (Fig. 7b, pink line/symbols). In terms of volumetric power density versus energy density in Fig. 7c, the power-energy curve for U80 shifted to the left in comparison with the graded electrodes (to lower energy densities), principally due to its lower electrode density at a similar porosity to the other electrodes, as described in the Experimental section. In balancing power and energy densities for uniform electrodes, increasing the carbon content from 5 wt% (U90) to 10 wt% (U80) is usually expected to increase the power significantly and reduce energy density, as confirmed by comparing the black and pink curves above 0.2C in Fig. 7a-c, but at the cost of sacrificing energy density at 0.1C by 10% (gravimetric, Fig. 7b) and 17% (volumetric, Fig. 7c). In understanding the uniform versus graded performance, U80 may be considered a power-oriented formulation (higher carbon content) and U90 an energy-oriented formulation (higher active material content). Graded formulations T1 to T3 are closer to energy-oriented because they have 90 wt% active material. However, graded electrode T3 has energy and power densities that are both significantly larger than those of the power-oriented electrode U80, i.e. blue versus pink in Fig. 7b and c, also highlighted in Fig. 7d. For example, at the same energy density of 500 Wh L−1, the power density increases from ~100 W L−1 (U80) to ~630 W L−1 (T3), whereas at the same power density of 300 W L−1, the energy density increases from ~420 Wh L−1 (U80) to ~600 Wh L−1 (T3). This is achieved by reduced polarization (Fig. 4j) and consequently increased reaction currents that utilize the active materials more efficiently as the power demand (C-rate) increases.

[Fig. 6 caption fragment: (c, d) specific discharge capacity at different C-rates, and subsequent long-term cycling at 0.5C, when only the active materials mass was considered; (e, f) specific discharge capacity re-calculated from (c, d) when considering the whole electrode materials mass (active material + carbon + binder).]
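The gravimetric figures above can be cross-checked with a simple conversion, assuming a mean discharge voltage of ~3.4 V vs. Li/Li+ for LFP (an assumption; the paper does not state the voltage basis used for Fig. 7).

def energy_density_wh_per_kg(q_mah_per_g, v_avg=3.4):
    """Gravimetric energy density: mAh/g * V = mWh/g = Wh/kg. The ~3.4 V mean
    discharge voltage vs. Li/Li+ for LFP is an assumption, not from the text."""
    return q_mah_per_g * v_avg

def power_density_w_per_kg(c_rate, v_avg=3.4, q_ref_mah_per_g=170.0):
    """Gravimetric power density: current (mA/g) * voltage = mW/g = W/kg."""
    return c_rate * q_ref_mah_per_g * v_avg

print(energy_density_wh_per_kg(139))         # ~473 Wh/kg (T3, active-mass basis)
print(power_density_w_per_kg(0.5))           # ~289 W/kg, same order as ~250 W/kg at ~0.5C
print(energy_density_wh_per_kg(139) * 0.90)  # ~426 Wh/kg on a whole-electrode basis

Under these assumptions, the quoted 473 Wh kg−1 for T3 corresponds to ~139 mAh g−1 at ~0.5C, and the active-mass-to-whole-electrode conversion is simply a factor of the active fraction.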
Recent modelling work suggests that LFP might be particularly sensitive to carbon and/or CAB grading in electrodes [44] because LFP has a relatively low intrinsic electronic conductivity (~10−9 S cm−1) [64], although the conductivity can vary depending on the extent of reduction/oxidation and the presence and effectiveness of any carbon coating. The results here confirm that LFP electrodes can benefit from more precision in the CAB placement and, in comparison to previous work, the grading performance improvements are more compelling and achieved at a higher, practical active material loading of 90 wt% and in relatively thick electrodes. Further, while the most significant effect of carbon redistribution may be to reduce the interfacial resistance that can be problematic in LFP-based electrodes [44], the results here show that the energy storage behaviour is surprisingly responsive to the particular detail of this redistribution, i.e. the range of graded electrode responses is diverse. Other active materials with higher intrinsic electrical conductivity than LFP may not benefit from CAB redistribution as strongly, or may only show worthwhile grading sensitivity under high C-rate conditions and/or for relatively thick electrodes. However, we note that significant grading benefits for Li4Ti5O12-based anodes have been shown where similar constant thickness/loading comparisons have been made [36]. Further, by combining linearly graded LFP-based positive electrodes with LTO-based negative electrodes in a full-cell arrangement, significant C-rate and cycling performance improvements were realized, along with new power-energy combinations unavailable with uniform electrodes [36]. Finally, the trapezoidal shapes explored here may still be sub-optimal for LFP (or any active material), and both simpler and more complex variations can be conceived; layer-by-layer fabrication opens a very large electrode design space. Given the possibilities, trial-and-error exploration of this design space is impractical, which highlights the increasing importance of a model-guided electrode design methodology [44]. Although in this study the overall electrode porosity was constant for all the electrode arrangements, in terms of a generic, flexible approach to electrode design a potentially confounding factor is that local pore fraction, so far, cannot be independently controlled with the same accuracy as the local binder, carbon and active material fractions. Although pore templates might be used to control pore fraction and tortuosity [30], templating is, in general, difficult to scale, whereas a benefit of the spray deposition, layer-by-layer approach is that it is readily scalable to larger areas.

Conclusions

To advance understanding of the possible benefits of graded-composition electrodes in LIB applications, we manufactured LiFePO4-based electrodes in which the distribution of active material, carbon and binder through the electrode thickness was controlled with approximately micron-scale precision. Critically, the weight per unit area, overall porosity and overall ratio of all materials were kept constant, regardless of the micro-distribution of materials, allowing a fair back-to-back comparison of electrochemical performance.
In the graded electrodes, the local carbon and binder (CAB) fraction was smoothly increased towards the interfaces with the current collector and with the separator, with a proportionate decrease in the local active fraction; in central regions of the electrode, the local active materials fraction was relatively high (>90 wt%) and the CAB fraction reduced. Different graded and uniform material distributions were explored, showing significant differences in electrochemical behaviour, particularly as the C-rate was increased. The best-performing graded electrode, with overall 90 wt% active material and ~100 μm thickness, had a higher power density than a high-power uniform electrode with twice as much carbon (and binder), while simultaneously providing higher gravimetric and volumetric energy density. The principal advantage of grading was to reduce the overpotential compared with otherwise identical uniform electrodes, and both anodic and cathodic reaction currents were strongly enhanced, which was interpreted as better utilization of the available active materials at high rates, resulting in higher capacity retention under all conditions studied. The reduced overpotential and more uniform utilization supported a marked reduction in degradation rate during intermediate C-rate cycling.
Efficiency of construction waste recycling

Recycling is widely used in practice in various fields of activity. However, the effect of such use does not always cover the costs of the processing. The article considers the problem of recycling waste generated while constructing residential buildings and structures. We present the results of full-scale studies of construction waste generated at construction sites in the city of Samara. We also show the qualitative and quantitative composition of the elements and analyze possible ways of their reuse. In addition, we have calculated the economic feasibility of reusing building materials recycled from construction waste.

Introduction

Urban development activity is currently increasing. Adjoining territories are built up, and existing city buildings are renovated as well. The expansion of territories is most noticeable and occurs more actively in large cities. For example, over the past decade, dozens of residential complexes have been built and continue to be built in the territory of the city of Samara: the Koshelev project has increased the built-up area of the city by 130 hectares, the Novaya Samara residential area in the Krasnoglinsky district of the city occupies about 58 hectares, and the Southern City project has expanded the urban development in the southern part of the city by more than 1000 hectares. Such area expansion results from the increase in the number of city residents and their need for comfortable housing with the necessary utility systems and social infrastructure [1-4].

It is especially difficult to carry out construction work on infill areas, where worn-out buildings and structures are replaced with new, more comfortable ones (Fig. 1). In this case, all works on preparing the territory, cleaning, planning, erection and improvement are carried out in cramped conditions. During construction, adjacent residential areas suffer a negative impact, including from the large amount of various construction wastes. At first, these wastes are stored on the construction site; they are then transported to a landfill. It should be noted that, at present, more and more construction waste is sorted at the place where it is generated and subsequently transported to specialized enterprises for processing into secondary raw materials [5-9]. This practice takes place in the cities of Moscow, Novokuibyshevsk, Togliatti and others. Despite this, construction waste recycling is unfortunately not widespread and is currently developing slowly. The main reason for this is the lack of economic leverage to influence construction firms or waste management companies.

Fig. 1. A worn-out building being demolished.

To assess the efficiency of construction waste recycling and reuse, studies were conducted to determine the component composition of construction waste at city construction sites.

Materials and Methods

To determine the component composition of construction waste generated during the construction process, a method of field observation was used, followed by statistical analysis of the results obtained. Field observation involved surveying urban construction sites with measuring equipment and photographic recording, and the results were processed automatically using graphics editors.
The first stage of the field inspection was recording the geometric parameters of each construction waste dump. The inspection report noted its location relative to the construction site boundary and nearby settlements, as well as the approximate shape and dimensions of the dump (length, width, average and maximum height of waste storage).

The second stage involved sampling, fractionation and chemical analysis of the samples in the laboratory to determine the composition of the wastes, expressed as the percentage of the volume of each fraction type in the total volume. The fraction types were defined broadly, in accordance with the Federal Classification Catalog of Waste in force in the territory of the Russian Federation, namely: paper and cardboard, broken glass, plastic and polyethylene, wood waste, metal waste, broken concrete and reinforced concrete, broken bricks, and household waste.

The third stage of the study was determining the criteria for reusing recycled waste products. For this purpose, the physical and chemical properties of the recycled semi-finished products were analyzed in the laboratory, the feasibility and appropriateness of their further use were assessed, and their applicability criteria were established.

Results

More than 30 construction sites located in different parts of the city of Samara were surveyed. The main criterion for choosing the surveyed sites was proximity to existing residential buildings: we selected construction sites located no farther than 100 meters from a residential building.

The surveyed construction sites satisfying the above conditions were concentrated in the central part of the city, which is densely populated; there are both private houses and new, comfortable multi-storey residential complexes.

The preliminary survey of construction sites showed that in 85% of cases there were no specially designated places for temporary storage of construction waste at the construction site. As a rule, the waste was stored in bulk, without prior division into components, either within the territory of the construction site or outside its boundaries. Most often, the mass of waste lay along the protective fence and had the following dimensions: 1.5 × 5 m, with an average storage height of 1.5 m (Fig. 2). The dimensions are shown in Table 1. Based on the results presented in Tables 1 and 2, it is possible to determine the type and volume of construction works, as well as how often the dumped waste is transported away from the site.

For example, the wastes at construction sites № 1, 10 and 29 were formed as a result of dismantling old buildings and clearing the territory for new construction. Wastes at sites № 12, 16 and 24, where there is a large proportion of broken concrete, were most likely formed during the erection of a panel building. This is indicated not only by the high amount of concrete waste, but also by the lumpy iron waste.

Since the waste at construction site № 27 is mainly represented by plastic and polyethylene, it can be assumed that the construction works at this site are at the final stage: installation of utility systems, windows, and so on.

At construction site № 2, a large amount of waste is stored, which indicates a lack of timely waste disposal and non-compliance with sanitary requirements.

Based on all survey results, we propose to divide the construction sites by the type of waste components into two groups: demolition-construction and finishing. Their averaged component compositions are presented in the diagrams (Fig. 3).
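The averaging behind the Fig. 3 diagrams can be expressed as a short routine. A minimal sketch follows; the per-site records are invented placeholders, not the measured values from Table 2.

# Hypothetical per-site component compositions (volume %); placeholders only.
sites = {
    "site_01": {"broken brick": 45, "broken concrete": 30, "wood": 10, "metal": 5, "other": 10},
    "site_12": {"broken brick": 20, "broken concrete": 55, "wood": 5, "metal": 10, "other": 10},
    "site_27": {"broken brick": 10, "broken concrete": 15, "plastic": 50, "wood": 10, "other": 15},
}

def averaged_composition(records):
    """Average each fraction over all surveyed sites (missing fractions count as 0)."""
    keys = sorted({k for rec in records.values() for k in rec})
    n = len(records)
    return {k: sum(rec.get(k, 0) for rec in records.values()) / n for k in keys}

for fraction, share in averaged_composition(sites).items():
    print(f"{fraction:16s} {share:5.1f} %")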
Discussion

Analysis of the construction site survey data showed that the most common wastes, generated both during the dismantling of buildings and in the main construction period, are broken bricks, broken concrete and reinforced concrete. It should be noted that in some cases these wastes can reach 70-80% of the volume of stored wastes. Thus, the approximate amount of concrete transported to the landfill ranges from 9 to 120 tons (excluding waste that was disposed of before the survey of construction sites). Such volumes are significant, and waste disposal to the landfill leads to the loss of valuable materials [5-9]. In the authors' opinion, the use of concrete waste as a material for building temporary roads, for secondary backfilling and in terrain planning [10-12] also leads to the loss of valuable recyclable materials.

Studies of the quality of the resulting broken concrete showed that the building products made from it have sufficient strength [13] for use in the construction of new buildings and structures, or as a concrete filler for the reconstruction of hydraulic structures. Such use of concrete waste will reduce the volume of extracted natural resources, decrease the load on landfills, and also minimize the logistics costs of transporting waste and natural resources.

To determine the efficiency of using recycled scree, a comparative cost analysis was carried out, which showed the following. The cost of natural scree, which is used for the most common concrete of class B22.5 and mined in the quarries of the Samara region, is about 500-700 rubles per ton. The cost of processing recycled scree, including preliminary sorting and transportation or the cost of receiving waste (50-70 rubles), is about 300-400 rubles per ton. Thus, recycling concrete and reinforced concrete waste is financially beneficial. Even taking into account the purchase of equipment (the cost of the installation is about 60 million rubles), the costs will be recouped after processing 300,000 tons of waste, which can be achieved in 1.5-2 years at good utilization.

Conclusions

The research showed the following:
1. Large amounts of various wastes are formed on construction sites; their component composition makes it possible to determine the stage and specifics of the construction work.
2. The resulting waste is reused only in small volumes, which leads to the loss of large quantities of valuable materials.
3. Using recycled scree in the production of building materials will minimize the cost of extracting natural resources, reduce the burden on the environment, and also generate profit from processing waste concrete.

Table 1. Typical dimensions of construction waste dumps.

Further, the component composition of the stored waste was determined for each construction site. The results of these studies are presented in Table 2.

Table 2. Results of component composition analysis of construction waste.
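The payback arithmetic quoted in the Discussion can be reproduced with a few lines; the annual throughput below is an assumed figure chosen to illustrate the stated 1.5-2 year horizon.

equipment_cost = 60_000_000   # rubles; quoted cost of the processing installation
natural_scree = 600.0         # rubles/t; midpoint of the quoted 500-700 range
recycled_cost = 400.0         # rubles/t; upper end of the quoted 300-400 range

saving_per_tonne = natural_scree - recycled_cost
payback_tonnes = equipment_cost / saving_per_tonne
print(f"break-even after {payback_tonnes:,.0f} t")   # 300,000 t, as in the text

annual_throughput = 175_000   # t/yr; assumed value reproducing the 1.5-2 yr estimate
print(f"~{payback_tonnes / annual_throughput:.1f} years at {annual_throughput:,} t/yr")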
Evidence for Supersymmetry in the Random-Field Ising Model at D = 5

We provide a non-trivial test of supersymmetry in the random-field Ising model at five spatial dimensions, by means of extensive zero-temperature numerical simulations. Indeed, supersymmetry relates correlation functions in a D-dimensional disordered system with some other correlation functions in a D−2 clean system. We first show how to check these relationships in a finite-size scaling calculation, and then perform a high-accuracy test. While the supersymmetric predictions are satisfied even at our high accuracy at D = 5, they fail to describe our results at D = 4.

Introduction.- The suggestion [1] that the random-field Ising model (RFIM) at the critical point [2-4] obeys supersymmetry came as a major surprise in Theoretical Physics. One of the implications of supersymmetry is dimensional reduction [5,6]: the critical exponents of a disordered system at space dimension D and those of a pure (i.e. non-disordered) system at dimension D − 2 coincide. Let us remark that dimensional reduction is a consequence of, but not necessarily equivalent to, supersymmetry.

However, in spite of its power and elegance, it was soon clear that the applicability of supersymmetry is problematic. The original argument [1] was based on the study of the solutions of the stochastic Landau-Ginsburg equations in the presence of a random magnetic field. Unfortunately, the crucial assumption of uniqueness of the solution of these equations [1] (which holds at all orders in perturbation theory) fails beyond perturbation theory. In fact, it was immediately clear that in the RFIM (but not for branched polymers [7]) the predicted dimensional reduction is absent at low dimensions: the RFIM has a ferromagnetic phase at D = 3 [8,9] while the D = 1 pure Ising model has no transition. Non-perturbative effects (e.g. bound states in replica space [10-13]) are obviously important in D = 3. Yet, their relevance for D > 3 (especially upon approaching the presumed upper critical dimension D_u = 6) is unclear. If we consider the case of D = 6 − ε, different scenarios are possible, as listed below:

1. Non-perturbative effects could destroy supersymmetry at a finite order in the ε expansion or, even worse, at D = 6.

2. Supersymmetry could survive non-perturbative effects and remain exact in a range of dimensions below D = 6 that includes D = 5, so that dimensional reduction holds there [14,15].

3. Supersymmetry has been suggested to be exact, but only for D > D_int ≈ 5.1 [16-18]. For D < D_int the supersymmetric fixed point becomes unstable with respect to non-supersymmetric perturbations.

In order to discriminate among these three scenarios, we need accurate simulations aimed at testing some of the many predictions of supersymmetry. In the last few years, the development of a powerful panoply of simulation and statistical analysis methods [19-21] set the basis for a fresh revision of the problem. Great emphasis was placed on the anomalous dimensions η and η̄, related to the decay of the connected and disconnected correlation functions, respectively [see Eq. (2)]. Supersymmetry predicts η = η̄ (moreover, both η and η̄ of the D-dimensional RFIM are predicted to be equal to the anomalous dimension of the pure Ising model in dimension D − 2). Extensive numerical simulations at zero temperature showed that these relations fail at D = 3 [19] and D = 4 [21], but they are valid with good accuracy at D = 5 [22]. These numerical results suggest that supersymmetry may be really at play at D = 5.
The predictions of supersymmetry go well beyond those regarding the critical exponents: they involve both finite-volume effects and higher-order correlation functions. Here, we will show that several non-trivial supersymmetry predictions hold at D = 5 to a very high numerical accuracy. This is the first direct confirmation that supersymmetry holds in the RFIM at high dimensions. As a consistency check, we show that the same relations are definitively not satisfied at D = 4.

Simulation setup.- The Hamiltonian of the RFIM is

H = − Σ_{<x,y>} S_x S_y − Σ_x h_x S_x,   (1)

with the spins S_x = ±1 on a hypercubic lattice in D dimensions with nearest-neighbor ferromagnetic interactions, and h_x independent random magnetic fields with zero mean and variance σ². Given our previous universality confirmations [23], we have restricted ourselves to normal-distributed h_x. We work directly at zero temperature [24-28] because the relevant fixed point of the model lies there [29-31]. The system has a ferromagnetic phase at small σ that, upon increasing the disorder, becomes paramagnetic at the critical point σ_c. Here, we work directly at σ_c, namely at 6.02395 ≈ σ_c(D = 5) [22] and at 4.17749 ≈ σ_c(D = 4) [21].

We consider two correlation functions, namely the connected and disconnected propagators,

C^(con)_{xy} = \overline{∂⟨S_x⟩/∂h_y},   C^(dis)_{xy} = \overline{⟨S_x⟩⟨S_y⟩},   (2)

where the ⟨···⟩ are thermal mean values as computed for a given realization, a sample, of the random fields {h_x}, and the over-line refers to the average over the samples. For each of these two propagators, we scrutinize the second-moment correlation lengths [32], as adapted to our geometrical setting. In particular, our chosen geometry is an elongated hypercube with periodic boundary conditions and linear dimensions L_x = L_y = L_z = L and L_t = L_u = RL (at D = 4 we chose L_x = L_y = L and L_z = L_t = RL), with aspect ratio R ≥ 1. In fact, the supersymmetric identities that we will check in the critical region hold in the limit R → ∞, which should be taken before the standard thermodynamic limit.

We simulated lattice sizes in the range L = 4-14 at D = 5 (L = 4-28 at D = 4) and aspect ratios 1 ≤ R ≤ 5. Additional simulations for R = 10 and L ≤ 10 were performed at both 5D and 4D for consistency reasons. For each pair of (L, R) values we computed ground states for 10^5 disorder samples. Our simulations and analysis closely follow the methodology outlined in our previous works at D = 3 and 4 [19,21] (for full technical details see Ref. [20]).

Supersymmetric predictions.- Let us consider a point in the 5D lattice, r = (x, u), where x = (x, y, z) refers to the first three Cartesian coordinates, while u = (t, u). In a similar vein, for the 4D case, we split r = (x, y, z, t) = (x, u), with x = (x, y) and u = (z, t). The supersymmetric predictions are particularly simple for disconnected correlation functions:

C^(dis)_{(x_1,u),(x_2,u)} = Z G(x_1, x_2),   (3)

where G is the pure Ising model correlator in dimension D − 2, and Z is a position-independent normalization constant that will play no role (see below). Note that the left-hand side depends on both linear dimensions, L and RL, while the right-hand side depends only on L. Therefore, we must carefully consider under which conditions Eq. (3) is expected to hold. In a more conventional study, one would require a hierarchy of length scales LR ≫ L ≫ ξ ≫ 1 (recall that ξ is the correlation length), while here we demand for the D − 2 Euclidean distance |x_1 − x_2|/ξ ∼ 1.
We shall put Eq. (3) under stress by demanding it to hold as well in the finite-size scaling regime

σ = σ_c,   |x_1 − x_2| ∼ L,   with the limit R → ∞ taken before L → ∞.   (4)

These preliminaries lead us to consider a D − 2 Fourier transform in the x coordinates,

Ĉ^(dis)(k) = Σ_x e^{i k·x} C^(dis)_{(0,u),(x,u)}.   (5)

Note that the u-dependence vanishes due to the disorder average (hence we average over u in order to gain statistics). We then compute the second-moment correlation length from the ratio of Ĉ^(dis) at zero and at the minimal momentum k_min = (2π/L, 0, 0),

ξ^(dis) = [Ĉ^(dis)(0)/Ĉ^(dis)(k_min) − 1]^{1/2} / [2 sin(π/L)].

The important observation is that, because the constant Z on the r.h.s. of Eq. (3) cancels when computing the ratio, the dimensionless ratio ξ^(dis)/L as computed in the D-dimensional RFIM coincides with ξ/L as computed in the D − 2 Ising model. This equality holds if ξ^(dis)/L is computed precisely at the critical point σ_c and if the thermodynamic limit is taken under the conditions (4).

If we now consider the four-body disconnected correlation function, supersymmetry predicts a relation analogous to Eq. (3) (the normalization on the r.h.s. changes to Z²), so we may compute as well a (D − 2)-dimensional U_4 parameter,

U_4 = \overline{M_u^4} / (\overline{M_u^2})²,   M_u = L^{−(D−2)} Σ_x ⟨S_{(x,u)}⟩,   (6)

that is predicted to coincide with that of the critical D − 2 Ising model (under the same conditions discussed above for ξ^(dis)/L). Again, we improve our statistics by averaging both M_u^4 and M_u^2 over u.

We finally address the supersymmetric predictions for the connected correlation function. It is convenient to consider the correlation function K defined as

K_{x_1;x_2} = Σ_u C^(con)_{(x_1,0),(x_2,u)}.   (7)

The Ward identity for supersymmetry [33] implies (see Appendices A and B) that the second-moment correlation length ξ^(con)_{σ−η} computed from K coincides, in the large-R limit, with ξ^(dis). Note that this prediction does not make direct reference to dimensional reduction.

Results.- Let us start by recalling in Table I the (D − 2) = 2, 3 universal quantities from the pure Ising model that we aim to recover from the D-dimensional RFIM. We shall need as well the value of the leading corrections-to-scaling exponent ω; the analysis we present is done using the exponent ω given by dimensional reduction, which is not far from the one computed in the large-scale simulations at D = 5 [22].

FIG. 1. ξ^(dis)(L, R)/L vs. L^{−ω} for various R values, as computed in the D = 5 RFIM. The value of the corrections-to-scaling exponent ω corresponds to the pure Ising model in three spatial dimensions, see Table I (the value from Ref. [37] is so accurate that we took their central value as numerically exact). The dashed horizontal line corresponds to the value for ξ/L, also shown in Table I. The continuous line is a fit to our R = 5 data (see text for details). The extrapolation to L = ∞ obtained from the fit is compatible with the pure Ising model value, as predicted by supersymmetry.
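The estimators in Eqs. (5)-(6) are straightforward to implement. A minimal sketch applied to synthetic data follows (the Gaussian smoothing only gives the fake data a finite correlation length; it is not part of the method, and the real computation would use per-sample ground-state configurations):

import numpy as np
from scipy.ndimage import gaussian_filter

def xi_second_moment(m, L):
    """Second-moment correlation length on (D-2) = 3 dimensional slices.
    m has shape (n_samples, L, L, L) and holds per-sample values <S_x>.
    Implements xi = sqrt(F(0)/F(k_min) - 1) / (2 sin(k_min/2)), cf. Eq. (5)."""
    mk = np.fft.fftn(m, axes=(1, 2, 3))
    F0 = np.mean(np.abs(mk[:, 0, 0, 0]) ** 2)   # disconnected propagator, k = 0
    Fk = np.mean(np.abs(mk[:, 1, 0, 0]) ** 2)   # k_min = (2*pi/L, 0, 0)
    kmin = 2.0 * np.pi / L
    return np.sqrt(F0 / Fk - 1.0) / (2.0 * np.sin(kmin / 2.0))

def u4(m):
    """Cumulant ratio of Eq. (6): sample averages of slice magnetizations."""
    M = m.mean(axis=(1, 2, 3))
    return np.mean(M ** 4) / np.mean(M ** 2) ** 2

rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 8, 8, 8))
m = gaussian_filter(raw, sigma=(0, 1, 1, 1))    # inject a short correlation length
print(xi_second_moment(m, 8), u4(m))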
First, we consider the dimensionless ratio ξ^(dis)(L, R)/L in Fig. 1. Our first task, recall Eq. (4), is to extract the large-R limit. The good news is that we expect this limit to be reached exponentially in R and uniformly in L [38]. In fact, the comparison of our numerical results for R = 5 and 10 suggests that (within our statistical accuracy) R = 5 is large enough. Therefore, we focus the analysis on R = 5, where we reach our largest L value, namely L = 14. As is clear from Fig. 1, our data are accurate enough to resolve corrections to scaling. Furthermore, the non-monotonic L-evolution of ξ^(dis)(L, R = 5)/L implies that subleading corrections cannot be neglected. Hence, we have attempted to represent these subleading corrections in an effective way by means of a fit to a polynomial in L^{−ω}. We have included in the fit only data with L ≥ L_min. We have attempted to keep both L_min and the order of the polynomial as low as possible. We find a fair fit (χ²/dof = 3.24/2, p-value = 20%) with a cubic polynomial and L_min = 6. The corresponding extrapolation to L = ∞ is statistically compatible with the three-dimensional result in Table I. Hence, our first check of supersymmetry has been passed. The strength of this check is quantified by our 2% accuracy.

The analysis of ξ^(con)_{σ−η}(L, R)/L, see Fig. 2, is carried out along the same lines. We find a good fit (χ²/dof = 0.63/3, p-value = 89%) with a quadratic polynomial in L^{−ω} and L_min = 6. The corresponding extrapolation to L = ∞ again agrees with the pure Ising model value. It follows that we have checked supersymmetry to a 1% accuracy.

Our U_4(L, R) data, see Fig. 3, can be analyzed in a similar vein. We find a fair fit (χ²/dof = 6.85/4, p-value = 14%) with a quadratic polynomial in L^{−ω} and L_min = 5. The corresponding extrapolation to L = ∞ is again compatible with the three-dimensional pure Ising model value (Table I). Supersymmetry is checked to the 0.2% level this time.

Finally, as a comparison, we show in Fig. 4 our data for the 4D RFIM. In the double limit L → ∞ and R → ∞, all three dimensionless quantities differ from their values in the 2D pure Ising ferromagnet. Although this is hardly a surprise (recall, for instance, exponents η and η̄ [21]), the discrepancy is at least at the 10% level.

Conclusions.- The finding of supersymmetry and dimensional reduction in the RFIM is, arguably, one of the most surprising results in Theoretical Physics. Here, thanks to state-of-the-art numerical techniques, we have carried out a precision test of supersymmetry. Although supersymmetry is clearly broken at D = 4, the D = 5 RFIM is supersymmetric with good accuracy. Hence, Scenario 1 in the Introduction is plainly discarded.

The only remaining contenders are Scenarios 2 and 3. Exponent ω might help to settle the question. In the ε expansion (ε = 6 − D) we find at least two exponents: ω_DR = ε + O(ε²) (obtained through dimensional reduction) and ω_NS = 2ε + O(ε²) (due to irrelevant non-supersymmetric operators). The large value of ω found here and in Ref. [22] (the values for ω(D) are in Appendix C) agrees with dimensional reduction and favors Scenario 2. Indeed, in Scenario 3 supersymmetry is broken only for space dimension D < D_int, suggesting a much smaller value ω(D = 5) ∼ D_int − D ≈ 0.1. However, further studies are needed to resolve this delicate issue.

ACKNOWLEDGMENTS

We acknowledge partial financial support from Ministerio de Economía, Industria y Competitividad (MINECO, Spain) through Grant No. FIS2015-65078-C2, and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant No. 694925). N. G. F. and M. P. were supported by a Royal Society International Exchanges Scheme 2016/R1.
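Before turning to the appendices, the polynomial-in-L^{−ω} extrapolations described in the Results can be sketched as follows; both the data points and their errors below are invented placeholders, and ω ≈ 0.83 is the three-dimensional pure-Ising value assumed from Table I:

import numpy as np
from scipy.optimize import curve_fit

OMEGA = 0.83   # 3D pure-Ising corrections-to-scaling exponent (assumed, cf. Table I)

def cubic(L, a0, a1, a2, a3):
    """xi/L = a0 + a1*x + a2*x^2 + a3*x^3, x = L^-omega; a0 is the L -> inf limit."""
    x = L ** (-OMEGA)
    return a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3

L = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
y = np.array([0.630, 0.615, 0.607, 0.603, 0.601])   # placeholder xi_dis/L "data"
yerr = np.full_like(y, 0.002)

popt, pcov = curve_fit(cubic, L, y, sigma=yerr, absolute_sigma=True)
chi2 = np.sum(((y - cubic(L, *popt)) / yerr) ** 2)
print(f"a0 = {popt[0]:.4f} +/- {pcov[0, 0] ** 0.5:.4f}, chi2/dof = {chi2:.2f}/1")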
Appendix A: Finite volume supersymmetry

In the case of the RFIM in the Landau-Ginsburg form, it is well known that we can neglect the thermal fluctuations near the critical temperature, and the model becomes equivalent to a stochastic differential equation. Under the approximation of uniqueness of the solution, we arrive at a supersymmetric field theory. In this theory we can define the superfield Φ(X) as a function of the superposition X = x ⊕ θ,

Φ(X) = φ(x) + θ̄ψ(x) + ψ̄(x)θ + θ̄θ λ(x),   (A1)

where θ is a complex anticommuting quantity, φ(x) is the original field, and ψ(x) and λ(x) are auxiliary fields whose correlation functions are related to the response functions. For instance, in the supersymmetric formulation the connected propagator C^(con)_{xy} corresponds to the propagator ⟨ψ(x)ψ̄(y)⟩ of the fermionic field, while the disconnected propagator C^(dis)_{xy} corresponds to the propagator ⟨φ(x)φ(y)⟩ of the bosonic field.

In the infinite-volume limit, the theory is invariant under the supergroup O(D|2), which implies that the correlation functions are functions of the superdistances. In particular, the correlation function of two superfields depends on X and Y only through the superdistance,

(X − Y)² = r² + (θ̄_X − θ̄_Y)(θ_X − θ_Y),   (A2)

where r² is the (squared) Euclidean distance between the points x and y in the D-dimensional space:

⟨Φ(X)Φ(Y)⟩ = F(Z),   (A3)

where Z = (X − Y)². By Taylor expanding both sides of Eq. (A3) in powers of θ̄θ we conclude that

F(Z) = F(r²) + (θ̄_X − θ̄_Y)(θ_X − θ_Y) F′(r²),   (A4)

because all higher powers of θ̄θ vanish. We readily obtain the Ward identity [33]

⟨ψ(x)ψ̄(y)⟩ = F′(r²) = (∂/∂r²)⟨φ(x)φ(y)⟩.   (A5)

We note that Eq. (A5) implies for the RFIM on an infinite lattice that

C^(con)_{xy} = Z₂ (∂/∂r²) C^(dis)_{xy},   (A6)

where large r and ξ are assumed (ξ is the correlation length), so that D-dimensional rotational invariance is restored, and Z₂ is a position-independent (therefore, irrelevant for us) constant [39]. These relations (A3)-(A6) lead to a set of Ward identities among various correlation functions. One also finds that the probability distribution of the φ field on a d ≡ D − 2 dimensional hyperplane is the same as in the dimensionally reduced theory.

However, in a finite volume rotational invariance is broken, so that supersymmetry and dimensional reduction are lost. Fortunately, close examination of the argument shows that we do not need the full O(D|2) supersymmetry: the O(2|2) supersymmetry is enough in order to have dimensional reduction. In order to recover the O(2|2) supersymmetry, the system size needs to be infinite only in the remaining two dimensions.

Our choice (see main text) is to stay in a system of linear size L in d directions and of size LR in two directions. At the end we need to consider the limit R → ∞ in order to have supersymmetry and dimensional reduction. Let us write the D-dimensional coordinates r as (x, u), where x is d-dimensional and u is two-dimensional. The O(2|2) supersymmetry acts on the two-dimensional subspace, labeled by the coordinates u ⊕ θ, that becomes infinite in the R → ∞ limit. Dimensional reduction gives information only on the probability distribution of the fields on the hyperplanes at fixed u, which have volume L^d. Supersymmetry does not give us information on the behaviour of the correlation functions of fields at different u, unless we stay at distances much smaller than L, where (2 + d)-dimensional rotational invariance is recovered. It connects, however, response functions at different u with the correlation functions at fixed u, as we shall see below.
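In fact, that last connection can be made explicit. The following is a sketch in continuum notation, under the assumptions stated in the comments (it anticipates the argument of Appendix B):

% Sketch: assuming the O(2|2) analogue of Eq. (A6), i.e. the u-plane
% derivative relation between connected and disconnected propagators, and
% exponential decay of the correlations in |u|:
\begin{align}
  K(x) &= \int \mathrm{d}^2u\; C^{(\mathrm{con})}(x,u)
        = Z_2 \int \mathrm{d}^2u\;
          \frac{\partial}{\partial u^2}\, C^{(\mathrm{dis})}(x,u) \\
       &= Z_2\,\pi \int_0^\infty \mathrm{d}(u^2)\,
          \frac{\partial}{\partial u^2}\, C^{(\mathrm{dis})}(x,u)
        = -\,Z_2\,\pi\, C^{(\mathrm{dis})}(x,u{=}0).
\end{align}
% Up to an overall constant, K(x) therefore has the same x-dependence as the
% disconnected propagator at fixed u, and so yields the same second-moment
% correlation length.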
Appendix B: The Ward Identity and its consequences

As explained above (see also the main text), we shall be considering points in the five-dimensional lattice, r = (x, u), where x = (x, y, z) refers to the first three Cartesian coordinates, while u = (t, u). In a similar vein, for the D = 4 case, we split r = (x, y, z, t) = (x, u), with x = (x, y) and u = (z, t). The (squared) Euclidean distance between two points in the D-dimensional lattice will be named

r²₁₂ = (x₁ − x₂)² + (u₁ − u₂)².   (B1)

In the finite-L case we only have an O(2|2) supersymmetry. Therefore, instead of the Ward identities corresponding to O(D|2), see Eqs. (A2)-(A6), the bosonic and fermionic propagators are now related through an O(2|2) Ward identity, in which the derivative in Eq. (A6) is taken only with respect to the squared two-dimensional distance (u₁ − u₂)².

In our geometry, we only have the full D-dimensional rotational symmetry for x² ≪ L². Instead, in the limit of a large aspect ratio, R → ∞, we have two-dimensional rotational symmetry (for the u variables) for any x. Thus, we expect the two correlation functions C^(dis)_{x,u} and C^(con)_{x,u} to be functions of the combination g(x) + u², where g(x) is some function of the d-dimensional coordinates that reduces to the d-dimensional Euclidean distance x² in the limit x² ≪ L² [a simple possibility in D = 5 would be g(x) = L²π^{−2}(sin²(πx/L) + sin²(πy/L) + sin²(πz/L))].

Note that, because we shall be taking the limit of large R at fixed L, the gap in the transfer matrix scales as 1/L. Therefore, the correlation function C^(dis)_{x₁,0,0;x₂,ρ,0} decays exponentially in ρ (for any L), so the convergence of the two-dimensional sums that define K and the second-moment correlation lengths poses no problems.

Hence, in the large-R limit, the second-moment correlation length ξ^(con)_{σ−η} is predicted to coincide with the one obtained from the disconnected propagator. The prediction holds to a high accuracy in the RFIM in D = 5, but certainly not in D = 4 (see Fig. 4 in the main part).

Let us conclude this section by explaining our name ξ^(con)_{σ−η} for the correlation length extracted from the K propagator, which stems from the way it is computed. Indeed, the fluctuation-dissipation relations for Gaussian random fields [20] suggest a simple way to compute the K_{x₁;x₂} propagator: for Gaussian h one has \overline{h_y ⟨S_x⟩} = σ² C^(con)_{xy}, so that K_{x₁;x₂} may be obtained as σ^{−2} Σ_u \overline{h_{(x₂,u)} ⟨S_{(x₁,0)}⟩}. Of course, x₁ and x₂ might be interchanged, so it is better to average over the two orderings.

Appendix C: Exponent ω for the RFIM: the smoking gun?

As discussed in the conclusions of the main part, dimensional reduction suggests that ω(D) = −ε + O(ε²), with ε = D − 6. Indeed, Fig. 5 strongly suggests that the dimensional-reduction prediction is sensible, because ω(D)/(D − 6) seems a very smooth function of D − 6. We do not find any indication for a zero of ω(D) near D = 5. It is our impression that such a zero, which we do not see, would be a direct prediction of the Scenario 3 discussed in the main paper.

Appendix D: Exponent ω for the pure Ising model in D = 2

Paradoxically, it is not trivial to determine the scaling-corrections exponent ω in the D = 2 pure Ising model, which is one of the best known models in Statistical Mechanics. The difficulty lies in that the leading correction to scaling seems to have a somewhat unusual origin. Consider, for instance, the magnetic susceptibility χ as computed at the critical point for a system of linear dimension L.
It is expected to scale as

χ ≃ A L^(2−η) + C,  (D1)

where η = 1/4 is the anomalous dimension, A is a scaling amplitude, and C is a constant term due to the analytic part of the free-energy density. Eq. (D1) can be cast as well in the typical form for scaling-corrections studies (see, e.g., Ref. [32]):

χ ∼ L^(2−η)(A + C L^(−ω)),  ω = 2 − η = 7/4.  (D2)

However, this exponent ω = 7/4 is not related to any irrelevant operator, but to the analytic part of the free energy. Hence, the reasoning leading us to Eq. (D2) makes sense only if the ω exponents arising from all the irrelevant operators are larger than 7/4. Only under this assumption would the leading corrections to scaling be given by Eq. (D2). Now, it is well known that an operator associated with the dilution for the q-Potts models in D = 2 (the q = 2 Potts model is the Ising model) has dimension 10/3, and then ω = −(D − 10/3) = 4/3 [40]. According to the discussion above, the leading corrections to scaling would then be given by ω = 4/3, rather than 7/4. However, we think this is not the case, due to a number of theoretical and numerical reasons:

• This dilution operator is outside of the main Kac table of operators for the Ising model. Thus it is not produced by other operators (such as the identity, spin or energy operators), and it is therefore expected not to contribute to the corrections to scaling. Note that, on the contrary, the operator is inside the Kac table for other Conformal Field Theories (CFT), such as the 3-Potts model [41], for instance. In fact, in the limit q → 4, the critical points for the Potts and the tricritical Potts models (the latter corresponding to the dilution fixed point) merge and, indeed, the dilution operator has dimension 2 in this limit. It is one example for which one finds ω = 0.

• The above analytical reasoning was confirmed in Ref. [42], which considered (numerically) various extensions of the Ising model (the antiferromagnetic Ising model in a magnetic field and the Blume-Capel model). The exponent ω = 4/3 was not found in any of these models (rather, a correction ω ≈ 2 was identified). Indeed, the authors of Ref. [42] concluded that the dilution contribution to the corrections to scaling is indeed given by an exponent ω = 4/3, but with amplitudes proportional to (q − 2), and thus absent for the Ising model, in agreement with CFT predictions. This scenario was supported by simulations of the random-cluster model for q close to 2.
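As an illustration of how such a corrections-to-scaling ansatz is confronted with data, the sketch below fits Eq. (D2) to synthetic susceptibility values; the data points are fabricated for demonstration only, and scipy's curve_fit is just one possible fitting backend.

```python
import numpy as np
from scipy.optimize import curve_fit

eta = 0.25  # exact anomalous dimension of the 2D Ising model

def chi_ansatz(L, A, C, omega):
    # Eq. (D2): chi ~ L^(2-eta) * (A + C * L^(-omega))
    return L**(2.0 - eta) * (A + C * L**(-omega))

# Synthetic 'data' generated with omega = 7/4 plus tiny noise (illustrative only).
L = np.array([8, 16, 32, 64, 128, 256], dtype=float)
rng = np.random.default_rng(0)
chi = chi_ansatz(L, 1.0, 0.5, 1.75) * (1.0 + 1e-3 * rng.standard_normal(L.size))

popt, pcov = curve_fit(chi_ansatz, L, chi, p0=(1.0, 0.5, 1.5))
print("A, C, omega =", popt)  # omega should come out close to 7/4
```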
FIG. 1. ξ(L, R)/L data, as computed in the D = 5 RFIM. The agreement of the L = ∞ extrapolation with the value of ξ/L from the pure Ising model is a direct confirmation of the supersymmetric Ward identity, see Appendix B. Inset: zoom of the main-panel data corresponding to R = 5, 10, and L > 4. For the sake of clarity, in the vertical axis we have subtracted the value of the pure Ising model (see also Table I).

FIG. 3. As in Fig. 1, but for the U₄(L, R) data, as computed in the D = 5 RFIM. For comparison, we also show data for the pure Ising model in three spatial dimensions. Corrections to scaling in the pure model are of similar size (but opposite sign) to those of the large-R limit for the RFIM at D = 5.

FIG. 4. Dimensionless quantities ξ^(dis)(L, R)/L (a), ξ^(con)_{σ−η}(L, R)/L (b) and U₄(L, R) (c) vs. L^(−ω), as computed in the D = 4 RFIM. We set ω = 1.75 from Table I. We show the corresponding universal values for the 2D pure Ising model (black dashed lines). Note that for R = 1 there are two natural ways of computing U₄. One way (black squares) is averaging over a co-dimension-two manifold [this is the natural way for a supersymmetry check, recall Eq. (6)]. The other way, which is the natural one when studying the D = 4 RFIM per se, is averaging over the full four-dimensional lattice (green diamonds). Clearly, the two choices differ, both at finite L and in the large-L limit.

FIG. 5. The corrections-to-scaling exponent ω, as computed from the RFIM in D = 3 [19], D = 4 [21] and D = 5 [22], in units of D − 6, versus the space dimension. If we explicitly assume dimensional reduction (DR), we also have an exceedingly more accurate result for D = 5 (from the three-dimensional pure Ising model (3D IM) [37]) and an exact result at D = 6.

TABLE I. Universal quantities as computed in the pure Ising model at two and three spatial dimensions. The somewhat controversial situation with the corrections-to-scaling exponent ω in two dimensions is discussed in Appendix D.
Mineralogical, Geochemical and Physico-Chemical Characterization of Clay Raw Materials from Three Clay Deposits in Northern Cameroon

The clay raw materials of three clay deposits from Northern Cameroon were characterized. The three deposits, located in Gaschiga, Sekandé and Boulgou, are locally used as building materials, but no data are available on these materials and they are relatively unknown. Mineralogical, geochemical and physico-chemical characteristics were studied using X-ray diffraction, X-ray fluorescence and physico-chemical analyses. Mineralogically, quartz was the most abundant mineral in the studied raw materials. It is associated with abundant smectite, kaolinite and K-feldspar, and with minor amounts to traces of hematite and amphibole. Geochemically, these clayey soils are predominantly siliceous (SiO2, 51%-59%), with a significant amount of aluminium (Al2O3, 15%-19%) followed by iron oxides (Fe2O3, 3%-10%). Other oxides (K2O, MgO, TiO2, Na2O, MnO, CaO and P2O5) occur in relatively low proportions. The high silica content explains the sandy nature of these clays. The results of the granulometric analysis show that the studied raw materials contain sand (39%-68%) as the dominant grain-size fraction, followed by clay particles (17%-38%), silt (1%-36%) and gravel (0%-16%). The studied clayey soils were moderately plastic, with plasticity index values ranging from 13% to 30%, and are also characterized by high liquid limits of 34%-63%.

Introduction

Clay is a widely distributed, abundant mineral resource of major industrial importance for an enormous variety of uses (Ampian, 1985; Reeves et al., 2006; Murray, 2007). According to Velde (1983), the term clay applies both to materials having a particle size of less than 2 μm and to the family of minerals that share similar chemical compositions and common crystal-structure characteristics. This material can also be shaped; it shrinks and hardens upon drying and consolidates after firing, which allows the formation of a more or less important vitreous phase (Melo et al., 2003; Kamseu et al., 2007; Pialy, 2009). Clays have varying chemical compositions depending on the physical and chemical changes in the environment where the deposits are found (Salawudeen et al., 2010). In the context of sustainable development and technological innovation, the study of clay and its properties for various applications therefore appears imperative. Clay deposits have been identified in all regions of Cameroon (Sieffermann, 1959; Nkoumbou et al., 2001; Mefire et al., 2015; Basga et al., 2018; Nchare et al., 2018; Nzeukou et al., 2021), though with differing properties, probably owing to geological differences. Many houses in cities and villages are built with these materials for social and economic reasons. In Northern Cameroon, for example, clay materials are mainly used for the manufacture of pottery items and building materials such as bricks (Djenabou et al., 2015; Temga et al., 2015; Tsozué et al., 2017; Yanné et al., 2018; Nguiamba et al., 2019; Nzeukou et al., 2021; Yaboki et al., 2021). These activities, practiced by craftsmen during the dry season, provide them with income to meet their needs. While the consumption of these products tends to become widespread, their production remains very unsustainable in some developing countries.
This situation can be explained by the virtual absence of a real industrial fabric and by a poor estimation of the potential of local resources. Therefore, the evaluation of this natural resource has an important effect on the economic development of countries. The exploration of new clay deposits in developing countries such as Cameroon could significantly contribute to socio-economic development, as it also helps to identify the original materials. Many factors intervene in the industrial applications of clay materials, including their structure, composition and physical properties. This paper aims to characterize the clay raw materials collected from three clay deposits in the northern part of Cameroon, specifically from the mineralogical, geochemical and physico-chemical points of view. The results of the work will be used to build up a database to support the start-up of industrial projects based on local clay materials.

Study Area

The study area is located at the center of the Benue valley in North Cameroon, between latitudes 9˚30' and 10˚10'N and longitudes 13˚15' and 13˚46'E (Figure 1). The study was undertaken in three localities, Gaschiga, Sekandé and Boulgou (Table 1). They were chosen mainly for the traditional use of their clays as building materials. The climate is Sudanian, with two contrasting seasons: a humid one from May to October and a dry one from November to April. Total yearly precipitation varies between 900 and 1500 mm, and the mean annual temperature is 28˚C. The general landscape is composed of two geomorphological units, an extensive plain whose monotony is interrupted here and there (Ngounouno et al., 2001; 2003) (Figure 2). The vegetation is a seasonally flooded prairie which is strongly modified by farming activities (Letouzey, 1980). The superficial formations are mainly constituted of vertisols associated with ferruginous soils (Raunet, 2003; Tamfuh et al., 2011; Kagonbé et al., 2020a).

Sampling

Field work was carried out during the dry season and consisted of direct observations, descriptions of environmental settings and a soil survey in order to choose the positions of the pits. Table 1 outlines the profile codes, sampling codes and their geographic coordinates. Eleven pits were dug in the three localities (Gaschiga, Sekandé and Boulgou) and thirteen samples were collected: five in Sekandé (SK1C2, SK1C3, SK2C2, SK3C2 and SK4C3), three in Gaschiga (GA1C2, GA2C2 and GA3C2) and five in Boulgou (BO1C2, BO2C2, BO3C2, BO5C2 and SPmG) (Table 1). The samples were selected on the basis of their color and homogeneity. About 30 kg of each sample were collected, placed in polythene bags, labeled and sent to the laboratory for analyses.

Analytical Techniques

Mineralogical and geochemical analyses were done at the Research Unit Clay, Geochemistry and Sedimentary Environments (AGEs) of the University of Liège in Belgium. X-ray diffraction patterns were obtained with a diffractometer (Bruker Advance 8) equipped with Ni-filtered CuKα radiation, an automatic slit and on-line computer control. The samples were scanned from 2˚ to 45˚ 2θ. Chemical analyses were obtained by atomic absorption spectroscopy. Loss on ignition (LOI) was measured from the total weight after ignition at 1000˚C for 2 h. The chemical alteration index (CAI or CIA) is considered to be a good measure of the degree of weathering (Nesbitt & Young, 1982). Its calculation is based on molecular proportions: CIA = Al2O3/(Al2O3 + CaO* + Na2O + K2O) × 100.
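As an illustration of this calculation, here is a minimal Python sketch converting oxide weight percent into the molecular proportions on which the CIA is based; the molar masses are standard values, and the example composition is hypothetical rather than one of the thirteen samples.

```python
# Molar masses (g/mol) of the oxides entering the CIA.
M = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt, cao_star=None):
    """Chemical index of alteration from oxide wt%.

    CIA = Al2O3 / (Al2O3 + CaO* + Na2O + K2O) * 100, in molar proportions.
    cao_star: CaO of the silicate fraction (wt%), i.e. already corrected
    for carbonate and apatite; defaults to total CaO if no correction given.
    """
    mol = {ox: wt[ox] / M[ox] for ox in ("Al2O3", "Na2O", "K2O")}
    mol["CaO"] = (cao_star if cao_star is not None else wt["CaO"]) / M["CaO"]
    return 100.0 * mol["Al2O3"] / (mol["Al2O3"] + mol["CaO"]
                                   + mol["Na2O"] + mol["K2O"])

# Hypothetical composition (wt%), for illustration only.
sample = {"Al2O3": 17.0, "CaO": 1.2, "Na2O": 0.8, "K2O": 2.1}
print(round(cia(sample), 1))  # ~75, i.e. a moderately weathered material
```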
In this equation, CaO* is the CaO associated with the silicate fraction, corrected for inputs from carbonate and apatite (Ozaytekin & Uzun, 2012). Since Al is much more immobile than the alkali elements (Na+ and K+) and Ca2+, changes in CIA reflect changes in the proportions of feldspar and of the various clay minerals developed in the clay raw materials. For the semi-quantitative analysis of the samples, the relative abundance of minerals was estimated from the intensity of the main reflections. Physico-chemical properties were determined at the laboratory of the Local Materials Promotion Authority (MIPROMALO) in Yaoundé, Cameroon. The grain size distribution was determined following the NF P18-560 standard for dry sieving and the P94-057 standard for sedimentation. The plasticity was measured through the Atterberg limits, plastic limit and liquid limit, according to the ASTM D4318-10 standard. The plasticity indices were calculated after the determination of the Atterberg limits (Casagrande, 1948).

Mineralogical Composition

The results of the mineralogical analysis on disoriented powders of the thirteen collected samples are given in Table 2. They show the predominance of quartz, followed by smectite, kaolinite and K-feldspar, which are present in all samples of the three localities, but in various proportions. In Gaschiga, quartz is the most abundant mineral, followed in the same proportion by smectite, kaolinite and K-feldspar. In Sekandé, quartz is also the main mineral, but more highly expressed than in Gaschiga. It is followed by smectite and kaolinite, whose quantities are, on the contrary, lower. K-feldspar remains in the same proportion as in Gaschiga. In Boulgou, quartz remains the most represented mineral. Contrary to the two other sites, kaolinite is the most important mineral here after quartz. It is followed by smectite and K-feldspar, whose quantities are globally low compared to the two other sites. The Boulgou site stands out from the other sites by the presence of hematite, whose quantities are similar to those of smectite. Also, amphibole, another primary mineral, is present in the study area; it is observed only in Sekandé and Boulgou.

Geochemical Composition

The geochemical composition of the clay samples is given in Table 3. It is noted, however, that the most weathered material is observed in Sekandé.

Physico-Chemical Properties

Table 4 summarizes the physico-chemical characteristics of the different samples. Globally, the results reveal that the particle size distribution varies slightly between the studied localities. The proportions of particles less than 2 μm in size vary from 17%-33% for Gaschiga, 26%-38% for Sekandé and 20%-32% for Boulgou; the clay fraction thus has intermediate quantities. The silt fraction has its lowest quantities in Boulgou, where it ranges between 1% and 8%; in Gaschiga and Sekandé its values vary from 11% to 36% and from 11% to 16%, respectively. Sand is the most important fraction. Its quantities are 43% to 50% in Gaschiga, 39%-50% in Sekandé and 50%-68% in Boulgou; the highest proportions are observed in Boulgou. Gravel is only slightly represented in the clay raw materials; its quantities vary from 0% to 16%, and only BO1C2 presents a high value. The presence of organic matter in a raw material is inconvenient and undesirable: organic compounds particularly reduce the strength of a building material, and they cause corrosion and softening in the material over time. Their proportions vary between 2% and 6%.
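Since the ternary texture diagrams used further below require the sand, silt and clay contents to sum to 100% on a gravel-free basis, a minimal renormalization sketch may be useful; the sample composition is illustrative only, chosen within the reported ranges.

```python
def texture_coordinates(gravel, sand, silt, clay):
    """Renormalize sand/silt/clay to 100% on a gravel-free basis,
    as required before plotting a sample in a ternary texture diagram."""
    fines = sand + silt + clay
    return {name: 100.0 * value / fines
            for name, value in (("sand", sand), ("silt", silt), ("clay", clay))}

# Illustrative sample (percentages) within the reported ranges.
print(texture_coordinates(gravel=5.0, sand=50.0, silt=15.0, clay=30.0))
```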
The liquid limits of the studied samples ranged between 36% and 63% (Table 4; Figure 3), while the plastic limit values were between 18% and 36%. The highest liquid limit value (63%) was observed in the GA1C2 sample. The resulting plasticity indexes ranged between 17% and 33%.

Figure 3. Casagrande's plasticity chart (Holtz & Kovacs, 1981) showing the representative clay material samples.

Discussion

The mineralogy of the studied clayey materials is characterized by relatively high contents of quartz followed by smectite, kaolinite and K-feldspar, with small proportions of amphibole observed in Sekandé and Boulgou, and hematite observed only in Boulgou. Except for the GA3C2 sample from Gaschiga, quartz is very abundant in all localities. The smectite clay mineral is responsible for the extensive swelling and shrinking upon drying and wetting, the major characteristic of all vertisols (Duchaufour, 1977; Soil Survey Staff, 1999; FAO, 2006; Aydinalp, 2010). The high smectite content in the studied clay material is related to the low landscape positions, a strongly contrasted climate and the presence of a clay-rich alluvial parent material (Boulvert, 1968; Bocquier, 1973; Gavaud, 1975; Duchaufour, 1977). The predominance of smectite suggests that the chemical process acting in the study area is bisiallitisation (Pédro, 1966; Temga et al., 2015). All the studied clay bodies are rich in quartz and kaolinite, which suggests felsic sources. Kaolinite is formed by the decomposition of orthoclase feldspar in granite (Nzeukou et al., 2021; Yaboki et al., 2021). The existence of hematite in Boulgou is linked to the significant iron oxide (Fe2O3) contents and is in line with the presence of amphibole. The presence of kaolinite suggests that monosiallitisation is a crystallochemical process acting in the study area alongside bisiallitisation (Pédro, 1966). The studied clayey materials are characterized by relatively high contents of SiO2 followed by Al2O3 and Fe2O3. However, they also contain some minor elements such as potassium, sodium and titanium oxides. The SiO2 content is associated with the presence of quartz particles: the highest values correspond to the highest sand fractions in the particle size distribution analysis and to the highest quartz contents in the mineral composition. Alumina (Al2O3) reflects the presence of aluminosilicates. Iron (Fe2O3) is related to the presence of hematite, while potassium (K2O) is bound to the presence of K-feldspars (Nzeukou et al., 2021). The high SiO2/Al2O3 ratio (> 2) suggests the presence of silica in free form and of 2:1 clay mineral types (Crook, 1974; Temga et al., 2015) and indicates the high chemical maturity of the investigated samples (Maignien, 1958; Tsozué et al., 2017; Temga et al., 2015). A representation in the SiO2-Al2O3-Fe2O3 triangular diagram showed that all samples plot along the SiO2-Al2O3 axis (Figure 4), toward the SiO2 pole, in line with the high SiO2/Al2O3 ratio. This is indicative of an excess of SiO2 in the studied soils and confirms the presence of quartz and of 2:1 phyllosilicates of montmorillonite type (Tsozué et al., 2017). The suitability of clays for different industrial applications is based on their particle size distribution. The particle size distribution of a clay plays an essential role in defining the properties of suspensions (plasticity and viscosity) and of green pastes during drying and firing (Rivi & Ries, 1997; Basga et al., 2018).
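Before turning to the fired-material implications below, note that the plasticity classification used above reduces to two numbers per sample. A minimal sketch follows; the A-line equation is the one of Casagrande's standard chart, and the LL/PL pairs are illustrative values within the reported ranges, not measured data.

```python
def classify_plasticity(liquid_limit, plastic_limit):
    """Classify a soil from its Atterberg limits (Casagrande-style).

    PI = LL - PL; the A-line PI = 0.73*(LL - 20) separates clays (above)
    from silts (below), and LL = 50 separates low from high plasticity.
    """
    pi = liquid_limit - plastic_limit
    a_line = 0.73 * (liquid_limit - 20.0)
    kind = "clay" if pi >= a_line else "silt"
    grade = "high" if liquid_limit >= 50.0 else "low-to-medium"
    return pi, f"{grade}-plasticity {kind}"

# Illustrative LL/PL pairs within the reported ranges.
for ll, pl in [(36.0, 18.0), (50.0, 24.0), (63.0, 30.0)]:
    print(ll, pl, classify_plasticity(ll, pl))
```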
Grain-size distribution also affects the microstructure and the mechanical properties of fired materials (Ngun et al., 2011; Nguiamba et al., 2019; Sayouba et al., 2019; Temga et al., 2015; Voula et al., 2021). The particle size distribution of the studied clays consists moderately of the clay fraction and mostly of the sand fraction. Indeed, the particle size distribution of the studied clays shows that the samples are constituted of mixtures in varied proportions. The position of the different samples in the ternary texture diagram (Figure 5) shows that the dominant soil texture is sandy silt for Boulgou, and heavy sandy silt for Sekandé and Gaschiga, except for GA1C2. The dominance of the coarse range could be attributed to the immediate environment, especially to weathering and disintegration products of sandstone (Kagonbé et al., 2020b; Voula et al., 2021). The Atterberg limits of the studied samples are shown in the Holtz and Kovacs (1981) diagram (Figure 3). Based on this diagram, samples BO2C2, BO3C2 and BO5C2 are low-plasticity clays, samples BO1C2, SK2C2, SK1C2 and GA3C2 are medium-plasticity clays, while samples SK3C2, SK4C3, SK1C3, SPmG, GA2C2 and GA1C2 are high-plasticity clays. According to McNally (1998), the plasticity of clay materials depends on their particle size distribution and mineralogical composition. This characteristic could also be attributed to the low organic matter content and the dominance of the kaolinite mineral in the clay (Abdullahi et al., 2012).

Conclusion

In this study, the geochemical, mineralogical and physico-chemical properties of the raw clayey materials from Gaschiga, Sekandé and Boulgou in Northern Cameroon were investigated. Mineralogically, the studied raw clay materials are constituted of smectite and kaolinite as secondary clay minerals, associated with a high amount of quartz and small amounts of hematite, K-feldspar and amphibole. Monosiallitisation and bisiallitisation are the two crystallochemical processes acting in the study area. Geochemically, SiO2, Al2O3 and Fe2O3 are the main oxides. Al2O3 reflects the presence of aluminosilicates and Fe2O3 is related to the presence of hematite. The high SiO2/Al2O3 ratio indicates an excess of SiO2 in the form of quartz and the presence of 2:1 phyllosilicates of montmorillonite type together with kaolinite. Particle size analysis shows that the raw materials are mostly constituted of the sand fraction (39%-68%), followed by the clay fraction (17%-38%) and the silt fraction (1%-36%). The clay materials are moderately plastic clays, with plastic limits varying between 18% and 36% and plasticity indices ranging between 13% and 30%, which could be attributed to the low organic matter content and the dominance of the kaolinite mineral in the raw clay materials.
Towards sustainable resource management: identification and quantification of human actions that compromise the accessibility of metal resources

Although metals and minerals represent a prominent asset for sustainable development, continuous population growth and the current accelerations in the energy and mobility transitions are increasing concerns regarding their accessibility for current and future generations. As recent insights have identified access rather than depletion to be the dominant concern for resources, this paper elaborates on the (in)accessibility concept for such raw materials once they have entered the technosphere. It identifies six human actions that compromise accessibility: emitting, landfilling, tailing, downcycling, hoarding and abandoning. It analyses the degree of the generated inaccessibility and proposes the estimated duration of inaccessibility as a proxy. It further explores how current sustainability management tools like material flow analysis and life cycle analysis could be further developed to address resource (in)accessibility. Finally, the paper presents a case study on cobalt in the EU, where five compromising actions make 70% of the extracted cobalt inaccessible due to tailings (21.3%), landfilling (31.2%), downcycling (11.6%), dissipation (1.4%) and hoarding (4.3%); only 30% is used to expand the functional stock.

Introduction

Natural resources, including minerals and metals, are key to satisfying mankind's needs now and in the future. Mancini et al. (2018) investigated the function of raw materials in meeting the UN's Sustainable Development Goals for 2030. Their supply chain contributes to several environmental impacts, including water pollution and climate change, but they are essential in final applications. In addition to long-standing use in housing, transport and communication infrastructure, they are becoming more and more key in energy supply and storage to address climate change. In comparison to fossil-based energy generation, the application of neodymium, dysprosium and terbium in permanent magnets in wind turbines results in a 95-98% reduction of the climate impact of electricity use, despite the energy-intensive supply chains (UNEP, 2013). Apart from sustainable energy sources like wind, solar or hydropower, the energy transition calls for an expansion of the in-use stock of raw materials for its infrastructure. Low-carbon mobility (electric and hybrid vehicles) is following a similar pattern, with exponentially growing in-use stocks of metals, e.g. rare earth elements (REEs) in electric motors and cobalt in traction batteries (EC, 2020a).
The employment of metals in the coming years and decades will be tremendous in the energy and mobility value chains in order to face the climate challenge: the European Commission anticipates that for electric vehicle batteries and energy storage, the EU would need up to 18 times more lithium and 5 times more cobalt in 2030, and almost 60 times more lithium and 15 times more cobalt in 2050, compared to the current supply to the whole EU economy. Demand for rare earths used in permanent magnets, e.g. for electric vehicles, digital technologies or wind generators, could increase tenfold by 2050 (EC, 2020a; EC, 2020b). The above numbers alone illustrate that growing demand and growing stocks are in no way compatible with circular economy policies (e.g. EC, 2015) based on enhanced qualitative recycling only. One should be aware that recycling does not always deliver the same quality of resources as in the original product, as illustrated by the cascading concept discussed by Campbell-Johnston et al. (2020). An injection of metals from the mining of primary resources will remain necessary to meet the growing demand and allow for the projected change. Hence, the transfer of primary metals from the ecosphere to the technosphere will be essential to deliver the infrastructure through a Net Addition to the Functional Stock (NAFS). Indeed, it is vital to understand the key role of resources: they ultimately fulfill a function at the user by becoming part of the 'functional stock'. As we have a continuous renewal, change and growth of this functional stock because of changing needs and technologies, we face a continuous need for a 'net addition to the functional stock'. We introduce this new term to highlight the functionality at the user; the terms 'stock at the user' or 'in-use stock' can be ambiguous, as a fraction might not be functional but hoarded and/or at end-of-life. We intend to address the mass flows and stocks that are related to the build-up of the functional stock, rather than the function and functionality of the functional stock itself. It has to be examined to what extent the energy and mobility transitions shift the climate and energy challenge into a material challenge. One heavily debated material challenge, with metals in particular, is so-called resource depletion. In other words, will future generations have sufficient natural resources to meet the demand for metals? The life cycle assessment community has historically used methods like the Abiotic Depletion Potential (ADP), which characterizes 'abiotic resource depletion' based on the quantification of use-to-natural-stock rates (in other words, an application of the fixed-stock paradigm), or methods based on additional energy use or costs due to decreasing ore grades or increasing efforts, as with the extraction of oil from oil sands (Sonderegger et al., 2017; Sonderegger et al., 2020; Berger et al., 2020). However, the underlying assumption has been heavily criticized, as the quantification of natural stocks is scientifically questionable in its rationale. On top of that, it is very challenging methodologically and quantitatively, especially given the economic context and the dynamics of exploration; see e.g. Drielsma et al. (2016). The mining sector equally argues that metals as such are not necessarily depleted or gone once transferred to the technosphere, as metals do not vanish. They argue that there is no justification to leave, e.g.,
cobalt underground (to avoid depletion) if cobalt can remain in use within the technosphere for decades, thus delivering benefits to current and future generations. Interestingly, this idea was coined in the LCA community by Frischknecht (2014), who brought in terms like resource borrowing and post-consumer resource availability; but to the knowledge of the authors, the latter concepts have not been developed further. In a recent European project, SUPRIM (Sustainable Management of Primary Raw Materials through a better approach in Life Cycle Sustainability Assessment), stakeholders from various backgrounds were brought together to develop a better understanding of the resource problem (Schulze et al., 2020a; Schulze et al., 2020b). Rather than 'depletion', the project brought forward that the concern is continued access to resources by humans for use in the economy. Accessibility is defined as the ability to make use of a resource (Schulze et al., 2020a). This is in line with Berger et al. (2020), who define the safeguard subject for mineral resources as "[…] the potential to make use of the value that mineral resources can hold for humans in the technosphere". Hence, actions that compromise access to resources should be framed and quantified, and counter-measures should be adopted in function of sustainable resource management. Kral et al. (2019) recently discussed that, next to material cycles, so-called final sinks also exist, both as man-made and as environmental media. Ciacci et al. (2015) mentioned the problem named 'lost by design', i.e. there are common uses of metals where losses are intended in the application, e.g. copper in brake pads. Recent work by Charpentier Poncelet et al. (2019) and Zampori and Sala (2017) pointed to dissipation as key to developing new life cycle impact assessment methods for resource use under the area of protection natural resources. Helbig et al. (2020) quantified the dissipative losses of 18 metals. Van Oers et al. (2020) define several compromising actions like dissipation in the environment, hibernation in the technosphere and occupation in use. Dissipation in the environment has been taken further into a new life cycle impact assessment model (van Oers et al., 2020). All in all, current sustainability assessment tools like life cycle assessment (LCA) or material flow analysis (MFA) are not specifically designed to unravel the nature of compromising actions systematically. The research work cited in the paragraph above points to the importance of actions in the technosphere, which can take place anywhere along the value chain, as the key actions that compromise accessibility, rather than the ecosphere-technosphere transfer by mining. However, the sustainability impact of primary production should not be forgotten; it should be assessed through its impacts on ecosystem quality and human health. Indeed, extraction processes contribute 50% to global carbon emissions and even 80% to biodiversity losses (Oberle et al., 2019). Hence, the burdens associated with the primary supply of raw materials and metals should be minimized by cleaner technologies for both expanding and maintaining the functional stock. This means that, ideally, minerals and metals extracted by the primary production sector are fully transferred as a net addition to the functional stock (NAFS) and there is no need to compensate for resources already mined but made inaccessible by the abovementioned compromising actions.
Whereas a lot of research has been dedicated to the quantification of what we keep in the loop by proposing so-called circular economy indicators (see e.g. Moraga et al., 2019), this paper envisages the quantification of material losses rather than of material retention, by elaborating the (in)accessibility concept. Looking at material losses allows mapping where reductions of inaccessibility can be achieved. The focus is on metals, which have a stock and non-renewable character (Sonderegger et al., 2017). In principle, they cannot disappear at the element level and, theoretically, full continued access is possible in the absence of compromising actions. The goal and the novelty of this paper are to bring forward, develop and support the concept of (in)accessibility in function of sustainable resource management, and to contribute to rethinking sustainability assessment tools like MFA and LCA in function of better addressing resource (in)accessibility. In order to illustrate the insights obtained from the concept, an exploratory case study on cobalt within the EU has been elaborated. For the sake of clarity, key terminology is explained in Appendix 1.

Identification of the nature of compromising actions and related actors

Van Oers et al. (2020) point to human actions that lead to a change in the accessibility of resources, rather than to their depletion. Indeed, elements cannot be transformed as such and hence they cannot be depleted, unless they undergo nuclear fission or decay. Moreover, Van Oers et al. (2020) point out that exploration by the mining sector increases the stock of accessible resources, supporting the critique that using reserves as a basis for depletion methods is flawed and too narrow in scope. In terms of actions that decrease accessibility, they identify environmental dissipation, technosphere hibernation and occupation in use. Environmental dissipation is rather obvious: emission leads to very low concentrations in environmental compartments; these diluted stocks become inaccessible for mankind with the current state of technology and economics. There is quite a common vision on this type of compromising action, which can be called emissions into the environment, dissipative or dispersive flows into the environment, or disposal in the atmosphere, hydrosphere and geosphere. They can come from both point and diffusive sources (Kral et al., 2019). Both the terms dissipation and dispersion are used. Dissipation is a far broader term, as it is related to the outcome of an irreversible process and can in principle relate to other physical quantities than matter (e.g. energy), whereas dispersion clearly points to the spreading of mass in a larger volume (see Appendix 1). The second type of human activity that leads to inaccessibility, hibernation in the technosphere, is less intuitive. Van Oers et al. (2020) mention two terms, dissipation in the technosphere and hibernation, and state that it is not always straightforward to distinguish between the two. Nevertheless, they identify three hibernating stocks: landfills, tailings and abandoned products. Landfills and tailings are clearly confined stocks, i.e. intended to be kept in a closed place. Landfills originate from all kinds of disposal activities of industrial or household end-of-life materials. Tailings stem from mining activities. Mining processes actually lead not only to tailings as hibernating stocks, but also to waste rock (overburden and interburden, or 'scalpings') (Shaw et al., 2013).
Depending on the concentration of desired metals in the waste rock, waste rock might be labelled a second hibernating stock from mining activities, in addition to the tailings. Abandoned products need a closer look. On the one hand, there are clearly products at the user that are not in use anymore and are stored before disposal. This can be called hoarding. There are obvious examples of electronic equipment like mobile phones and laptops (Thiébaud et al., 2017a). However, there is also infrastructure that is abandoned. There are not only abandoned residential areas like abandoned villages (Jaszczak et al., 2018) but also abandoned industrial infrastructures, sometimes named brownfields, see e.g. De Sousa and Spiess (2018). Furthermore, there is a lot of formerly used infrastructure no longer utilized within active industrial and residential areas. In particular, abandoned metals-based infrastructure embedded in urban areas has to be mentioned. Swedish researchers investigated in detail the copper stocks in local power grids, identifying that almost 20% of the copper in the grid is no longer in use in cities like Gothenburg (Krook et al., 2011). Next to power grids, old railways are known as abandoned infrastructure (Quattrone et al., 2018). A lot of abandoned infrastructure may be poorly documented. The European Commission estimated 4.1 million vehicles with 'unknown whereabouts' in the EU (EC, 2018), being vehicles that are deregistered but not destroyed, and potentially exported, hoarded or abandoned. Whereas the inaccessibility of abandoned products hoarded by users is rather easily reversible, abandoned infrastructure may lead to a more severe and persistent inaccessibility. Hence, it is proposed to differentiate hoarding from abandoning infrastructure as human activities that lead to inaccessibility. Further on, we may identify other inaccessible stocks of materials within the technosphere in addition to the aforementioned landfills, tailings, hoarded stocks and abandoned stocks. Indeed, production and end-of-life processing lead to a dispersion of metals in all kinds of products retained in the technosphere. The level of complexity of modern products makes it extremely challenging, if not impossible, to make all embedded resources fully accessible at the end of the product's service. It should be recognized that in many cases recycling keeps materials and metals in the loop in society, but in applications where the metals do not deliver the same functionality as in the first application. This downcycling leads to a dispersion of metals in metal alloys or in (road) infrastructure, making them inaccessible and preventing them from re-entering the initial functional stock. As an example, De Meester et al. (2019) analysed the recycling of waste electronic and electrical equipment in Belgium. Despite quite significant recycling rates for metals like aluminium and palladium, of 81% and 60% respectively, only half and one third of these percentages add back to the stock with the same functionalities. The other half and two thirds flow into the dispersed stock within the technosphere. The complexity of products today challenges high-quality recycling, see the example of smartphones, where at least 70 of the 83 stable elements can be found (Rohrig, 2015). Finally, there is also the stock in use or the functional stock ('occupation in use') that Van Oers et al. (2020) identified as inaccessible.
Its inaccessibility may be questioned, as it is indeed inaccessible for many humans but at the same time accessible for its users. This might raise questions about the distribution of accessibility, be it geographically, socially, economically or culturally, but these aspects are beyond the scope of this paper. At least, metals serve a purpose in providing services as part of the functional stock. In this sense the functional stock plays an essential role and is to be separated from the other inaccessible concentrated and dispersed stocks that do not deliver any service. However, the quantities that lead to service should be investigated to better understand the system. More resource-efficient products, with fewer materials delivering the same functionality (typically part of product service system (PSS) models, and expressible through resource efficiency metrics like Material Input Per unit of Service, MIPS), are important in sustainable resource management strategies (Wiesen and Wirges, 2017), in addition to addressing inaccessibilities. A typical example where more service can be provided with lower quantities of materials is product sharing, e.g. car sharing. As a result, six inaccessible stocks are identified and summarized in Table 1, with the associated human compromising action, their location and their status (dispersed or confined), together with a typical example. It should be highlighted that the stocks identified are not only dispersed or dissipated stocks, the latter getting most of the attention in the rethinking of life cycle assessment impact methods (Zampori and Sala, 2017; Charpentier Poncelet et al., 2019; van Oers et al., 2020).

Factors affecting the duration of the inaccessibility

The human actions that compromise accessibility through transfer into six different stocks have to be differentiated in terms of their severity or degree. The characteristics of the stocks, together with the socio-economic framework and technological development, may determine when these stocks may become accessible again. The uncertainty about the duration of inaccessibility may be very different from one inaccessible stock to another. In a broader context, inaccessibility may be limited by technological and societal factors. The latter do not only govern the in-use stocks but also, to some extent, the duration of inaccessible stocks because of ownership. In this section we focus on socio-economic and technological constraints in particular. The latter are fundamentally governed by thermodynamics (Castro et al., 2004; Castro et al., 2007; Reuter et al., 2006). When the thermodynamics are unfavorable, e.g. in the case of huge dilutions or of extremely strong interactions between metals as in alloys, it becomes economically unfeasible for technology to make the metals accessible again. The most reversible inaccessibility, at least from a technical point of view, is clearly related to the hoarded stock. Its location and confinement, and the current state of the technology with proper take-back, pretreatment and recycling schemes in many countries, demonstrate the feasibility of making it accessible, see e.g. metal recycling from waste electric and electronic equipment (Thiébaud et al., 2017a; De Meester et al., 2019). Actually, the cause of the inaccessibility is the socio-economic context, where there is insufficient (economic) incentive to avoid hoarding. In line with these latter inaccessible stocks are the abandoned stocks.
The metal stocks therein are concentrated (present with a relatively high mass fraction in the infrastructure) and relatively confined (present in places, i.e. urban areas or brownfields, whose volume is relatively low compared to environmental compartments or the technosphere as a whole), but at the same time relatively dispersed (spread over a relatively large area and volume). Equally, in the short to medium term, they are not expected to become accessible again, even though technology might be available either to (re)use or to recycle this stock. The continuation of the inaccessibility may stem predominantly from the socio-economic context. The economic feasibility is low, but an even more important obstacle may be the practical feasibility. Indeed, making these stocks accessible again requires knowledge about their exact location and composition, and the historic build-up of these stocks is not properly documented. A second practical unfeasibility, for those stocks that are documented, lies in the physical and technical hindrances. Selective removal of abandoned infrastructure amongst functional structures can lead to malfunctioning of the latter, or may even require full temporary removal of it, which may be socially unacceptable. Finally, landfills and tailings may be the stocks with a degree of inaccessibility in between. They are usually well located and confined. They are the result of a lack of proper technology to make use of the materials (e.g. too low concentrations) or of a lack of interest in particular raw materials (e.g. co-occurring metals) at the point of their generation. The exploitation of these stocks is currently a significant subject of study, with prospective studies in which sampling is key (see e.g. Blasenbauer et al., 2020), and even actual exploitations, as the socio-economic context changes over time. Equally, mining them as part of their environmental management can occur to mitigate environmental impacts and risks. Graedel et al. (2004) studied the importance of tailings for copper, where reworked tailings were estimated at 2% of the global copper inputs to production. It must be clear that the duration range of this type of inaccessible stock might reveal the highest spread, given that they may be extremely variable in terms of the particular raw materials embedded, concentrations and chemical structure. In conclusion, differences in confinement, the nature of the confinement, the technology development for the reversal of the inaccessibility of the respective stocks and the socio-economic context lead to a differentiation in terms of degree of inaccessibility amongst them. The size, the geographical location, the spatial distribution and the lack of mobility of the stocks may be key in the reversal. It is a challenge to bring forward (semi-)quantitative measures to express the degree of irreversibility.

Duration as a measure to differentiate the degree of inaccessibility

A possible way to qualify and quantify the difference in degree of inaccessibility may be its anticipated duration. Putting a number of years on this duration is extremely challenging, as it means looking into the future. In this section, an effort is put forward, based on a literature study and interviews with specialists in various areas. The results are summarized in Fig. 1. The effort has been made in two manners. First of all, a best estimate of the inaccessibility duration has been based on available information (cited further in this section), along with a quality assessment of the estimate.
Secondly, as uncertainty is high, a range with minimum and maximum duration estimates has been put forward in function of the agreed time horizons as defined in the SUPRIM project. Degrees of inaccessibility have been classified in time spans between today (0 years), short term (5 years), medium term (25 years), long term (500 years) and infinite. The results are summarized in Fig. 1. The best estimates could be put forward for the hoarded stock. Indeed, several surveys have been done on the hoarding of materials at the household level for appliances that contain important raw materials (Thiébaud et al., 2017a; Wilson et al., 2017; Zhang et al., 2019; Glöser-Chahoud et al., 2019; Godoy León and Dewulf, 2020), but equally for industrial equipment (Godoy León and Dewulf, 2020). For voluminous devices such as flat panel displays, hoarding is less than one year, whereas for smaller products it may rise up to 3-4 years. A best estimate with a rather high quality leads to a duration estimate of 2.5 years, clearly within the short-term time window of 0-5 years. The quantities of these inaccessible stocks are anticipated to be rather limited. However, based on the service and hoarding times, the study of Thiébaud et al. (2017a) indicates their relative importance at the household level, with 20-25% of the stock hoarded at households. Based on the SUPRIM project, where several experts were consulted (see the acknowledgement in Schulze et al., 2020a), and based on discussions with various other experts (see the acknowledgement in this paper), it becomes clear that dispersion in the environment leads to a long-term inaccessibility, i.e. for multiple generations. There is clear consensus that it is long term, although the number of years assigned might be subject to debate. Within the SUPRIM project, the minimum was set at 100 years, with final adoption of the value of 500 years, in line with other long-term effects modeled in LCA, e.g. the global warming potential of greenhouse gases at a 500-year span. It must be mentioned that other LCA practitioners set long-term horizons in the 60,000-80,000 years range (Weidema et al., 2013). All in all, the time span can be set at minimally 500 years. Tailings are another stock from which metals could be made available again (Lottermoser, 2011; Shaw et al., 2013). Ongoing developments point to a finite duration of the inaccessibility (decades), which is well illustrated by the recent report from the MINEA (Mining the European Anthroposphere) project (Blasenbauer et al., 2020). Information on their potential is not all in the public domain for strategic reasons (Lottermoser and Suppes, 2019). Nevertheless, from various reports, an indication of ongoing and planned tailings-mining activities could be made, exemplified in Table 2. Economics set the scene for making the tailing stocks accessible. Precious metals from tailings are made available sooner, as this becomes techno-economically feasible earlier. Based on the limited cases, it may be suggested that these re-mined tailings are in the order of 50 years old. For other metals, cases show the economic viability of mining tailings with an age of about 80 years. If this retrospective analysis is used as a proxy to anticipate the duration of the tailings generated today, an average estimate of 65 years can be proposed, clearly in between the medium term (25 years) and the long term (500 years).
It must be emphasized that the spread on the estimate of 65 years may be huge, as the embedded raw materials might have very different concentrations (given that the currently targeted raw materials might be different from the originally targeted ones), the chemical structure might be very different as a result of the original mining and processing technology, and the environmental conditions may require environmental remediation that can include mining (Sözen et al., 2017). Additionally, the composition of tailings may also change over time due to weathering. Economic recovery may also be influenced by the spatial context and the presence of penalty elements. In a similar way, there is growing activity and economic analysis to mine old landfills (Winterstetter et al., 2016; Laner et al., 2019). Winterstetter et al. (2016) analyzed such an anthropogenic deposit following the UNFC-2009 classification. Based on a negative net present value, they concluded that the landfill under study cannot be classified as a reserve. Nevertheless, with potential future changes of a set of key modifying factors, such as an assumed doubling of ferrous and non-ferrous prices within 20 years, more efficient energy technologies and avoided aftercare costs, they consider landfill mining as 'potentially commercial', categorizing it into the 'resource' category. Hence, the duration of the inaccessibility can be set at the medium term, i.e. 25 years, in the best case. However, the variability might be high, as not all landfills have the same potential as the case in the study of Winterstetter et al. (2016). It can be anticipated that many other landfills are far less favorable to be mined (e.g. if mainly plastic waste is landfilled), setting the range from the medium term (25 years) to the long term (500 years). To make a best average estimate, we may rely on the estimates for tailings as a proxy, given their similar nature to some extent; both are stocks that are well confined and geographically well identified. Obviously, in this way the estimate for landfills is of lower quality than that for tailings. Abandoned stocks have been studied by Swedish researchers (Krook et al., 2011; Krook et al., 2015; Wallsten et al., 2015), albeit mainly limited to copper cables in cities. They concluded that under current conditions mining urban infrastructure does not make economic sense. Apart from these interesting studies, to the best of our knowledge there is no other study available that gives any basis to estimate the duration of the inaccessibility of abandoned stocks. In conclusion, the duration of inaccessibility is to be situated somewhere in the medium- to long-term range, with a limited quality of the estimate. As the best possible estimate, we suggest setting it at 262.5 years, i.e. in the middle of the 25-500 years time frame. Despite the high level of uncertainty, as an operational solution aimed at a transparent discussion and subsequent fine-tuning, we positioned the duration above the 65 years of landfills and tailings. At the same time, given the confined nature, we equally positioned it below the 500 years of stocks dissipated into the environment. Finally, there is poor ground to make estimates on the duration of stocks dispersed into the technosphere.
To the best of our knowledge, there is hardly any information on developments of economically viable technologies that are capable of recovering particular raw materials or metals out of plastics, paints, papers, glass, ceramics, complex alloys or road infrastructure, just to name a few anthropogenic stocks. Ciacci et al. (2015) simply labeled these stocks as 'lost by design'. Hence, based on the lack of indication of their recovery in the short to medium term, it may be suggested that their inaccessibility is at the long term, 500 years, in line with the counterpart stock dispersed into the environment, although with less clear indications and hence a lower quality degree assigned to the estimate. Fig. 1 summarizes the minimum and maximum time horizons of the inaccessibility for the various stocks, along with the best estimate and the associated level of the quality of the estimate.

Footnote: For definitions of 'resources' and 'reserves', the reader is referred to Drielsma et al. (2016): (1) A (mineral) resource is a concentration or occurrence of solid material of economic interest in or on the Earth's crust in such form, grade or quality, and quantity that there are reasonable prospects for eventual economic extraction. The location, quantity, grade or quality, continuity, and other geological characteristics of a mineral resource are known, estimated, or interpreted from specific geological evidence and knowledge, including sampling. (2) A (mineral) reserve is the economically mineable part of a measured and/or indicated mineral resource. It includes diluting materials and allowances for losses, which may occur when the material is mined or extracted, and is defined by studies at pre-feasibility or feasibility level, as appropriate, that include the application of modifying factors. Such studies demonstrate that, at the time of reporting, extraction could reasonably be justified.

Steps towards further implementation of the inaccessibility concept

In order to implement the inaccessibility concept in function of sustainable resource management, one may start from existing methods and data. In this section, we discuss material flow analysis and life cycle assessment as relevant methods, followed by a section that looks into the available data.

Rethinking MFA schemes to bring human compromising actions forward

When looking into MFA practice today, there is good ground to embed flows into inaccessible stocks. Highlighting flows to inaccessible stocks in Sankey diagrams could take advantage of MFA's strong visualization capabilities and convey the inaccessibility issue to many stakeholders. It must be said that some MFA practices already partially embed inaccessible stocks. In their handbook of material flow analysis, Brunner and Rechberger (2016) demonstrate the common practice of including emissions into the environment (e.g. into the planetary boundary layer for atmospheric emissions) and landfilling, which is confirmed in a recent review by Graedel (2019), although landfilling is assigned as a flow to the environment in the latter document. In an MFA of aluminium, copper and iron for the EU, the transfer to hibernating stocks in the technosphere covers both landfills and tailings (Passarini et al., 2018). Recently, Helbig et al. (2020) covered four inaccessible stocks: environmental dissipation, tailings, downcycling and landfilling. The authors put them under one single umbrella term, dissipation, whereby they considered tailings and landfilling as transfers to the environment.
To the best of our knowledge, MFAs with a systematic visualization of flows towards the six inaccessible stocks have not been reported. One main reason is that the stock at the user is usually not differentiated in terms of stock-in-use versus stock-hoarded; the quantification of the stock-hoarded is challenging and information is not widely available. Additionally, abandoning is not considered, most probably because of a lack of quantitative information. Fig. 2 shows the human activities and related transfers to inaccessible stocks, situating five stocks within the technosphere and one in the environment. The figure is in principle at the global level, as resources and their inaccessibility require a global perspective given their tradability; however, a similar scheme can be developed at the regional or national level on the condition that trade is represented.

Rethinking cause-and-effect chains for the Area of Protection Natural Resources in LCA

Natural resources are an Area of Protection (AOP) in LCA for which it has long been questioned what exactly we aim to protect (Dewulf et al., 2015). Recently, the Life Cycle Initiative, hosted by UN Environment, established an expert task force on "Mineral Resources" to review the existing methods. The task force classified the existing life cycle impact assessment methods that deal with resources into four groups: depletion methods, future effort methods, supply risk methods and thermodynamic methods. Berger et al. (2020) mention that the ADP (abiotic depletion potential) model is valid and that it has also been recommended by several initiatives. However, the authors acknowledge that the method does not distinguish between the part of the resource extraction that is occupied for current use (but can be available for other uses in the future) and the part that is "dissipated" into a technically and/or economically unrecoverable form. The discussion in the paper by Berger et al. (2020) states that mineral resources are not "lost" for human use when extracted from nature into the technosphere, as long as they can be reused, recycled, or recovered in some way. According to the authors, resources are only "lost" if converted to an "irrecoverable" state. Hence, the identification and quantification of "irrecoverable" states or of actions that compromise recoverability or accessibility are exactly the key subject of the current paper and could be an important step forward in improving LCIA methods for mineral resources. It must be clear that this is not straightforward, as the compromising actions in Table 1 are flows within the technosphere, except for emitting to the environment. Hence, only the quantification of the compromising action dissipation into the environment offers an immediate potential to model and characterize the inaccessibility associated with an elementary flow, i.e. a flow between technosphere and ecosphere. This has been elaborated by van Oers et al. (2020) with the Environmental Dissipation Potential (EDP) as a characterisation factor for the environmental dissipation of resources. For the other five compromising actions, which are associated with flows within the technosphere, classical LCIA modeling that typically starts from elementary flows is far more challenging. An exercise is presented in Fig. 3:

- The compromising actions affect elementary flows as in consequential life cycle thinking, since compromising actions make resources inaccessible for the demand to renew or expand the functional stock at the user;
- By consequence, the demand has to be met by virgin supply.
- That means that mining has to deliver beyond the expansion of the in-use stock, as it also needs to fuel the increase of the stocks hoarded, abandoned, landfilled, dissipated into the environment, dissipated into the technosphere, or put into mining wastes such as tailings.
- That means that an elementary flow of resource use, considered to start within the ecosystem where the resource is appropriated by humans and where it is considered fully accessible, is characterized by a fraction that is made inaccessible along its further life cycle. If, hypothetically, 50% of the mined metal A ends up deposited in inaccessible stocks along its further fate in the technosphere, then 0.50 tonne (X tonne) is made inaccessible per tonne extracted (Y tonne). If for a certain metal A the tonnages that go into inaccessible stocks along the value chain in the technosphere (e.g. 0.50 tonne inaccessible/tonne extracted) are double those of a metal B (0.25 tonne inaccessible/tonne extracted), that means that the elementary flow of A is associated with a higher contribution to inaccessibility than that of B. This higher contribution to inaccessibility for metal A is to be attributed not only to mining via tailings, but clearly also to more compromising actions at manufacturing, use and end-of-life processing due to emissions, landfilling, downcycling, hoarding and abandoning.
- A step further in developing characterization factors may lie in differentiating the compromising actions, which can differ from one metal to another. Indeed, if the management within society for a metal C leads to the same tonnage made inaccessible as for metal A (both 0.50 tonne inaccessible/tonne extracted), metal C can contribute less to resource inaccessibility if it has a higher share of hoarding and a lower share of dissipative flows compared to metal A. To aggregate the different degrees of inaccessibility, the estimated duration of inaccessibility (Z years) can be used as a starting point, cf. Fig. 1. If the inaccessibility of A is fully due to dissipation into the environment with a duration of 500 years, and the inaccessibility of C is fully due to hoarding with an estimated duration of inaccessibility of 2.5 years, then the inaccessibility of A is to be characterized as 200 times that of C, i.e. an inaccessibility of (0.50 tonne inaccessible × 500 years of inaccessibility)/tonne extracted = 250 tonne.years of metal inaccessibility per tonne of A extracted, versus an inaccessibility of (0.50 tonne inaccessible × 2.5 years of inaccessibility)/tonne extracted = 1.25 tonne.years per tonne of metal C extracted (a short numeric sketch of this comparison is given below).

The unit tonne.years represents a certain mass made inaccessible (X tonne) for a certain time (Z years). It might not be a unit that is easily graspable intuitively; however, it has some similarities with land use characterization in LCA. Therein, land use is typically expressed in terms of m².years, reflecting the occupation or accessibility for its owner or user and at the same time quantifying the inaccessibility for other users. Both land occupation and mass occupation/inaccessibility involve an interchangeability of time and of what is occupied or made inaccessible: years and m², and years and kg, respectively.
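The following is a minimal sketch of the A-versus-C comparison above; the fractions and durations are the hypothetical values given in the text.

```python
# Worked example of the tonne.years inaccessibility characterization
# sketched above, using the hypothetical metals A and C from the text.

def inaccessibility_cf(frac_inaccessible, duration_years):
    """Characterization factor in tonne.years of inaccessibility
    per tonne extracted: X tonne made inaccessible for Z years."""
    return frac_inaccessible * duration_years

# Metal A: 0.50 t/t made inaccessible by environmental dissipation (500 y)
cf_a = inaccessibility_cf(0.50, 500)   # 250 tonne.years per tonne extracted
# Metal C: 0.50 t/t made inaccessible by hoarding (2.5 y)
cf_c = inaccessibility_cf(0.50, 2.5)   # 1.25 tonne.years per tonne extracted

print(f"CF(A) = {cf_a} t.yr/t, CF(C) = {cf_c} t.yr/t, ratio = {cf_a / cf_c:.0f}x")
# -> CF(A) = 250.0 t.yr/t, CF(C) = 1.25 t.yr/t, ratio = 200x
```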
A time unit is also common for assessing impacts on ecosystem services, such as for land use and toxicity, where impacts can be long-term (such as for persistent chemicals) or limited in time in the case of short-lived compounds. Clearly, the reasoning here sets out some first proposals on how to characterize resource inaccessibility within an LCA context and its Area of Protection Natural Resources. Further on, inaccessibility does not only result in extra primary sourcing to be characterized under the Area of Protection Natural Resources: extra primary sourcing consequently also leads to other effects, e.g. energy needs that can contribute to global warming.

The data requirements: what do we have so far?

In order to understand the magnitude of actions that compromise accessibility, data on the flows of emitting, landfilling, tailing, downcycling, hoarding and abandoning should be available for various abiotic resources like metals. They should be available for the system under study, be it at macro-scale (globally, nationally), meso-scale (sector level) or micro-scale (specific production and consumption chains). To the best of our knowledge, there is no published study that comprises the systematic quantification of the six compromising actions for one system for one abiotic resource such as a specific metal. However, information on tailings, landfilling and emissions is typically available in material flow analysis, see e.g. Graedel (2019) and Kral et al. (2019). These flows indeed clearly flow from one subsystem into another in Sankey diagrams. Less obvious is downcycling, as the 'flow' stays within the subsystem of stocks within the technosphere. It is not common practice in MFA to differentiate this dispersed state, as MFA typically targets quantification, usually without specification of concentration, chemical speciation, separability or recoverability that could assist in assessing (in)accessibility. The delineation between accessibility and inaccessibility depends on the design: Ciacci et al. (2015) conceptually differentiate three fractions in products that are theoretically all potentially recyclable: a fraction that is functionally recycled (hence accessibility is continued), next to two fractions that are made inaccessible and hence lost by design: downcycled and currently not recyclable at all. Recently, Helbig et al. (2020) have brought forward dissipation into the technosphere by introducing a subsystem 'Other Materials' to point to losses to other materials, e.g. as contaminants in other material cycles. It allowed them to quantify four compromising actions for 18 metals, i.e. emissions, tailings, landfilling and dissipation into the technosphere. Equally challenging in MFAs is hoarding, where materials are at the user and where MFA typically does not differentiate between in-use and hoarded, even though in-use and hoarded stocks may be in physically separated locations within households. Hoarding is rather studied as a subject on its own, see e.g. Thiébaud. The quantification of abandoning and the historical build-up of abandoned stocks may be the most challenging. To the best of the authors' knowledge, studies that systematically address abandoning in the context of materials management and its contribution to inaccessibility have not been brought forward. The work of Swedish researchers on copper stocks in power grids in urban environments is a very rare exception (Krook et al., 2011; Krook et al., 2015; Wallsten et al., 2015).
When we look at which compromising actions are addressed by public bodies that deal with resource management, we observe that UNEP points to tailings and environmental dissipation at use in its visualization of metal cycles (UNEP, 2011). The latest UNEP Global Resources Outlook Report (Oberle et al., 2019) touches upon emissions, downcycling and landfilling for several reasons, mainly from an emissions and toxicity point of view, not systematically from the perspective of making or keeping resources accessible. The public body that studies compromising actions most systematically may be the European Commission, as part of its Raw Materials Initiative. In this context, Raw Materials System Analysis (MSA) systematically studies flows into tailings and landfills within the EU for dozens of raw materials; see also the next section. Apart from MFAs, there is a lot of information at the micro-level in LCA work, in particular within life cycle inventories of thousands of products and processes, see e.g. the databases owned by ecoinvent (Switzerland) and Thinkstep (Germany). This vast bottom-up information offers quantification possibilities for dissipation into the environment. For the other compromising actions like landfilling and tailings, the information needed to quantify the flows is in principle embodied there. But here too, dissipation into the technosphere by downcycling, hoarding and abandoning is not covered. In summary, there are no studies or databases that quantify the six compromising actions fully. However, there are various sources that cover compromising actions like tailings, landfilling and dissipation into the environment. Equally, information on hoarding is available, although it typically stands separately. Dispersion into the technosphere is less obvious, and abandoning is certainly a challenge to quantify.

An exploratory case study: cobalt in the EU

In the context of the calculation of the criticality of raw materials for the EU, the EC-JRC and Ghent University have made a raw materials system analysis (MSA) for cobalt (Matos et al., 2020a). In essence, MSA studies apply the basic principles of MFA to material systems within the geographical scope of the European Union, or an EU member state.

Fig. 4. Flows of cobalt associated with the renewal of and addition to the functional stock at the EU user (products at user, in use). In order not to overload the figure, only key flows are labeled: (1) flows that compromise accessibility; (2) the product flow; and (3) trade flows. Identified actions that lead to inaccessibility are environmental dissipation (DISS), hoarding (HOARD), landfilling (LANDF), tailings (TAIL) and dispersion into the technosphere by downcycling (DOWN). For the sake of simplicity of the figure, the destination of the refinery wastes is landfilling, although in practice they may be stored differently. Similarly, extraction may lead to storage of waste other than tailings. The system boundary contains the stocks and flows of cobalt within the EU, excluding those dispersed into the environment and the technosphere. Markets are colored in orange, industrial operations in blue, materials in use in green, inaccessible stocks in grey, and natural deposits in yellow.
They provide a material flow picture of a particular raw material in the EU, with life cycle stages like extraction, processing, manufacturing, use, collection and recycling; with stocks like tailings, landfills and in-use stocks; and with import and export of the raw material embedded in various commodities, e.g. primary raw materials, processed materials, products, and products at end-of-life. The methodology is generic and is explained by BIO by Deloitte (2015) and in a recent report (Matos et al., 2020b). The MSA methodology and its utilization for the case in this paper are further documented in Appendix 2.

For the exploratory case, the EU MSA cobalt study was used as a starting point. Cobalt is an important metal in many applications, such as in hard metals, magnets and, increasingly, in batteries (Godoy León and Dewulf, 2020). The goal was to quantify the human actions that lead to the generation of inaccessibility of cobalt along the value chain, associated with the renewal of the functional stock at the user in the EU and the net addition to it, for the year 2016. The MSA has been reworked in two stages. In the first stage, the MSA flow scheme has been reconfigured. A first reconfiguration concerns the markets: the raw materials market has been split into a primary and a secondary raw materials market, and the market in between manufacturing and use has been split into a new products market and a new scrap (i.e. scrap from manufacturing) market, while the extracted materials market and the end-of-life products market have been kept. Secondly, the stock at the user has been split into a stock of products at user, in use, and a stock at user, end-of-use. Next, for the processes of the primary supply, extraction has been kept, but the refining processes have been separated from operations that process end-of-life products and scrap. For the operations related to secondary materials, collection has been merged with recycling into end-of-life products and scrap treatment. This leads to the scheme represented in Fig. 4, which allows highlighting flows that lead to inaccessibility because of processes within the EU, with a clear separation of the operations of primary and secondary raw materials generation, next to the trade from and to the non-EU at the respective markets. By doing so, all compromising actions of Table 1 are captured except abandoning; this latter one is presumed to be of minor importance in the case of cobalt, based on the understanding of its applications (Godoy León and Dewulf, 2020). From the analysis and available data, the main activities that lead to inaccessibility have been identified and quantified (see Appendix 2). With respect to dissipation into the environment (DISS), dissipation at the user is key; dissipation at the other stages is considered negligible in comparison. Known dissipation pathways at the user are industrial applications in catalysts and hard metals. Dispersion into the technosphere stems mainly from downcycling (DOWN), occurring at end-of-life treatment and at manufacturing. Equally, the cobalt MSA study allows an estimation of hoarding (HOARD = A-B): the net increase of cobalt embedded in end-of-life products stored at the user. Further on, landfilling (LANDF) by end-of-life treatment, manufacturing and refining operations, and the production of tailings (TAIL) by extraction operations, can be assessed.
In the second stage, the compromising actions associated with the renewal of and net addition to the functional stock at the EU user that take place outside the EU, through net imports, have to be factored in. This can be done by mirroring the processes within the EU. Indeed, in the end, the renewal of and the addition to the functional stock within the EU rely on materials extracted within the EU and on imported extracted materials, both with their associated actions that lead to inaccessibility. In this way, the inaccessibilities taking place within and outside the EU that are associated with the processing of the EU end-of-use stock have been calculated. Overall, based on the law of conservation of mass, extraction (EXTR) provides the net addition to the functional stock (NAFS) and to the inaccessible stocks (INACCESS), i.e. tailings (TAIL), landfilling (LANDF), dissipation into the environment by emissions (DISS), dispersion into the technosphere by downcycling (DOWN), and hoarding by the user (HOARD):

EXTR = NAFS + INACCESS = NAFS + TAIL + LANDF + DISS + DOWN + HOARD

The results of the calculations are presented in Fig. 5. The scheme indicates that 30% of the extracted materials are net-added to the functional stock, whereas 70% compensate for additions to inaccessible stocks due to tailings (21.3%), landfilling (31.2%), downcycling (11.6%), dissipation (1.4%) and hoarding (4.3%). When the results are compared to the simplified Sankey diagram of the corresponding MSA study, the reader should first be aware that the system under study is different, in the sense that Fig. 5 represents all flows, within and outside the EU, that are associated with operations leading to the net addition to the in-use stock within the EU. This is different from the MSA studies, which look at the geographical entity: processes within the EU are studied whether the final use is in the EU or abroad via trade. Fig. 5 highlights the limited fraction of what is extracted worldwide that goes into the EU in-use stock, as there are important associated additions to inaccessible stocks, both within and outside the EU, at landfills and tailings and through downcycling and hoarding. The results are remarkable in the sense that society as a whole does not benefit from more than two thirds of the extracted cobalt, due to actions that make it inaccessible. Hence, there seems to be huge potential for improvement through research and innovation, as well as through policy and legal instruments. For the EU, for instance, the reduction of inaccessibility due to landfilling, the biggest contribution at 44.7%, could take advantage of economic instruments to reduce landfilling, as for example proposed in Waste Framework Directive 2018/851, or of more ambitious collection targets for cobalt-rich equipment, for example in WEEE Directive 2012/19/EU. Apart from setting collection targets, behavioral change in sorting by households and better pretreatment after collection could lead to improvements for WEEE with small items like cobalt-rich batteries (e.g. mobile phones, portable media players, etc.). Further on, policies that offer better techno-economic conditions leading to higher extraction efficiencies and less tailings (30.6% contribution) could be put forward. More high-quality recycling instead of downcycling (16.6%) could be ensured by setting specific high-quality recycling targets for certain raw materials contained in specific products; such targets are currently under discussion in the revision of various EU policies such as the Waste Battery Directive and the End-of-life Vehicles Directive.
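A minimal sketch reproducing the mass balance above. The fractions of extraction are those reported for the cobalt case; the per-action shares of total inaccessibility (44.7% landfilling, 30.6% tailings, 16.6% downcycling in the text) follow directly from them, up to rounding.

```python
# Mass balance for the cobalt case: extraction feeds the net addition
# to the functional stock (NAFS) plus all inaccessible stocks.
# Fractions of total extraction as reported in the text.
fractions = {
    "TAIL": 0.213,   # tailings
    "LANDF": 0.312,  # landfilling
    "DOWN": 0.116,   # downcycling into the technosphere
    "DISS": 0.014,   # dissipation into the environment
    "HOARD": 0.043,  # hoarding at the user
}
nafs = 0.30  # net addition to the functional stock

inaccess = sum(fractions.values())          # ~0.70 of extraction
assert abs(nafs + inaccess - 1.0) < 0.01    # EXTR = NAFS + INACCESS

# Share of each compromising action in the generated inaccessibility:
for action, frac in fractions.items():
    print(f"{action}: {100 * frac / inaccess:.1f}% of inaccessibility")
# -> LANDF 44.7%, TAIL 30.5%, DOWN 16.6%, HOARD 6.2%, DISS 2.0%
```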
The energy and mobility transition could take advantage of the mitigation of raw materials inaccessibility, with cobalt as a key example, enabling socio-economic benefits in line with the 2030 UN Sustainable Development Goals. Equally, if inaccessibilities could be reduced, mining would only need to deliver the net addition to the functional stock, which would mean a reduction of the activities and associated impacts by about a factor of three. As we learn from the Global Resources Outlook Report (Oberle et al., 2019), where extraction processes contribute 50% to global carbon emissions and even 80% to biodiversity losses, reducing the human activities that lead to inaccessibilities is a key but hidden mechanism to be tackled in the context of sustainable development.

From the section above on estimating the duration, we may differentiate amongst the compromising actions because of their difference in degree of inaccessibility. Based on the minimum, maximum and best estimates of the duration of the different actions, we obtained their contributions in terms of the tonne.years over which they make cobalt inaccessible. These numbers allow the calculation of the contributions of the compromising actions in percentages. The results are summarized in Table 3. When the estimated durations are taken at the minimum and best estimate, hoarding and emissions into the environment have a minor contribution, i.e. below 1% and 10%, respectively. Far more contributing are tailings, landfills and dispersion into the technosphere by downcycling: they make up at least 90% of the generated inaccessibilities. Amongst these, downcycling dominates, based on the minimum and best estimates of the duration. Analysing from an elementary flow point of view as in LCA, the results show that there is an elementary flow of 35.9 ktonne of cobalt associated with the renewal and extension of the in-use stock in the EU in 2016, i.e. 10.8 ktonne net-added to the functional stock and 25.1 ktonne flowing into inaccessible stocks. The compromising actions make the technosphere as a whole generate an inaccessibility of 3754 ktonne.years, based on the best estimate. This means that about 100 tonne.years of inaccessibility are generated per tonne of cobalt extracted. This may be a basis in LCA to characterize the inaccessibility generated within the technosphere as a function of the quantities extracted from the environment, and can be used as an indicator for the AoP Natural Resources.

Table 3. Estimations of the contributions to the inaccessibility of cobalt due to compromising actions associated with the renewal and extension of the functional stock in the EU in 2016, expressed in ktonne, ktonne.years (for minimum, best estimate and maximum according to Fig. 1) and % contribution to ktonne.years (for minimum and best estimate according to Fig. 1; at maximum it is undefined given the infinite ktonne.years for both dispersed stocks).

Conclusions

With respect to their sustainable management, the concepts of "running out" of or "depleting" metals and minerals are no longer as common and dogmatic as they used to be: there is a growing understanding that they do not vanish through human activities. Rather, their accessibility and the continuation of their accessibility are emerging issues, especially with regard to growing needs, including those for the energy and mobility transition. Recent work in this context has mainly pointed to dissipation as an important human phenomenon that compromises accessibility.
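A short sketch of how the headline characterization number follows from the cobalt totals reported above:

```python
# Deriving the cobalt characterization factor reported in the text.
extracted_kt   = 35.9    # ktonne elementary flow, EU in-use stock renewal, 2016
nafs_kt        = 10.8    # ktonne net-added to the functional stock
inaccess_kt    = 25.1    # ktonne flowing into inaccessible stocks
inaccess_kt_yr = 3754.0  # ktonne.years generated (best-estimate durations)

assert abs(extracted_kt - (nafs_kt + inaccess_kt)) < 1e-9  # mass balance

# tonne.years of inaccessibility per tonne extracted:
cf = inaccess_kt_yr / extracted_kt
print(f"{cf:.0f} tonne.years per tonne extracted")
# -> ~105, the "about 100 tonne.years" quoted in the text
```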
The current work has brought forward a fairly comprehensive set of six human compromising actions: emitting, landfilling, tailing, downcycling/dispersing into the technosphere, hoarding and abandoning. It became equally clear that the associated inaccessibilities and degrees of inaccessibility differ. As there is no science or technology to measure and quantify the degree of inaccessibility thoroughly, a proxy was identified: this work made an estimate of the duration of the inaccessibility, sometimes based on quite reliable information, e.g. for hoarding, but in many cases on estimates with high uncertainty. The work has also shown that the aforementioned compromising actions are not systematically considered in sustainable resource management at public bodies, nor in the sustainability assessment toolbox. Nevertheless, it is obvious that current tools like MFA and, to some extent, LCA may be a good basis to address resource (in)accessibility, although some further development is certainly needed. In public management, it turns out that the European Commission, with its MSA studies, has good ground to address resource accessibility. The cobalt case study took advantage of the corresponding MSA study and allowed a quantification of five out of the six flows that impact the accessibility of cobalt as a result of use within the EU. Further on, the concept of accessibility and the identification and quantification of actions that compromise accessibility may offer new potential for a more sustainable management of metals. Rather than measuring how much we keep in the loop by means of dedicated circular economy indicators, the current approach points to opportunities to do better by reducing compromising actions. The six actions identified demonstrate that the improvement of accessibility may require a multitude of actions across the value chain and along the full life cycle of materials: at primary production, at manufacturing, at use and at end-of-life management. Finally, the elaboration of the concept with the EU cobalt case study can be seen as an eye-opener: 70% of extracted cobalt ends up in inaccessible stocks. In other words, inaccessibility can lead to about a tripling of the environmental impact and costs associated with the virgin supply chain (see the cobalt case study), as this supply chain has to compensate for the generated inaccessibilities. Not only is the generated inaccessibility remarkable, but the associated surplus extraction of primary stocks to meet the continued growing demand also brings economic, environmental and social consequences with it.

Disclaimer

The views expressed in the article are personal and do not necessarily reflect an official position of the European Commission.

Credit author statement

We prefer not to outline the individual contribution of each author.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix 2. Background information on the MSA methodology and its utilization for the case study

A2.1. Background information on the MSA methodology

The reader is referred to the EC-JRC report on the material system analysis (MSA) specifications (EC, 2020c). Below, the general MSA scheme is presented in Fig. A2.1. The stock and flow parameters involved are listed in Table A2.1.
A2.2.1. Calculation of extracted materials and inaccessibilities of the primary supply chain of the stock within the EU

The input to the stock in use is M3. Extracted materials and inaccessibilities stem from D1.1 within the EU, but globally this should be corrected with a factor M3/D1.1. However, as the EU is an exporter of secondary raw materials for manufacturing, the global supply from primary origin is to be reduced; see further. This means that the factor becomes M3/(D1.1 - fsec) = Fprod, so that globally:

M2' = M2 · Fprod
M2'primary = M2primary · Fprod
D1.11' = D1.11 · Fprod
D1.4' = D1.4 · Fprod
D1.5' = D1.5 · Fprod

M2 (globally primary) comes from the market and is not only supplied by EU refining. This means that, globally, the amount of refined material has to account for the trade in both the refining and manufacturing stages, and has to be recalculated with the factor Fprod.

A2.2.2. Inaccessibilities associated with the secondary value chain

The functional stock within the EU leads to a net flow for EOL processing E1.6. However, the EOL processing within the EU also handles a net import F1.2 + C1.4 - F1.1 and new scrap D1.5'; this latter one is negligible. This means that the inaccessibilities and the delivered secondary raw materials have to be adjusted accordingly. The secondary raw materials delivered by EU EOL processing that stem from products used in the EU, G1.1' + G1.2' + G1.3', go to the secondary raw materials market, where there is a net export C1.2 - D1.9. Hence the flow to manufacturing from secondary origin stemming from products used in the EU is higher than M2 secondary, i.e. by a factor M2/(M2 + (C1.2 - D1.9)). This means that the contribution to EU manufacturing M2 increases by 15%, and hence the virtual import ratio M3/D1.1 drops to M3/(D1.1 - fsec) = M3/D1.1'.

A2.2.3. Inaccessibilities at the user: hoarding

In the elaboration of the MSAs, estimates were made of additions to the end-of-life stock at the user, i.e. hoarding. From the results with hoarding in different applications, a total of 1533 tonnes has been estimated. From the final MSA, the Net Addition to the Stock (NAS) at the user equals M3 - (E1.6 + E1.5) = 12,354 t. As we learn from the estimates that 1533 t are no longer functional, it means that the Net Addition to the Functional Stock (NAFS) = 12,354 - 1533 = 10,821 t. This means 87.6% functional and 12.4% non-functional at the user. As the total stock in use E1.1 equals 334,134 t, and if we assume the same ratio of functional stock (FS) to total stock as NAFS/NAS, i.e. 0.876, this gives a total FS of 292,701 t, next to 41,433 t non-functional. This means an increase of both by 3.7% in one year.

A2.2.4. Overall quantification of inaccessibilities

After implementation of the calculations, it can be calculated how much extraction is needed and how much inaccessibility is generated, associated with the renewal and extension of the functional stock. Overall, extraction does not only provide the net addition to the functional stock, but equally compensates for the inaccessibilities generated by tailings, landfilling, downcycling, environmental dissipation and hoarding:

INACCESS = TAIL + LANDF + DOWN + DISS + HOARD
EXTR = INACCESS + NAFS

A2.2.5. Final calculation

The abovementioned procedure has been implemented. A double-check of the global extraction between a calculation based on the modified MSA and the overall mass balance EXTR = INACCESS + NAFS shows a gap of less than 7%.
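A brief sketch reproducing the hoarding arithmetic of A2.2.3; the input tonnages are those given above, and small differences from the text's 292,701 t arise only from rounding the 0.876 ratio.

```python
# Reproducing the hoarding calculation of A2.2.3 (tonnes of cobalt, EU, 2016).
nas      = 12_354   # Net Addition to the Stock at the user: M3 - (E1.6 + E1.5)
hoarding = 1_533    # estimated addition to the end-of-life stock at the user
e1_1     = 334_134  # total stock at use

nafs = nas - hoarding                  # 10,821 t net addition to functional stock
functional_share = nafs / nas          # ~0.876 -> 87.6% functional

fs = e1_1 * functional_share           # ~292,700 t functional stock
non_functional = e1_1 - fs             # ~41,400 t non-functional (hoarded)

print(f"NAFS = {nafs} t, functional share = {functional_share:.1%}")
print(f"FS = {fs:,.0f} t, non-functional = {non_functional:,.0f} t")
print(f"annual increase: {nafs / fs:.1%} functional, {hoarding / non_functional:.1%} hoarded")
# -> both stocks grow by about 3.7% in one year, as stated in the text
```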
2021-05-04T22:06:16.523Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "5e345ceba705dd6ad1dfea2577425842abddcb8a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.resconrec.2021.105403", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "6c3a9a7e723493e6bffc051fb0c547f113bf1d64", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
248154044
pes2o/s2orc
v3-fos-license
De novo Creation and Assessment of a Prognostic Fat-Age-Inflammation Index "FAIN" in Patients With Cancer: A Multicenter Cohort Study

Background and Aims: Malnutrition is highly prevalent and is related to multiple impaired clinical outcomes in cancer patients. This study aimed to de novo create an objective, nutrition-related index specifically for prognostic purposes in oncology populations.

Methods: We performed a multicenter cohort study including 14,134 cancer patients. The prognostic impact of each baseline characteristic was estimated by calculating Harrell's C-index. The optimal parameters reflecting the nutritional and inflammatory impact on patients' overall survival were selected to develop the fat-age-inflammation (FAIN) index. The associations of the FAIN with the nutritional status, physical performance, quality of life, short-term outcomes and mortality of patients were comprehensively evaluated. Independent external validation was performed to further assess the prognostic value of the FAIN.

Results: The study enrolled 7,468 men and 6,666 women with a median age of 57 years and a median follow-up of 42 months. The FAIN index was defined as: (triceps skinfold thickness + albumin) / [age + 5 × (neutrophil count/lymphocyte count)]. There were significant associations of the FAIN with the nutritional status, physical performance, quality of life and short-term outcomes. The FAIN also showed better discrimination performance than the Nutritional Risk Index, the Prognostic Nutritional Index and the Controlling Nutritional Status index (all P < 0.05). In multivariable-adjusted models, the FAIN was independently associated with a reduced death hazard both as a continuous variable (HR = 0.57, 95%CI = 0.47–0.68) and per one standard deviation (HR = 0.83, 95%CI = 0.78–0.88). External validation in a multicenter lung cancer cohort (n = 227) further confirmed the prognostic value of the FAIN.

Conclusions: This study created and assessed the prognostic FAIN index, which might act as a feasible option to monitor the nutritional status and help develop intervention strategies to optimize the survival outcomes of cancer patients.

INTRODUCTION

Cancer is a huge threat to human health, with an estimated 19.3 million global incident cases and almost 10.0 million deaths annually (1). Despite the recent introduction of new treatment options (2,3), the poor prognosis of many cancers remains largely unchanged, and the number of new cases is predicted to increase significantly in the foreseeable future (1,4). Therefore, novel diagnostic, therapeutic and management strategies have been continuously sought, and multimodal cancer care is being emphasized in current oncology practice (5,6). Oncology patients frequently experience reduced food intake, weight loss, physical inactivity, metabolic changes and systemic inflammation, which have been ascribed to the chronic consumptive nature of the malignancy itself and/or the side effects of various anti-cancer therapies (7,8). Thus, they are at particularly high risk for malnutrition compared to other patient groups (9). Additionally, cancer-related malnutrition is often linked to other nutrition status-related conditions such as cachexia and sarcopenia (5,10–12). These conditions can independently or jointly lead to an impaired quality of life (QOL) (13–15), reduced treatment tolerance (16), increased postoperative complications (17), delayed rehabilitation of organ function (18) and a shortened overall survival (7,19,20).
Previous studies have estimated that 10–20% of cancer deaths can be attributed to malnutrition rather than the cancer itself (21,22). However, malnutrition is often underestimated (23), misclassified (24), or left untreated (25) in oncology populations. To address these challenges, the European Society of Clinical Nutrition and Metabolism (ESPEN) recommends in its guidelines that all cancer patients should be evaluated regularly for the risk or presence of malnutrition to guide subsequent intervention strategies (5,6). Of the validated approaches used to screen for the risk or assess the severity of malnutrition, the Nutritional Risk Screening 2002 (NRS2002) (26) and the Patient-Generated Subjective Global Assessment (PG-SGA) (27) are the most widely used tools in Chinese oncology patients (15,19,28). The Global Leadership Initiative on Malnutrition (GLIM) (11), a set of ESPEN-endorsed guidelines aiming to unify the diagnosis of malnutrition in patients with a wide spectrum of diseases, has also been garnering increasing interest from the nutrition community (7,8,14,15,17,19,20,28). In addition to these questionnaire- or expert opinion-based tools, several nutrition-related indices have also been implemented to assess the nutritional status of patients, such as the Nutritional Risk Index (NRI) (29), the Controlling Nutritional Status (CONUT) index (30) and the Prognostic Nutritional Index (PNI) (31). These scoring indices were derived from objective laboratory blood tests with/without anthropometric parameters, and have shown significant prognostic value in oncology populations (31–33). However, to our knowledge, although previous studies indicated that cancer patients can have different malnutrition phenotypes, including different anthropometric parameters compared to other patient groups (34,35), there is not yet an objective, prognosis-oriented, nutrition-related and simple-to-obtain index designed specifically for oncology populations.

In the present study, conducted in a large-scale, multicenter oncology cohort, we created a prognostic fat-age-inflammation (FAIN) index using a data-driven, outcome-oriented algorithm. We then compared the prognostic performance of the FAIN with five existing scoring systems and comprehensively investigated the associations of the FAIN with other patient characteristics, including the nutritional status, physical performance, QOL and short-term outcomes. Finally, we analyzed the associations of the FAIN, as both a continuous and a categorical variable, with cancer mortality.

Population and Design

This was a nationwide, multicenter cohort study. All patients were derived from the Investigation on Nutrition Status and its Clinical Outcome of Common Cancers (INSCOC) project of China, which was registered online at https://www.chictr.org.cn (ID: ChiCTR1800020329). The full design of the INSCOC project has been described previously (36), and the detailed inclusion and exclusion criteria are shown in Supplementary Table 1. For the present study, we included 14,908 patients aged over 18 years who were diagnosed with cancer and/or were hospitalized for anti-cancer treatment from November 2011 to April 2019 at multiple centers in four geographical regions (east, south, west and north) of China. After excluding 509 patients with non-solid malignancies and 265 patients with an unclear pathological diagnosis, we finally included 14,134 patients with 17 types of cancer as the study population (Supplementary Figure 1).
An independent cohort including 355 esophageal cancer patients diagnosed from December 2014 to November 2019 (not included in the INSCOC project) in our institution was used as a validation set to evaluate the prognostic performance of the FAIN. The study was approved by the Ethics Committees of all participating institutions, and all data were analyzed anonymously. All participants in the study provided written consent for the scientific use of their data, and the principles of the Declaration of Helsinki were followed.

Data Acquisition

The following information was collected at baseline within 48 h of admission by a project-trained researcher via a face-to-face interview or physical examination: age, sex, smoking (active tobacco smoker before admission), alcohol drinking (once a week or more frequent alcohol consumption in the past 1 year, regardless of amount), tea consumption (once a week or more frequent tea consumption in the past 1 year, regardless of amount), comorbidities, height, weight, body mass index (BMI), mid-arm circumference (MAC, non-dominant arm), triceps skinfold thickness (TSF, non-dominant arm), handgrip strength (HGS, non-dominant hand), mid-arm muscle circumference (MAMC), calf circumference (CC, left calf), unintentional weight loss within and beyond 6 months, the NRS2002 score (≥3 indicating nutritional risk) (26), the PG-SGA score (27), the Karnofsky Performance Status (KPS) score (37) and the European Organization for Research and Treatment of Cancer QLQ-C30 score (QLQ-C30) (38). In the present study, the BMI was also categorized as underweight (<18.5 kg/m²), normal (18.5 to <24 kg/m²), overweight (24 to <28 kg/m²), or obese (≥28 kg/m²) according to the Chinese recommendation (39). The detailed approaches and instruments used to obtain the anthropometric information (height, weight, BMI, MAC, TSF, HGS, MAMC, CC and weight loss) have been described previously (40), and are also shown in Supplementary Table 2. The gastrointestinal symptoms within the PG-SGA scale were extracted and analyzed independently. For the QLQ-C30, the global QOL scale was used in the present study, with a higher score indicating a better overall QOL. The disease and treatment information, including the cancer site, clinical stage, differentiation grade, anticancer therapies used, serum indices, length of hospital stay, presence of an intensive care unit stay, length of hospitalization, cost and thirty-day death, was retrospectively retrieved from electronic medical records. Serum indices were all measured at the clinical laboratories of the participating institutions using fasting blood samples drawn upon admission.

Follow-Up and Main Outcome

Patients were followed annually after enrollment via telephone or face-to-face interviews to obtain survival information. All-cause mortality was the main outcome of the present study, and the overall survival time was calculated as the time interval (months) between the first admission and the patient's date of death, the date of the last valid follow-up, or April 2020.

Creation of the Fat-Age-Inflammation (FAIN) Index

A data-driven, outcome-oriented approach was used to create an index reflecting the nutritional and inflammatory impact on the patients' overall survival. First, Harrell's C-index was calculated to assess the prognostic impact of each baseline parameter.
Then, the TSF (mm) (20), age (years), neutrophil-to-lymphocyte ratio (NLR, as an inflammatory marker; same unit for the neutrophils and lymphocytes, such as number/L) (15) and serum albumin (g/L, as an inflammatory and prognostic marker) (41) were manually selected to develop the FAIN index, since they showed the highest C-index within their respective categories. The prototypic definition of the FAIN was: (TSF + albumin)/(age + NLR). To maximize the prognostic value, the optimal formula of the FAIN was explored by multiplying each component by different coefficients (the other three parameters remained unchanged during the tuning of one parameter), and the corresponding C-index was observed. The FAIN index was finally determined to be: (TSF + albumin) / [age + 5 × (neutrophil count/lymphocyte count)].

Statistical Analysis

Continuous data are shown as medians [interquartile range] and were compared using Wilcoxon's rank-sum test. Categorical data are expressed as numbers (percentage) and were compared using a Chi-squared test. Two-variable correlations were examined using Spearman's rank correlation test. The baseline NRI, PNI and CONUT indices were also calculated to compare their prognostic value with the FAIN, according to the following approaches: NRI = 1.519 × serum albumin (g/L) + 41.7 × (present weight / usual weight); PNI = 10 × serum albumin (g/dL) + 0.005 × total lymphocyte count (per mm³); the CONUT includes the serum albumin level, total lymphocyte count and serum total cholesterol level. The detailed scoring method of the CONUT has been described previously (31). A restricted cubic spline was used to flexibly analyze potential non-linear associations of the continuous FAIN index with survival. The potential non-linearity was tested using a likelihood ratio test, with P < 0.05 indicating a non-linear relationship. We also categorized the continuous FAIN as a dichotomous variable to define the low and high groups, using the median value and the optimal stratification (OS)-defined threshold. The OS method selects the threshold for a continuous factor by maximizing the between-group log-rank statistic for the overall survival (42). We also categorized the FAIN in tertiles, quartiles and quintiles to partially minimize the limitations associated with variable dichotomization. The associations between the FAIN categories and survival were evaluated using Kaplan-Meier curves and log-rank tests. Multivariable-adjusted Cox proportional hazards models were used, and hazard ratios (HR) with 95% confidence intervals (95%CIs) were calculated to estimate the association between the FAIN and mortality. We used the Schoenfeld individual test and Kaplan-Meier curves to statistically and visually assess the proportional hazards assumption for each adjusted covariate (Schoenfeld test P > 0.05 indicates that the proportional hazards assumption is satisfied). The linearity assumption between covariates and outcome was confirmed by Martingale residual plots. Incremental models with increasing numbers of covariates were created. A dual-direction stepwise method based on the Bayesian Information Criterion (BIC) was used to help select the significant covariates. Model 1 was an unadjusted crude model. Model 2 was adjusted for the age at baseline. Model 3 was adjusted for age, sex and the BIC-screened independent predictors, including the tumor stage, radical surgery, curative chemotherapy, serum prealbumin level, HGS, the NRS2002 score, length of hospital stay and cancer type.
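A minimal sketch of the three index formulas given above, written in Python; the patient values used in the example are hypothetical.

```python
# Nutrition-related indices as defined above; all input values are hypothetical.

def fain(tsf_mm, albumin_g_l, age_years, neutrophils, lymphocytes):
    """FAIN = (TSF + albumin) / [age + 5 * (neutrophil/lymphocyte)]."""
    nlr = neutrophils / lymphocytes
    return (tsf_mm + albumin_g_l) / (age_years + 5 * nlr)

def nri(albumin_g_l, present_weight_kg, usual_weight_kg):
    """Nutritional Risk Index."""
    return 1.519 * albumin_g_l + 41.7 * (present_weight_kg / usual_weight_kg)

def pni(albumin_g_dl, lymphocytes_per_mm3):
    """Prognostic Nutritional Index."""
    return 10 * albumin_g_dl + 0.005 * lymphocytes_per_mm3

# Hypothetical 57-year-old patient:
print(f"FAIN = {fain(12.0, 40.0, 57, 3.5e9, 1.4e9):.2f}")  # compare to threshold 0.82
print(f"NRI  = {nri(40.0, 62.0, 65.0):.1f}")
print(f"PNI  = {pni(4.0, 1400):.1f}")
```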
Model 4 was adjusted for all variables in Model 3, plus the calf circumference, PG-SGA score, KPS score and the global QOL score. Sensitivity analyses were performed to test the robustness of the multivariate Cox regression models by excluding the patients who died within the first 3 (Model 5), 6 (Model 6) and 12 months (Model 7), respectively. Multiplicative interactions were tested by adjusting for the cross-product terms. Covariates showing a statistically significant multiplicative interaction (P < 0.05) were defined as potential effect modifiers, and subgroup analyses were performed in different strata of these variables to evaluate the modification of the associations observed in the overall population. The proportional hazards assumption and the linearity assumption were also confirmed for the Cox regression models obtained through stratification, using the approaches described above. All tests were two-sided, and P < 0.05 was regarded as statistically significant. All analyses were performed using R (version 3.6.3, http://www.r-project.org).

The FAIN and Patient Characteristics

The baseline patient characteristics, stratified by the median-dichotomized FAIN, are presented in Table 2. Compared to the FAIN low group, the FAIN high group was associated with a higher value/rate of total protein, pre-albumin, albumin, transferrin, alanine transaminase, cholesterol, triglycerides, high density lipoprotein, low density lipoprotein, hemoglobin, red blood cells, platelets, lymphocytes, weight, BMI, MAC, TSF, HGS, CC, radical surgery, postoperative adjuvant chemotherapy, KPS score, global QOL score, NRI score and PNI score, and with a lower value/rate of age, male sex, smoking, alcohol drinking, tea consumption, hypertension, diabetes, coronary heart disease, chronic biliary disease, anemia, urea nitrogen, creatinine, total bilirubin, direct bilirubin, glucose, white blood cells, neutrophils, C-reactive protein, NLR, height, MAMC, weight loss within and beyond 6 months, curative radiotherapy, curative chemotherapy, other anticancer therapy, NRS2002 score, PG-SGA score, gastrointestinal symptoms (no appetite, nausea, vomiting, constipation, dry mouth, things taste funny or have no taste, dysphagia, feeling full quickly, abdominal pain and other symptoms) and the CONUT score. As expected, the cancer types, clinical stage and differentiation grade also differed between the low and high FAIN groups. Additionally, a univariate analysis of the short-term outcomes showed that a higher FAIN was associated with a shorter length of hospital stay, fewer incidents of an intensive care unit stay, lower costs during hospitalization and a lower rate of thirty-day mortality (all P < 0.05).

Correlations

Sex-specific Spearman's rank correlation tests were performed to assess the degree of relevance of the associations of the continuous FAIN with the BMI, weight loss beyond 6 months, CC, HGS, C-reactive protein, NRS2002 score, PG-SGA score, KPS score and global QOL score (Figure 1). The results were similar for both sexes, showing a positive correlation between the FAIN and BMI (Figure 1A), CC (Figure 1C), HGS (Figure 1D), KPS score (Figure 1H) and global QOL score (Figure 1I), and a negative correlation between the FAIN and weight loss (Figure 1B), C-reactive protein (Figure 1E), NRS2002 score (Figure 1F) and PG-SGA score (Figure 1G; all P < 0.05).
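The study itself ran its Cox models in R; the following is a minimal sketch of the same kind of multivariable Cox proportional hazards model, using the Python lifelines package on synthetic data. The variable names and all values are illustrative, not the study's data.

```python
# Sketch of a multivariable Cox proportional hazards model of the kind
# described above, fitted on synthetic data with the lifelines package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "fain": rng.normal(0.8, 0.2, n),       # hypothetical FAIN values
    "age": rng.integers(30, 80, n),
    "stage": rng.integers(1, 5, n),
})
# Synthetic survival times: a higher FAIN is built in as protective.
hazard = np.exp(-1.5 * df["fain"] + 0.03 * df["age"] + 0.3 * df["stage"])
df["time_months"] = rng.exponential(60 / hazard)
df["event"] = (df["time_months"] < 42).astype(int)  # censor at end of follow-up
df.loc[df["event"] == 0, "time_months"] = 42

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()  # exp(coef) for "fain" is the HR; here it comes out below 1
```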
Univariate Survival Analysis

A restricted cubic spline analysis showed that the continuous FAIN index was associated with a reduced mortality risk (P < 0.001), and no significant non-linearity was observed for this relationship (P = 0.489). The optimal threshold of the FAIN was 0.82, as determined by the OS method (high: ≥0.82; low: <0.82, Figure 2A). Kaplan-Meier curves demonstrated that patients with a higher FAIN had better overall survival than those in the lower groups, regardless of the categorization approach used (all P < 0.001). For the FAIN tertiles, quartiles and quintiles, the tests for trend indicated that the FAIN was monotonically associated with better overall survival of the patients (all P for trend <0.001, Figures 2B–F).

Multivariable Survival Analysis

The results of the multivariable Cox proportional hazards models on the associations between the FAIN and mortality are shown in Table 3. These associations were all sustained in the sensitivity analyses (Models 5–7), and tests for trend showed that the positive associations between the FAIN and overall survival were all "dose-dependent" (all P for trend <0.001).

Interaction and Subgroup Analysis

All covariates were screened for potential interactive effects, and the patient sex, tumor stage, curative chemotherapy, prealbumin, cancer type, PG-SGA category, KPS score and global QOL score showed statistically significant interactions with the FAIN (all P < 0.05). The fully-adjusted models were then repeated in different variable strata to study the effect modifications (Table 4).

Independent Validation

The prognostic impact of the FAIN was further assessed in an independent multicenter lung cancer cohort (n = 227) which was not used for the derivation of the FAIN. The baseline characteristics of the validation cohort are shown in Supplementary Table 4.

DISCUSSION

This was a large-scale, observational cohort study including 14,134 patients with 17 cancers at multiple centers in China. Based on a data-driven, outcome-oriented approach, we developed a new prognostic index, the FAIN, that integrates information on inflammation and nutrition. To our knowledge, this is the first study to date that proposes such an index specially designed for oncology populations. We demonstrated that this index effectively reflects the nutritional status, physical performance and QOL of the patients, and is associated with the short-term clinical outcomes of patients. We also performed parallel comparisons indicating that the FAIN index has better discrimination performance for predicting cancer mortality than the existing NRI, PNI, CONUT, NRS2002 and PG-SGA systems in the study population. We revealed that the FAIN is independently associated with the death hazard. Importantly, the components used to create the FAIN index are simple to obtain, and the association between the FAIN and mortality is linear-like and robust to time. Additionally, we validated the performance of the FAIN in an independent lung cancer cohort. These findings suggest that the FAIN might act as a feasible, cost-effective option to monitor the nutritional status of patients and help develop intervention strategies to optimize the survival outcomes of cancer patients.

A distinct feature of the FAIN index is the inclusion of a fat mass assessment, which is not included in most existing scoring systems such as the NRI, PNI and CONUT.
The PNI and CONUT consist only of serum laboratory indices, while the NRI also considers some anthropometric changes of patients (e.g., weight loss) (31). However, the weight loss parameter is often obtained from patient-reported usual/historic weights, which is subject to recall bias that can cause instability when calculating the NRI. In contrast, the fat mass assessment (through measurement of the skinfold thickness) is a relatively objective parameter, which is included in the PG-SGA (27), a nutritional assessment tool dedicated to oncology patients that is currently recommended for use in China. A previous study conducted in a large Chinese oncology cohort also indicated that a low TSF was associated with poorer nutritional status and had a greater prognostic impact on cancer mortality than muscle parameters such as the CC and MAMC (40). A lower TSF was also associated with an increased death hazard and actually enhanced the prognostic value of GLIM-diagnosed malnutrition in lung cancer patients (20). Similarly, a positive association between a low TSF and mortality was also reported in patients with cancer cachexia (43) and in terminally ill cancer patients (44). These results are consistent with our observations in the present study and further support the inclusion of the TSF in the FAIN. Additionally, the inclusion of an objective measurement of body fat might partially explain the superior prognostic value of the FAIN compared to the other three scoring systems in cancer patients. However, the impact of the fat mass on cancer mortality can vary based on the cancer type (40). For example, higher adiposity was associated with higher all-cause and cancer-specific mortality in breast cancer patients (45). In the present study, the favorable impact of the FAIN on patient survival was also attenuated in breast cancer patients (Table 4), which might suggest that the FAIN would be of limited use in breast cancer patients. Intriguingly, a recent study conducted in a large dataset has shown that, paradoxically, in patients with HER2-positive advanced breast cancer, a higher BMI was independently associated with improved survival (46). Since we lack data about HER2 expression in our patients, this possible link cannot be assessed in our study cohort, and future studies with gene test results are needed to clarify the role of the FAIN in greater detail among breast cancer patients.

Another related concern is the potential impact of the sex difference in TSF on the prognostic performance of the FAIN. To examine this, we calculated sex-specific FAIN thresholds (male < 0.69 or female < 0.82) based on the OS method to define a low FAIN in an exploratory analysis. However, this led to a statistically significant reduction of Harrell's C-index (0.592 vs. 0.601, P = 0.002) compared to the current threshold (<0.82) calculated for the overall study population. Therefore, pragmatically, we used the gender-neutral threshold of 0.82 to maximize the prognostic value of the dichotomized FAIN in the present study. Nevertheless, the optimal approach to define a low FAIN should be re-evaluated when the FAIN index is used for non-prognostic purposes in future studies. In an exploratory analysis, we also calculated thresholds for the most prevalent cancers, lung cancer (value = 0.83) and colorectal cancer (value = 0.68), based on the OS method. However, given the limited scope of this study, future studies need to evaluate the prognostic value of these thresholds in specific cancer groups.
Malnutrition still cannot be defined using a universally accepted framework (11,19,26–28), largely due to factors including the diversity of indices used for its identification, differing parameter thresholds, racial/disease-specific differences, complicated etiology and even the continuously evolving but inconsistent understanding of this issue (11,19,28). Of note, fat mass assessment was not included as a component in the recent GLIM criteria that were proposed for assessing malnutrition (11). However, depletion of the fat mass is prevalent in cancer patients, especially among those undergoing chemotherapy/radiotherapy or having cancer cachexia (10), and has been correlated with impaired clinical outcomes (47,48). A recent study conducted in a Chinese lung cancer population also indicated that adding the TSF can help assess nutritional status and enhance the prognostic value of GLIM-defined malnutrition (20). In support of that study, our present findings also suggest that fat mass assessment might be helpful during the assessment of cancer patients for malnutrition. However, since the present study did not consider the use and impact of nutritional intervention, future studies are still needed to explore whether the inclusion of a fat mass assessment during the nutritional assessment would help guide subsequent nutritional intervention in cancer patients.

There are several potential limitations of this study that must be noted. First, we used a data-driven approach to derive the FAIN index, so the associations between the FAIN and cancer mortality might not be generalizable to other populations. Future validation of the FAIN is needed in all types of cancer and in populations with characteristics different from those of the group where it was developed, before it is put into routine clinical or research applications. Second, some of the associations we observed in the multivariable survival analysis may be explained by reverse causality. However, we performed sensitivity analyses by excluding those patients who died within the first 3, 6, and 12 months, and the results were robust to time, which should help to reduce this probability. Third, unmeasured confounders are possible in all observational studies. However, we comprehensively collected the baseline characteristics of patients and adjusted the covariates based on both statistical and scientific approaches to minimize this possibility. Fourth, since Asian populations have anthropometric differences compared with their Western counterparts (12), the generalizability of the FAIN should be re-evaluated when applied to non-Asian oncology populations. Fifth, we proposed the median value and an outcome-oriented threshold to transform the FAIN into a dichotomous variable (low vs. high). However, dichotomizing continuous variables can lead to a reduction of information (49). Although additional statistical approaches (analyzing the FAIN as a continuous variable, per standard deviation and in percentiles) might provide additional insights, future assessment is still required to determine the optimal grouping algorithm/risk intervals of the FAIN to facilitate its clinical use. Sixth, although inexpensive and simple, the TSF is less accurate for measuring body fat than parameters obtained from more advanced technologies such as dual-energy X-ray absorptiometry. Future studies need to assess the accuracy of the FAIN index in greater detail.
Seventh, we did not have data on other treatments (besides anticancer therapies), which might confound the associations we observed in the present study. Eighth, since not all of the continuous variables (such as the weight loss percentage) were normally distributed, we conservatively used non-parametric statistical approaches to test between-group differences despite the large sample size. Parametric methods may perform better for some normally-distributed continuous variables. Future studies need to address the above issues.

In conclusion, this study de novo created and assessed a prognostic index, the FAIN, that integrates information on the patient's fat mass/nutrition, age and inflammation. This index effectively reflects the nutritional status, physical performance and QOL of oncology patients, and is associated with improved short-term clinical outcomes. The FAIN has better discrimination performance for predicting cancer mortality than the existing NRI, PNI, CONUT, NRS2002 and PG-SGA systems. The impact of the FAIN on cancer mortality is linear-like, independent and robust to time. These findings suggest that the FAIN might act as a feasible, simple-to-obtain option to monitor the nutritional status and help develop intervention strategies to optimize the survival outcomes of cancer patients.

DATA AVAILABILITY STATEMENT

The datasets generated and/or analyzed during the current study are not publicly available in order to protect patient confidentiality, but are available from the corresponding author on reasonable request.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Daping Hospital. The patients/participants provided their written informed consent to participate in this study.
Determinants of primary healthcare providers’ readiness for integration of ART services at departmental levels: A case study of Lira City and District, Uganda Background Decreasing or flattening funding for vertical HIV services means that new and innovative ways of providing care are necessary. This study aimed to assess the determinants of readiness for integration of Antiretroviral Therapy (ART) services at the departmental level among primary healthcare providers (PHCPs) at selected health facilities in Lira District. Methods A cross-sectional survey employing mixed methods approaches was conducted between January and February 2022 among 340 PHCPs at selected health facilities in Lira district. An interviewer-administered questionnaire was used to collect quantitative data. Quantitative data were analyzed using Stata version 15 and presented as proportions, means, percentages, frequencies, and odds ratios. Logistic regression was used to determine associations of the factors with readiness for ART integration at a 95% level of significance. Thematic analysis was used to analyze qualitative data. Results The majority, 75.2% (95% CI: 0.703–0.795), of the respondents reported being ready for the integration of ART services. PHCPs who were aware of the integration of services and those who had worked in the same facility for at least 6 years had higher odds of readiness for integration of ART compared with their counterparts [aOR = 7.36; 95% CI = 3.857–14.028, p < 0.001 for awareness, and aOR = 2.92; 95% CI = 1.293–6.599, p < 0.05 for duration at the current facility]. From the qualitative data, the dominant view was that integration is a good thing that should be implemented immediately. However, several challenges were noted, key among them limited staffing and drug supplies at the facilities, coupled with limited space. Conclusions The study reveals a high level of readiness for the integration of ART services at departmental levels among primary healthcare providers. Notably, being knowledgeable about integration and having spent at least six years at the current health facility were strong determinants of readiness for the integration of ART services in resource-limited settings. In light of these findings, we recommend that policymakers prioritize the implementation of training programs aimed at upskilling healthcare workers. Furthermore, we advocate that a cluster randomized controlled trial be conducted to evaluate the long-term effects of this integration on overall health outcomes. Introduction Sub-Saharan Africa (SSA) bears the greatest burden of HIV/AIDS, accounting for the majority of new infections and deaths worldwide [1]. In Uganda, approximately 1.4 million people, both adults and children, are infected with HIV [2]. Many African countries, including Uganda, have adopted the UNAIDS 95-95-95 target of ensuring that 95% of people are aware of their HIV status, 95% of people diagnosed with the virus receive antiretroviral therapy, and 95% of those on therapy have an undetectable viral load [3]. However, the UNAIDS target of near-universal 95% antiretroviral therapy (ART) access for people living with HIV (PLHIV) by 2030 faces several potential challenges [3]. The current success and effectiveness of the vertical programs have been attributed to donor funds [4]. In Uganda and across the region, HIV services are provided by vertical programs operating separately from other health system functions [5].
Despite the successes and achievements already realized, several critical questions about the sustainability of this approach are emerging. Moreover, there is evidence of decreasing or flattening funding for vertical HIV services [6]. Additionally, many patients are still lost at various stages of the continuum of care [7]. This, therefore, implies that new and innovative ways of providing care are vital. A possible approach that has shown positive outcomes in improving HIV/AIDS services along the continuum of care is horizontal integration at the point of service delivery in the various departments [8]. Several actors have also highlighted the need for the integration of ART management services at various health facility department levels [9]. Furthermore, evidence of the medical and public health benefits of HIV/AIDS service integration supports this approach [10]. Integration in the context of this study refers to the act of combining ART services at departmental levels with other non-HIV-specific services, such as out-patient departments (OPD), primary healthcare (PHC), in-patient departments (IPD), maternal, newborn, and child health (MNCH) services, sexual and reproductive health (SRH), and family planning (FP), among others. The determinants can be classified into health system factors (availability of resources, funding, personnel, and infrastructure), provider factors (the knowledge, skills, and attitudes of healthcare providers towards ART services, such as their ability to provide counseling and support), and community factors (social and cultural norms, stigma, and discrimination), all of which can influence the integration of ART services in health facilities [11-14]. Analysis of these determinants will enable health systems to identify gaps and challenges that need to be addressed to effectively integrate ART services. The benefits of integration include lowered costs at primary health clinics [15] and improvements in HIV service uptake, health outcomes, and outcomes related to other services [8]. In addition, this novel idea would benefit patients with comorbidities in terms of continuity of care and increased access to HIV/AIDS services. It can also allow healthcare providers to share the workload for all patients, resulting in more efficient use of resources and reduced patient waiting time [16]. Available evidence also suggests that integration could reduce discrimination by "normalizing" HIV services [17]. When viewed from the providers' and funders' perspective, integration has the potential to improve processes and resource allocation [9,18]. It is also important that people get the care they need, when they need it, in ways that are user-friendly, thereby achieving desired results and value for money. Currently, healthcare workers' readiness for the integration of ART services at the departmental level is inadequately understood, which could pose challenges to its implementation. This study, therefore, assessed the determinants of primary healthcare providers' readiness for the integration of ART management services at departmental levels in health facilities in Lira district, to inform policy formulation and guide successful implementation. Design The study adopted a descriptive cross-sectional design and was conducted using mixed methods approaches to data collection and analysis.
The qualitative method enabled us to document the achievements and challenges faced by primary healthcare workers in the provision of ART services at the selected health facilities. The quantitative objectives were to assess the level of readiness of the health facilities regarding the integration of ART services and its determining factors. The choice of design was informed by the fact that a comprehensive understanding of readiness would guide policy formulation. Furthermore, a proper understanding of the achievements and challenges faced by the health facilities would guide directions for future service provision. Study setting and population The study was conducted in Lira District, located about 340 kilometres north of Kampala, Uganda's capital city. Four health facilities, comprising Lira Regional Referral Hospital (LRRH), PAG Mission Hospital, Ogur Health Centre IV, and Amach Health Centre IV, were selected for the study. LRRH is the major referral hospital in the Lango sub-region, with a total bed capacity of 254, running different units as per the government health systems structure. PAG Mission Hospital is a faith-based, private, not-for-profit institution registered with the Ugandan MoH at level five. Meanwhile, both Ogur and Amach are public health centre IVs administered by the MoH, with OPD, IPD, a theatre, maternity services, and medical departments. The study population consisted exclusively of primary healthcare providers (PHCPs) from the above-selected facilities. The PHCPs involved in the study included medical doctors, pharmacists, clinicians, nurses, midwives, laboratory technicians, and counsellors. Health workers who were not full-time employees of the participating health facilities were excluded. Sample size and sampling criteria We purposively selected the four health facilities in the greater Lira district, considering their numbers of departments. A total of 340 PHCPs who were available during the interview days were recruited for the quantitative survey using a census approach. We also purposively recruited PHC providers for key informant interviews depending on their roles within the departmental units and their work experience (at least 6 months at work). We structured focus group discussions consisting of multi-disciplinary PHC workers. Data collection and analysis Data were collected in January and February 2022. The study tools for interviews were modified from the WHO reproductive health readiness assessment hexagon tool [19]. Readiness was measured as a binary outcome. According to this tool, most of the determinants fall within the domains of readiness assessment, including needs, fit, resources, capacity, readiness, and evidence. The tool was modified and designed by the research team to fit the context of the study setting. The modified tool was pretested before actual data collection, and the information obtained was used to improve it. A total of 20 key informant interviews (KIIs) and four focus group discussions (FGDs) were conducted at the four selected health facilities by three experienced interviewers with social science backgrounds. The quantitative data collected were checked for completeness and later entered into SPSS version 23 with consistency checks to ensure correctness. The dataset was cleaned for out-of-range values and exported to Stata 15 (StataCorp, College Station, TX) software for analysis.
We conducted a descriptive analysis to determine the proportions of the different variables among the respondents' characteristics, such as place of residence, age, sex, and marital status. At the bivariate level, chi-square tests were performed to determine the association between the dependent and independent variables. Further, odds ratio analyses were used to compute the unadjusted associations between readiness for ART integration and the independent variables, including socio-demographic characteristics (such as age, marital status, level of education, sex, and occupation of participants). The results were expressed as odds ratios with 95% confidence intervals and a p-value < 0.05. Finally, variables with p < 0.2 in the bivariate analysis were considered for the multivariable analysis. Logistic regression was performed to arrive at a suitable model explaining the determinants of PHCP readiness for ART integration at the departmental level, with statistical significance set at p < 0.05 (a minimal sketch of this two-step strategy appears below, after the descriptive results). Qualitative data were transcribed and entered into NVivo version 12 software for onward analysis. A seven-step thematic analysis model by Clarke and Braun was used to analyse the qualitative data [20]. The steps include 1) transcription, 2) reading and familiarization, 3) coding, 4) searching for themes, 5) review of themes, 6) naming the themes, and 7) finalizing the analysis and interpretation of the results. A total of 15 themes and subthemes were developed from the data, including: capacity and lack of capacity, qualification, experience, staffing, preparedness and unpreparedness, knowledge gap, unique nature of ART services, need, fit, and evidence. To ensure the rigor and trustworthiness of the data, the research team employed member checking, triangulation, detailed transcription, and systematic planning and coding [21]. Ethical considerations The study protocol was reviewed and cleared by the Gulu University Research and Ethics Committee (GUREC-2021-173). Approval to conduct the study in Uganda was obtained from the Uganda National Council for Science and Technology (UNCST). Administrative permission was obtained from the Resident City Commissioner of Lira City, the Resident District Commissioner of Lira District, the Chief Administrative Officer, and the District Health Officer. Further permission was obtained from the heads of the selected health facilities, and written informed consent was sought from all respondents before interviews commenced. Socio-demographic characteristics of participants A total of 340 participants were interviewed using an interviewer-administered questionnaire. Nearly 70% of respondents were less than 35 years old, and a large proportion of them, 65.6% (223/340), were employed by the government of Uganda. The mean age of the respondents was 33.2 years (standard deviation = 8.8), with the majority, 55.3% (188/340), being male. The majority of respondents, 84.4% (287/340), were aware of service integration, and slightly more than half (174/340) had worked at their current health facility for 2 to 5 years. A significant proportion of the respondents, 75% (255/340), were trained in ART management, 67.9% (231/340) were currently engaged in ART management, and 83.2% (283/340) reported that the implementation team had the capacity and was ready for wider implementation of integration (Table 1).
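The two-step model-building strategy described above (bivariate screening at p < 0.2, followed by multivariable logistic regression reported as adjusted odds ratios) can be sketched as follows. This is a minimal illustration on simulated data: the variable names, effect sizes, and coding are assumptions, not the study's actual dataset or Stata code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated survey data (n = 340); names and effects are illustrative only.
rng = np.random.default_rng(1)
n = 340
aware = rng.integers(0, 2, n)                  # aware of integration
years = rng.integers(0, 10, n)                 # years at current facility
age = rng.normal(33, 9, n)
logit = -1.0 + 1.5 * aware + 0.15 * years      # built-in signal
ready = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"ready": ready, "aware": aware,
                   "years": years, "age": age})

# Step 1: bivariate screening -- retain predictors with p < 0.2.
candidates = []
for var in ["aware", "years", "age"]:
    m = sm.Logit(df["ready"], sm.add_constant(df[[var]])).fit(disp=0)
    if m.pvalues[var] < 0.2:
        candidates.append(var)

# Step 2: multivariable logistic regression on the retained predictors;
# exponentiated coefficients are the adjusted odds ratios (aORs).
final = sm.Logit(df["ready"], sm.add_constant(df[candidates])).fit(disp=0)
print(np.exp(final.params))       # aORs
print(np.exp(final.conf_int()))   # 95% confidence intervals
```

This mirrors how adjusted odds ratios and confidence intervals such as those in the Results are typically obtained, albeit here in Python rather than Stata.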
Readiness of the healthcare providers regarding the integration of ART services From the study, the majority, 75.2% (255/340), of the respondents reported being ready for the integration of ART services at the departmental level. Using the domains of readiness, a near-universal 94.4% (321/340) of the healthcare providers interviewed reported that integration meets the need for HIV management, and 81.8% (278/340) felt it fits the current guidelines of the Ministry of Health. A large proportion, 79.1% (269/340), of the study participants noted that their health facilities were prepared (Table 2). The qualitative aspects of readiness. From the qualitative point of view, participants were asked to state their level of readiness. Key themes included the capacity of the facility; preparedness; fit with current policy; evidence; resources; needs; and reasons for the views. Whereas respondents on the quantitative side generally reported being ready for integration, participants in both key informant interviews and focus group discussions expressed positivity with some reservations. In this study, we assessed healthcare facilities' readiness for ART service integration along these themes. The capacity to handle ART integration. Participants were asked to share their views on the health facilities' capacity to handle ART integration. According to their responses, participants understood capacity in terms of qualifications, experience, and staffing. Qualification. In their opinion, healthcare facilities generally have staff who are qualified to handle ART services. Participants mentioned that they have staff with degrees, diplomas, and certificates in various fields suitable for the services. "The qualification of the human resource within the ART clinic is very okay, we have the doctors and nurses; we have staff with degrees, diplomas, and certificates. Peer mothers who help us to follow up with clients in the communities" (FGD participant 1, Amach HCIV) "Human resources are qualified, if they are to be interested they would, because you know ART is all about the interest. The qualification is okay, we have the doctors, senior medical officers, and medical officers, and we have nurses." (KII 3, Amach HCIV) Experience. In addition, the staff are experienced enough, with some having worked in ART sections already. Besides, participants said that the number of staff at the moment is sufficient. Lack of capacity to handle ART service integration. Although there was a general feeling that the facilities had sufficient capacity to handle integration, a few participants held the opposite opinion. This was explained in terms of a knowledge gap in handling ART services and the unique nature of ART services, respectively. Knowledge gap. Such participants reasoned that there might be knowledge gaps among staff. According to them, this is because not everyone is trained in HIV-related matters, which would pose a serious risk of increased mortality. In addition, those who held this view argued that even for those who claim to be skilled and knowledgeable in HIV matters, the knowledge keeps changing because new things keep emerging. Some of them had the following to say in support of their views: "About the capacity, the staff are well qualified but only that some of them lack knowledge about certain services in some departments, like TB ward where it is very hard for someone working maternity to work there and vice versa" (KII 8, Amach HCIV) Unique nature of ART services.
Other participants argued that, despite staff qualifications, ART services are unique and some staff have not been trained in them; that is why they lack the knowledge to handle them. "Haa. . . people are qualified but you know HIV is a unique thing, first of all when you are at school, the kind of knowledge you get is quite different from what you get from the field, if this is going to be taken as a normal thing, it means we have to go into the curriculum that has to adapt and integrate this so that whoever is going to come out as a qualified medical staff would be able to handle HIV as any other condition" (FGD Participant 2, PAG Mission Hospital) Preparedness of the health facility for integration. Concerning readiness for ART service integration in terms of preparedness, participants understood it to mean the presence or absence of infrastructure, equipment, and space, among others. There were mixed opinions in equal measure between those who said they were prepared and those who said the facilities were not prepared for integration. Availability of infrastructure, equipment and space. Those who said that the facilities were prepared gave reasons such as the availability of infrastructure and the presence of equipment and space, including the welcoming nature of the staff. Participants reasoned that they would be able to accommodate the increased number of clients due to the available infrastructure. Furthermore, the space would allow them to serve the clients effectively. Below is what some of the participants said in support of their views: "If the integration is to be in place, the government is also aware, I think with that I may say the hospital may be prepared because all the required equipment must have been in place with the support from the government" (KII 10, PAG Mission Hospital) "I think for us in OPD here, since we have the next building, we are ready except we need that knowledge" (KII 12, Lira RRH) Unpreparedness. On the other hand, some participants who were opposed argued that the facilities are not yet ready, for the following reasons: "Inadequate drugs, inadequate infrastructures like buildings, beds for clients, bleeding machines, radiation machines, etc. Knowledge gaps among the health workers about ART services; fear among clients if they are to be mixed with normal people" (FGD Participants 1, 8, 5, 3, Lira RRH) "Not yet ready: Because of inadequate staff that facility has, inadequate building which is not enough for all the clients, knowledge gap among the few staffs available" (KII 15, PAG Mission Hospital) As can be observed from the above responses, both those who opposed and those who supported preparedness gave similar but opposite viewpoints. For instance, while some participants said they had the infrastructure, others said the infrastructure was not enough; the same applies to knowledge, which was also mentioned under capacity. Integration meets the needs of HIV management. In terms of need, participants were asked to express their views on whether they thought integration would meet the need for HIV management. As with preparedness for ART service integration, participants gave varied but equally opposed opinions. The view that the integration of ART meets the need for HIV management was held as strongly as the opposing one. Those who stated that there was a need gave examples of the availability of record systems, computer equipment, and data management systems, among others.
It is not clear, however, whether these are sufficient to establish the need for integration. Integration fits the current guidelines. In terms of fitness for integration, participants also gave varied views held in equal measure, with some saying the protocols and guidelines for integration would fit the current system. Furthermore, the existing staff and their qualifications would make integration fit without any challenges. In their own words, they said that. . . "Yes, it will fit and match, because it is going to expand knowledge into staffs and what we were already doing is not far from integrations so I believe the protocol and guidelines, we have doctors and nurses who will be working together" (KII 10, PAG Mission Hospital) "The current guidelines talk about the integration of services in the community, and also the facility. It fits because of that linkage and referrals" (KII 7, Lira RRH) "The system will fit without any problem because it is the same professionals operating and some trained medical workers" (KII 19, Ogur HCIV) Unfit. On the other hand, those opposed to fitness argued that the absence of guidelines and policies makes it difficult for integration to fit. The guidelines are supposed to explain how integration should be done and who should do what, where, when, and how; in their absence, it becomes difficult to operate. Interestingly, some of the reasons given against fitness were the same as those mentioned earlier for preparedness and capacity, including knowledge gaps, inadequate staff, and limited infrastructure, among others. In support of their views, some of the participants said that. . . "All, I see this facility is not yet ready for integration since we still don't have the guidelines for how we should work, the infrastructures are still very poor, we even lack the staff to promote the integration, clients themselves need to be sensitized on the integration of ART clinic with other department and how it will help them assess their services" (FGD participants, Lira RRH) "We don't have a specific guideline; we are using the national guideline adopted by the ministry of health, including integration. It has not been taken up seriously, there is a need to bring staff on board" (FGD Participants, PAG Mission Hospital). Evidence that integration can improve outcomes for HIV-positive clients. As regards evidence that the integration of ART services can improve outcomes for HIV-positive clients, participants also gave different opinions and views. From the data, however, the most pronounced opinion was that there would be evidence of integration improving the lives of HIV-positive clients. This would be seen in various ways, such as increased adherence resulting from a reduction in stigma, which would in turn lead to suppressed viral loads and hence improvement in the health status of positive clients. In addition, there would be evidence of earlier diagnosis, reduced waiting time and ease of service, and generally improved ART service delivery. In their own words, the participants said thus: "One thing that will come over is the treatment, it will improve clients' satisfaction, load, it will reduce waiting time for clients, retention will also be good, viral suppression as well" No tangible evidence. As noted earlier, a few of the healthcare workers still felt that there would be no tangible evidence of integration.
In their opinion, integrating ART services would produce negative evidence, such as a reduction in client turn-up leading to the loss of clients, due to the fear of stigma that would arise. There would also be evidence of wastage of ARV drugs, as some clients would not take them from the different units. In addition, it would cause discomfort for some clients who would not want to be known or seen by others. Below is what they had to say: "There will be lost clients, like when these people are mixed up there will be no way that you tell me that a patient will never get to realize that she is on ART services, because like in terms of dispensing, that is when I know that the mistake might come out" (KII 20, Ogur HCIV) "I don't know whether our patients will be comfortable here, I don't know whether they will go by waiting time, and yet the emergencies will also be coming in" (KII 12, Lira RRH) "I think this will discourage many clients to come. The number of clients will also reduce this is because of the loss of those clients, since some of them will go and never come back as a result of stigma" (FGD participants, Lira RRH) Inadequate resources. Finally, our data show a dominant view that the resources available at the facilities are not enough to warrant ART service integration. Participants from across diverse units and facilities agreed that the facilities have inadequate human resources and infrastructure, such as space, equipment, and machines. They also noted that even the existing staff are not fully trained to handle ART services, leaving them unready. Some of the participants said the following in their own words: "For now, services are suitable as you see the monitoring and evaluation bit of it. Ok resources like human resources is always a challenge, that section is even worse, sometimes they are not enough, space challenges, storage facilities, racks" (KII 17, Lira RRH) As can be seen from the above, it appears that those who felt that the resources were insufficient to warrant integration are the ones who do the actual work, while those who said the resources were sufficient tend to be the ones in charge and heads of units, who may be less in touch with day-to-day realities. Health systems determinants of the readiness of primary health care workers for integration of ART services at the departmental level In this study, healthcare providers who were aware of service integration had higher odds of readiness for the integration of ART compared with healthcare providers who were not aware, and the association was statistically significant [aOR = 7.36; 95% CI = 3.857–14.028, p < 0.001]. Our study shows that knowledge is very critical in the provision of healthcare to clients, and those who are aware are likely to welcome the idea. Healthcare providers who had worked in the same facility for at least 6 years had higher odds of readiness compared with those who had worked for less than 1 year, and this association was also statistically significant [aOR = 2.92; 95% CI = 1.293–6.599, p < 0.05] (Table 3). Discussion In resource-limited settings, integrating antiretroviral therapy (ART) services into departmental levels at healthcare facilities is an effective strategy for improving HIV care outcomes.
To the best of our knowledge, this is the first study assessing the determinants of primary healthcare providers' readiness for the integration of ART services at departmental levels in this setting. Healthcare provider readiness for integration reflects the disposition or ability of a healthcare provider, regardless of cadre, to offer ART and other clinical services at a single point of care so as to make the most of available resources. In our study, readiness was assessed as a composite (binary) outcome based on the WHO domains of readiness assessment, which included needs, fit, resources, capacity, readiness, and evidence. Regarding the readiness of PHC providers for the integration of ART services, results from our study show that a high proportion of PHC providers reported being ready for the integration of ART services into departmental levels. This finding is consistent with evidence from a study in which approximately 78% of staff were ready for the integration of routine rapid HIV screening in urban family planning (FP) clinics [22]. This may be explained by the fact that both studies were conducted in similar contexts. Therefore, integrating ART services should not be difficult, because the majority of PHC providers have been trained and many have previously participated in integrated PHC services such as TB and MCH. Should policymakers decide to integrate ART services, most healthcare providers would likely take up the policy change positively. Furthermore, in our study, the majority of staff reported that integrating ART services at the departmental level would improve ART therapy outcomes among PLHIV. Our findings are consistent with evidence from elsewhere, which found that integrating HIV services increased patient satisfaction, perceived quality of care, patient access to services, and patient health outcomes [23,24]. In addition, our findings indicate that integration may reduce stigma and discrimination, which is consistent with data from other studies suggesting that integration may reduce discrimination by 'normalizing' HIV services [17,25,26]. On the other hand, patient outcomes may also improve and costs may be lowered at primary health facilities [15] as a result of the integration of ART services at the departmental level. Therefore, the integration of HIV services is feasible and has the potential to improve health and health systems [27]. However, according to one study, there was no statistically significant difference in viral suppression between integrated and separate services [28]. Our study also revealed some concerns among respondents about integration increasing the workload of healthcare providers. This finding is consistent with evidence from studies suggesting that integration risks overburdening healthcare providers [5,29], especially where the prevalence of HIV is very high [23]. This could be attributed to increased client turn-up as a result of clients receiving all necessary services from the same point of care, which means more work for the staff. Furthermore, other studies have found insufficient human resource capacity to provide additional services [30], which could be difficult for an already overburdened health system. A plausible reason could be related to the contexts in which the studies were conducted. Other evidence, on the other hand, seems to suggest that one way to address these challenges is to integrate healthcare at all levels [31].
The concerned authorities should conduct massive recruitment and training if the integration of ART at the departmental level becomes policy. Training and capacity building are important for preparing PHC providers for the integration of ART services. In this study, our results show that the majority of primary care providers were trained in ART management, which is very important in the HIV care cascade. This is consistent with other studies' findings that competency-based capacity-building for various health worker cadres along the training continuum is effective [32]. Furthermore, for healthcare providers to provide multiple desired services, including ART, at a single point of contact, they must be adequately trained in all aspects [33]. It is worth noting that vertical HIV care had funding and numerous opportunities for staff training on HIV care [34]. Fortunately, most medical training institutions in Uganda provide HIV training as a crosscutting issue, something that would add value to ART integration. Finally, according to the findings of this study, many participants believed that the resources available at the facilities were insufficient to justify the integration of ART services. Other findings, however, show that integrating services can result in more efficient use of available resources, such as human resources, medical supplies, and drugs [23]. This means that integration is particularly important in resource-constrained settings where healthcare workers and medical supplies are scarce. Comparable studies have also noted that the success of integrated delivery models depends on a wide range of resources [23,35,36]. With the massive cuts in funding for the vertical HIV program, integrating ART, like other HIV-related services, may be a novel path. Whether or not policymakers decide to integrate ART services at the departmental level, success will be largely determined by the system and the resources available. Limitations Our study had several limitations. First, the few selected health facilities and the number of healthcare providers interviewed within them may not be representative of the reference population, restricting the generalizability of our findings. Second, we are unable to determine whether ART service integration at departmental levels can have a positive effect on the long-term outcomes of HIV-related care. Third, our study did not include lower-level health facilities that provide ART services, which could further limit the generalizability of our findings. Conclusion and recommendations Our study shows that the level of readiness for the integration of ART services at departmental levels is high among primary healthcare providers. Furthermore, the care providers were confident about a model of care for HIV patients that integrates ART services into departments in resource-limited settings. Policymakers should consider exploring the integration of ART services into departmental levels as a novel path to providing care while maintaining success, given the reduction in funding for vertical HIV care. We recommend that future studies be conducted to determine the acceptability of the ART-integrated model of care. Health systems need to invest in training health workers to provide comprehensive ART services and to ensure that adequate medical supplies and drugs are available.
Furthermore, to confirm our findings, a cluster randomized controlled trial should be conducted, together with assessments of the cost-effectiveness, quality of care, and long-term impact of the integration on health outcomes.
A topological formal treatment for scenario-based software specification of concurrent real-time systems Real-time systems are computing systems in which meeting their requirements is vital to their correctness. Consequently, if the real-time requirements of these systems are poorly understood and verified, the results can be disastrous and lead to irremediable project failures in the early phases of development. The present work addresses the problem of detecting deadlock situations early in the requirements specification phase of a concurrent real-time system, proposing a simple proof-of-concepts prototype that joins scenario-based requirements specifications with techniques based on topology. The efforts are concentrated on the integration of the formal representation of Message Sequence Chart scenarios into the deadlock detection algorithm of Fajstrup et al., based on geometric and algebraic topology. INTRODUCTION A predominant characteristic of real-time systems is concurrency. Concurrent systems are composed of concurrent tasks, processes or objects, and they typically manage shared resources, which creates a demand for predictability, flexibility and reliability [10] [5]. For the class of hard real-time systems addressed by this research, there should be mechanisms and policies that ensure consistency and minimize worst-case blocking and any unbounded or excessive run-time overheads. Since such aspects are strongly associated with the behavioral model of the system, techniques and tools that more appropriately express concurrency, distribution and parallelism are indispensable. The comprehension of concurrent systems is more difficult than that of sequential ones for various reasons. Perhaps the most obvious is that in a concurrent system all the different components are in independent states, and the number of combinations of states grows exponentially. Deadlocks can arise in these systems, meaning that no component can make any progress, generally because each is waiting for communication with others. Formal understanding and reasoning can help to establish properties of such a system. A formal specification of a system is concerned with producing an unambiguous set of system specifications that can be formally verified, so that requirements, as well as environment constraints and design intentions, are correctly reflected, thus reducing the chances of accidental fault injection. Recently, techniques based on algebraic topology [4] have been introduced into concurrency theory in order to deal with the high complexity of verifying and analyzing properties of concurrent real-time systems. There is an entire assemblage of well-studied topological techniques that could be used, with some adaptations, to formally prove properties of concurrent systems. However, given the very recent acknowledgement of and interest by computer scientists in topological techniques, there have been very few actual implementations of such ideas in real case situations. The use of formal specification is an important feature that adds great value at the initial stage of software development. However, a certain level of non-formalism is necessary at the beginning of development, when the requirements are not completely understood and some flexibility is essential for trying out design alternatives.
Scenario-based requirements specification, expressed using Message Sequence Charts (MSCs), has a visual and precise semantics and has increasingly been used by analysts for specifying the requirements of a software system [20] [8] [12] [2] [16]. MSCs have been a major topic of research and practice [18] [21] and their semantics has also been formalized [7] [6]. This work presents a simple method that maps scenario-based specifications, represented formally by MSCs, to a topological space in order to formally verify these specifications. The efforts are concentrated on the integration of the deadlock detection algorithm of Fajstrup et al. [15], based on topological techniques, with MSC scenarios, addressing the problem of detecting deadlock situations early in the requirements specification phase and proposing a simple "proof-of-concepts" prototype. This paper is organized in five sections. Section 1 is the introduction, followed by Section 2, which presents the approach of modeling possible deadlock scenarios using MSCs and process algebra. Section 3 establishes the necessary topological concepts for this work, and Section 4 describes the integration between the deadlock detection algorithm and the MSC models as a simple "proof-of-concepts" prototype. Finally, the conclusions and future prospects are summarized in Section 5. MSC as a graphical language for describing scenarios Message Sequence Chart is one of the most widespread approaches to documenting scenario-based specifications; it is relatively easy to use, has wide acceptance in industry, and is well suited for developing first approximations of the intended behavior of a system. Scenarios describe a sequence of events or activities [11] [13] [9] and refer to interactions between independent entities. A complete reference for the MSC language can be found in Recommendation Z.120 [7]. The formalization of MSCs in process algebra The ITU (International Telecommunication Union) Recommendation Z.120 [7] is the standardization of the MSC language. Due to its widespread use and popularity, the MSC semantics and static requirements have also been formalized [6] [17] [19]. The principal motivation behind this formalization is to offer a proper base for language users, avoiding ambiguities, inconsistencies and obscurities. The description of the semantics of an MSC uses process algebra, based on the algebraic theory of process description ACP (Algebra of Communicating Processes) [14]. An MSC is characterized by the sequence of events along each instance axis, and it is assumed that there is asynchronous communication between its instances. In order to present some aspects of the MSC formalization related to this work, process algebra theory will be avoided and the necessary concepts will be introduced through some simple examples, using the MSC graphical representation. These examples do not exhaustively show all the elements of the basic language, but only present the essential idea behind the formal semantics expressed in a process algebra sentence. An excellent tutorial about the formalization of MSC can be found in [19]. The MSC M1 in Figure 1 describes two instances P1 and P2, which have two communications with the environment and one communication between them. This MSC can be characterized by the two traces generated by P1 and P2. In process algebra, the semantics of P1 and P2 is:

P1 = out(P1, P2, m2) . in(env, P1, m1)
P2 = in(P1, P2, m2) . in(env, P2, m3)

where the operator "." is strict sequential composition.
As P1 and P2 operate in parallel, independently of each other, the semantics of the MSC M1 is:

(out(P1, P2, m2) . in(env, P1, m1)) || (in(P1, P2, m2) . in(env, P2, m3))    (1)

where the operator "||" is parallel composition. The parallel operator defines an interleaved execution of its operands. There is a basic static requirement which establishes that a message must be sent before it is received. Therefore, expression (1), after expansion, contains several traces that must be eliminated. In order to enforce this basic static requirement, the state operator λ is introduced. After applying λ, the semantics of the MSC M1 is established as follows:

out(P1, P2, m2) . ( in(env, P1, m1) . in(P1, P2, m2) . in(env, P2, m3)
                  + in(P1, P2, m2) . ( in(env, P1, m1) . in(env, P2, m3)
                                     + in(env, P2, m3) . in(env, P1, m1) ) )

where the operator "+" represents alternatives. The MSC M2 in Figure 2 describes two instances P1 and P2. As the instance P2 has a coregion, the ordering of its events is completely free; instead of the sequential composition operator, the parallel composition operator is therefore used. Description of possible deadlock scenarios There are two possible ways in which concurrency can arise from MSC scenarios: internal (within the MSC) and external (among different MSC scenarios). If deadlock conditions are detected early, changes in the system model can be made not only to eliminate them, but also to circumvent them in the future. Some considerations have to be made in order to represent possible deadlock scenarios using MSCs. The identification of processes, resources and messages must reflect situations where a certain number of processes share resources in a mutual exclusion regime. The processes send two types of messages to the resources: lock and unlock (release). When a process sends a lock message to a resource, it takes that resource exclusively for a certain time in order to perform some processing; afterwards, the same process releases the resource by sending the unlock message. The resources are passive instances that only receive input messages for locking and unlocking themselves. As a result of these considerations, each instance of an MSC will be either a process or a resource. Figure 4 shows a scenario that illustrates two processes P1 and P2 sharing a resource R1. The process algebra expression for possible deadlock scenarios According to the assumptions established in the previous section, an MSC that represents a concurrent scenario will have only two possible types of messages: lock and unlock. So, the general expressions in process algebra can be established as follows: messages sent by processes: out(process, resource, lock/unlock); messages received by resources: in(process, resource, lock/unlock). As the resources are considered passive instances, for which the order in which they receive the lock and unlock messages is not determined in concurrent systems, they are represented with a coregion. So, assuming that there are no lost messages, all the possible traces corresponding to all the ways a resource can be locked and unlocked are given by the parallel composition of its process instances. This assumption eliminates the necessity of applying the state operator λ. In addition, given that the resource instances will receive all the messages sent to them in any order, and considering the geometric and topological treatment that will be used later (Sections 3 and 4), there is no need to expand the whole process algebra expression for the semantics of the MSC.
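To make the interleaving semantics and the send-before-receive requirement above concrete, the following minimal sketch enumerates the interleavings of M1's two instance traces and filters out the ill-formed ones. The tuple encoding of events is an assumption made for illustration; it is not the Z.120 textual syntax.

```python
# Events encoded as (kind, sender, receiver, message); this encoding is an
# illustrative assumption, not the Z.120 textual syntax.
P1 = [("out", "P1", "P2", "m2"), ("in", "env", "P1", "m1")]
P2 = [("in", "P1", "P2", "m2"), ("in", "env", "P2", "m3")]

def interleavings(xs, ys):
    """All shuffles of xs and ys preserving each instance's internal event
    order -- the interleaving reading of the parallel composition."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    return [[xs[0]] + t for t in interleavings(xs[1:], ys)] + \
           [[ys[0]] + t for t in interleavings(xs, ys[1:])]

def well_formed(trace):
    """The basic static requirement enforced by the state operator:
    a message must be sent before it is received."""
    sent = set()
    for kind, src, dst, msg in trace:
        if kind == "out":
            sent.add((src, dst, msg))
        elif src != "env" and (src, dst, msg) not in sent:
            return False
    return True

for t in filter(well_formed, interleavings(P1, P2)):
    print(" . ".join(f"{k}({s},{d},{m})" for k, s, d, m in t))
```

Run on M1, the sketch prints exactly the three well-formed traces, i.e., the alternatives summed by "+" in the expansion above.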
Indeed, the semantics of the identified processes, written as a process algebra expression, contains all the information necessary to search for deadlock scenarios. There is a significant simplification in considering that the semantics of an MSC representing a possible deadlock scenario can be characterized solely by the semantics of its identified processes. The resulting process algebra expression is then simple to understand and easy to create. Figure 5 shows an example of an MSC that represents a possible deadlock scenario and the corresponding process algebra expression of its processes. Figure 5: MSC and its process algebra sentence. TOPOLOGICAL FORMAL TREATMENT In recent years, topological methods have been introduced into concurrency theory (e.g., [4]). Most notably, partial order reduction techniques based on topology have been developed to tackle the well-known "state-space explosion problem" [3]. Concurrency theory deals with a very large, although finite, space of states (a discrete space), whereas topology deals with the properties of geometrical figures that are preserved under continuous deformations. In order to apply continuous topological techniques to the discrete space of concurrency, the latter is represented as a subset of the Euclidean space R^n: the unit cube in n-space, I^n = I_1 x … x I_n, where I is the unit interval [0, 1]. Each coordinate axis corresponds to a process (from the set of n concurrent processes defined by the system). The set of coordinate points on each axis composes an ordered sequence of real numbers between 0 and 1, representing the scheduled actions (a transaction) that the given process will execute. For instance, consider a finite set of transactions acting upon a centralized database. Each transaction can be abstracted as a sequence of lockings (represented by P, according to Dijkstra's nomenclature [1]) and unlockings (V) of the database's shared resources (e.g., data records). The state of the database corresponds to a point in the n-cube; in particular, the initial state is the n-dimensional vector with coordinates (0, …, 0), whereas the final state is the (1, …, 1) vector. If we consider that only two transactions, T1 = {PaPbVbVa} and T2 = {PbPaVaVb} (where a and b label the shared resources being locked and unlocked), act upon the database, it is already clear from the geometrical representation that there will be three types of critical regions: unsafe, forbidden, and unreachable. An illustration of this two-transaction example is shown in Figure 6 (adapted from [15]). Hence, if a concurrent system, composed of a certain number of independent processes, shares one or several resources, a trajectory or path in the n-cube corresponds to a correct synchronization between the processes only if the path does not cross the critical regions mentioned above. The forbidden region is actually a "hole" in the n-cube that is inaccessible to the processes due to mutual exclusion. The unsafe region indicates a deadlock, whereas the unreachable region represents the set of impossible states of the system. Such a geometrical model of concurrency is referred to as a "progress graph". Given the existence of critical regions in the progress graph, it is intuitively clear that it is possible to identify different sets of equivalent paths, in the sense that they perform essentially the same scheduling.
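To make the two-transaction example above concrete, the following discrete sketch computes the forbidden region of the progress graph for T1 and T2 and flags grid points from which no progress is possible. It is a simplified, grid-based illustration written in the spirit of the algorithm of Fajstrup et al. [15], not a reproduction of their actual algorithm.

```python
# Two transactions over shared, mutually exclusive resources, written as
# sequences of P (lock) and V (unlock) actions -- the example from the text.
T1 = ["Pa", "Pb", "Vb", "Va"]
T2 = ["Pb", "Pa", "Va", "Vb"]

def held(trace, k):
    """Resources a process holds after executing its first k actions."""
    h = set()
    for act in trace[:k]:
        op, res = act[0], act[1:]
        if op == "P":
            h.add(res)
        else:
            h.discard(res)
    return h

n1, n2 = len(T1), len(T2)

# Forbidden region: grid states where both processes would hold the same
# mutually exclusive resource at once.
forbidden = {(i, j)
             for i in range(n1 + 1) for j in range(n2 + 1)
             if held(T1, i) & held(T2, j)}

def deadlocked(i, j):
    """True when the state is neither final nor forbidden, yet no step
    along either axis stays out of the forbidden region."""
    if (i, j) == (n1, n2) or (i, j) in forbidden:
        return False
    can_step1 = i < n1 and (i + 1, j) not in forbidden
    can_step2 = j < n2 and (i, j + 1) not in forbidden
    return not (can_step1 or can_step2)

print("forbidden:", sorted(forbidden))
print("deadlocks:", [(i, j) for i in range(n1 + 1)
                     for j in range(n2 + 1) if deadlocked(i, j)])
```

For T1 and T2 above, the sketch reports five forbidden grid points and a single deadlock point at (1, 1), the state reached after T1 has executed Pa and T2 has executed Pb; the unsafe region then consists of the states from which every monotone path is forced into such a point.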
If two or more execution paths can be continuously deformed into each other, then in topological terms they are homotopically equivalent. If a path cannot be deformed into another one, due to the presence of the forbidden region between them, then it performs a different scheduling. A progress graph is actually a topological space in which the points representing the states of the concurrent system are ordered globally through time. Thus, it is already qualitatively evident from simple examples that the "state-space explosion problem" can be naturally tackled by such a topological formalism [3], given that there is no need to traverse all execution paths to check given properties of the system. In particular, the existence of deadlocks can be geometrically determined by a simple algorithm developed by Fajstrup et al. [15]. INTEGRATING A DEADLOCK DETECTION ALGORITHM BASED ON TOPOLOGICAL METHODS TO MESSAGE SEQUENCE CHARTS The main philosophy behind the "proof-of-concepts" prototype proposed in this paper is the formal verification of the implementability of MSC specifications, with respect to deadlock scenarios, at an early stage of development of a concurrent system, based on ready-to-use topological concepts. Such integrability can be achieved in a relatively straightforward manner by recognizing that the fundamental actions of the semantics of each process identified in the MSC scenario (Section 2.3) are mapped onto one coordinate axis in the topological space, corresponding to one process from the set of n concurrent processes defined by the system. The actions out(process, resource, lock/unlock) that a certain process performs correspond to the set of coordinate points on its axis, composed of an ordered sequence of real numbers between 0 and 1, representing the scheduled actions (a transaction) that the process will execute. For every concurrent process in the MSC, each action out(process, resource, lock) upon the shared resource instance is identified with a lock (P) action, and each action out(process, resource, unlock) upon the shared resource instance is identified with an unlock (V) action. Consequently, for each concurrent process, its partially ordered actions are mapped to a sequence of ordered real numbers along the axis interval [0, 1]. The resources identified in each action out(process, resource, lock/unlock) are the labeled shared resources being locked and unlocked. Once this mapping is realized, the progress graph is created. The next step is the application of the deadlock detection algorithm [15] to each resulting topological space. Figure 7 is a simple illustration of the correspondence between an MSC and a progress graph, with two scenarios identified by the topological technique: a safe execution path (1) and a deadlock situation (2). CONCLUSIONS AND FUTURE PROSPECTS This paper proposes a simple "proof-of-concepts" prototype to formally treat concurrency in real-time systems by integrating a deadlock detection algorithm based on topology with MSCs. One of the initial results of this integration is the possibility of formally verifying MSC scenarios and reliably finding forbidden scenarios in the early phases of development. The use of geometric and algebraic topology concepts allows critical regions to be promptly identified for decision-making considerations.
The use of MSC as a language to express the scenarios of a system provides practically a one-to-one correspondence between the two formalisms. This fact guarantees that the use of the proposed method does not affect the necessary flexibility. Reliability is also achieved through the implementation of a simple and precise algorithm derived from the geometrical configuration of the state space of the system. The integration of formal methods with behavioral models that are flexible enough to be used in the early phases of software development is a step forward in the characterization of a rigorous treatment of concurrency. Although the method was proposed primarily for application during system requirements analysis, it can also be used to formally verify refined MSC scenarios in the detailed design. There are still many aspects to be considered in this research. Future implementation of the prototype has to aim at the development of a friendly user interface. The proposed method can also be extended with apparently
LncRNA LINC00483 promotes gastric cancer development through regulating MAPK1 expression by sponging miR-490-3p Background Previous studies have shown that the long noncoding RNA (lncRNA) LINC00483 is aberrantly expressed in human cancers, including gastric cancer. However, the regulatory mechanism of this lncRNA in gastric cancer remains largely unknown. The present study aimed to investigate the effect of LINC00483 on gastric cancer development and to explore the potential regulatory network of LINC00483/microRNA (miR)-490-3p/mitogen-activated protein kinase 1 (MAPK1). Methods Thirty patients with gastric cancer were recruited for tissue collection. The expression levels of LINC00483, miR-490-3p and MAPK1 were detected by quantitative real-time polymerase chain reaction or western blot. Cell viability, apoptosis, migration and invasion were determined by MTT, flow cytometry, transwell assays and western blot, respectively. The target association between miR-490-3p and LINC00483 or MAPK1 was confirmed by luciferase reporter assay. A xenograft model was established to assess the function of LINC00483 in vivo. Results LINC00483 and MAPK1 levels were increased in gastric cancer tissues and cells. Knockdown of LINC00483 or MAPK1 inhibited cell viability, migration and invasion but promoted apoptosis in gastric cancer cells. Moreover, MAPK1 overexpression attenuated the effect of LINC00483 knockdown on gastric cancer development. LINC00483 could increase MAPK1 expression by competitively sponging miR-490-3p. miR-490-3p overexpression suppressed gastric cancer development, which was abated by the introduction of LINC00483. Besides, inhibition of LINC00483 decreased xenograft tumor growth by regulating the miR-490-3p/MAPK1 axis. Conclusion Knockdown of LINC00483 inhibited gastric cancer development in vitro and in vivo by increasing miR-490-3p and decreasing MAPK1, elucidating a novel mechanism for understanding the development of gastric cancer. Background Gastric cancer is a serious health problem and a leading cause of cancer death worldwide [1]. Recently, great advances have been made in the early diagnosis and management of gastric cancer [2,3]. However, the prognosis and treatment of patients at an advanced stage remain unsatisfactory. Therefore, it is urgent to explore novel targets for understanding the pathogenesis and treatment of gastric cancer. Long noncoding RNAs (lncRNAs), a class of noncoding RNAs of more than 200 nucleotides in length, play essential roles in the diagnosis and progression of cancers [4]. Moreover, many lncRNAs are abnormally expressed in gastric cancer [5], and they have important roles in the diagnosis, prognosis and development of gastric cancer [6]. For instance, Chen et al. report that LINC01939 suppresses migration and invasion of gastric cancer by decreasing miR-17-5p and increasing early growth response 2 (EGR2) [7]. Zhang et al. suggest that LINC02532 could promote the proliferation, migration and invasion of gastric cancer cells [8]. Furthermore, Hu et al. reveal that LINC00337 contributes to the proliferation of gastric cancer cells by regulating p21 and enhancer of zeste homolog 2 (EZH2) [9]. As for LINC00483, a novel lncRNA, it has been shown to promote cancer development in endometrial cancer, lung adenocarcinoma and colorectal cancer [10-12]. Besides, emerging evidence suggests that LINC00483 could facilitate the proliferation, migration and invasion of gastric cancer [13]. However, little is known about the mechanism by which this lncRNA regulates gastric cancer development.
The available evidence indicates that lncRNAs play pivotal roles in regulating gene expression by functioning as competing endogenous RNAs (ceRNAs) of miRNAs [14]. Previous studies have suggested miR-490-3p as a tumor suppressor in multiple cancers, including esophageal squamous cell carcinoma, prostate cancer and glioma [15][16][17]. Moreover, accumulating work demonstrates the importance of miR-490-3p, acting as a tumor suppressor, in gastric cancer development [18][19][20]. In addition, mitogen-activated protein kinase (MAPK) signaling is activated in human cancers and could serve as a therapeutic target [21]. In studies on gastric cancer, MAPK1 has been reported as a promising target of miRNAs participating in the development of gastric cancer [22,23]. More importantly, the miRcode and TargetScan online databases predicted that LINC00483 and MAPK1 share complementary sequences for miR-490-3p, indicating a potential ceRNA network of LINC00483/miR-490-3p/MAPK1. In this study, we explored the biological role of LINC00483 and focused on its regulatory mechanism in gastric cancer. By combining in vitro and in vivo experiments, we analyzed the carcinogenic role of LINC00483 and its interaction with miR-490-3p and MAPK1 in gastric cancer.
Patient samples and cell culture
Thirty cancer tissues and matched adjacent normal tissues were collected from patients with gastric cancer recruited from the Second Xiangya Hospital of Central South University. Patients who had received radiotherapy or chemotherapy were excluded, and the tissues were stored at −80 °C. Written informed consent was provided by all participants, and this study was approved by the Ethics Committee of the Second Xiangya Hospital of Central South University.
Flow cytometry
Cell apoptosis was assessed by flow cytometry using an Annexin V-FITC Apoptosis Detection Kit (Beyotime). MKN-45 and MGC-803 cells (2 × 10⁵/well) were placed into 24-well plates in triplicate and cultured for 96 h. Cells were washed with PBS, resuspended in binding buffer, and then stained with Annexin V-FITC and PI for 15 min in the dark. Cell apoptosis was analyzed with a flow cytometer (BD, San Jose, CA, USA), and the apoptotic rate was expressed as the percentage of cells at early and late apoptotic phases.
Transwell assay
MKN-45 and MGC-803 cells (2 × 10⁵/well) were seeded in serum-free medium in the upper chambers, with Matrigel-coated membranes for the invasion assay and uncoated membranes for the migration assay. Meanwhile, the lower chambers were filled with 500 μL DMEM containing 10% fetal bovine serum. After a 24-h culture, the cells that had moved through the membranes were stained with 0.1% crystal violet and counted under a microscope (Olympus, Tokyo, Japan) in three random fields (magnification 200×).
Western blot
RIPA buffer (Beyotime) containing protease inhibitor was used for protein extraction from the collected tissues or cells. The protein concentration was determined using a BCA Kit (Beyotime). Equal amounts of protein were denatured in a boiling water bath for 10 min and then separated by SDS-PAGE. The separated proteins were transferred onto 0.45 μm PVDF membranes (Millipore, Billerica, MA, USA) and then blocked with 5% non-fat milk for 1 h.
Subsequently, the membranes were incubated with primary antibodies against MAPK1 (sc-136288, 1:1000 dilution, Santa Cruz Biotechnology, Santa Cruz, CA, USA), c-Myc (ab39688, 1:500 dilution, Abcam, Cambridge, MA, USA), Bax (ab199677, 1:1000 dilution, Abcam) or MMP9 (ab119906, 1:1000 dilution, Abcam) at 4 °C overnight, and with the corresponding secondary antibody at room temperature for 2 h. GAPDH (ab37168, 1:2000 dilution, Abcam) was used as an internal control. The ECL system (Beyotime) was applied to visualize the protein blots, and the relative protein levels were normalized to the corresponding control group.
Xenograft model
The procedures of this animal experiment were approved by the Ethics Committee of the Second Xiangya Hospital of Central South University. Five-week-old male BALB/c nude mice were purchased from the Shanghai Animal Laboratory Center (Shanghai, China) and randomly divided into two groups (n = 7 per group). MGC-803 cells (2 × 10⁶ cells) stably transfected with sh-LINC00483 or sh-NC were subcutaneously injected into the left flank of the mice. The tumor volume was monitored every week and calculated as volume (mm³) = length × width² × 0.5. Five weeks after cell injection, the mice were killed and tumor tissues were harvested. Tumor weight and related molecular analyses were determined.
Statistical analysis
The experiments were repeated at least three times with three replicates, and data were expressed as mean ± standard deviation (SD). Statistical analysis was performed with GraphPad Prism 7 software (GraphPad Inc., La Jolla, CA, USA) to compare differences between groups via Student's t test or ANOVA followed by Tukey's test. The linear relationship among the levels of LINC00483, miR-490-3p and MAPK1 in gastric cancer tissues was analyzed by Spearman's correlation coefficient. P < 0.05 was considered statistically significant.
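To make the quantitative parts of the Methods concrete, the sketch below (Python; a minimal illustration assuming NumPy/SciPy, with made-up example measurements rather than the study's actual data) reproduces the tumor-volume formula, a two-group comparison by Student's t test, and a Spearman correlation of the kind used for the expression analyses:

```python
import numpy as np
from scipy import stats

def tumor_volume(length_mm, width_mm):
    """Xenograft tumor volume: volume (mm^3) = length x width^2 x 0.5."""
    return length_mm * width_mm ** 2 * 0.5

# Hypothetical caliper measurements (mm) for n = 7 mice per group
sh_nc   = np.array([tumor_volume(l, w) for l, w in
                    [(12, 9), (13, 10), (11, 9), (14, 10), (12, 10), (13, 9), (12, 9)]])
sh_linc = np.array([tumor_volume(l, w) for l, w in
                    [(8, 6), (9, 7), (7, 6), (8, 7), (9, 6), (8, 6), (7, 6)]])

# Two-group comparison: Student's t test (P < 0.05 considered significant)
t_stat, p_val = stats.ttest_ind(sh_nc, sh_linc)
print(f"t = {t_stat:.2f}, P = {p_val:.4g}")

# Correlation between two expression profiles across 30 tissues
# (placeholder values; the paper reports r = 0.7748 for LINC00483 vs. MAPK1)
rng = np.random.default_rng(0)
linc = rng.lognormal(size=30)
mapk = 0.8 * linc + rng.normal(scale=0.3, size=30)
rho, p_rho = stats.spearmanr(linc, mapk)
print(f"rho = {rho:.3f}, P = {p_rho:.4g}")

# For more than two groups the paper uses one-way ANOVA followed by Tukey's
# test, e.g. scipy.stats.f_oneway and scipy.stats.tukey_hsd (SciPy >= 1.8).
```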
The levels of LINC00483 and MAPK1 are increased in gastric cancer
The expression levels of LINC00483 and MAPK1 were measured in 30 gastric cancer tissues. As shown in Fig. 1a, b, the levels of LINC00483 and MAPK1 mRNA were markedly enhanced in gastric cancer tissues compared with those in adjacent normal samples. Meanwhile, the protein expression of MAPK1 was also notably upregulated in gastric cancer tissues in comparison with the normal group (Fig. 1c). Moreover, there was a positive correlation between the levels of MAPK1 and LINC00483 in gastric cancer tissues (r = 0.7748, P < 0.0001) (Fig. 1d). In addition, their abundances were also examined in gastric cancer cells. Compared with GES-1 cells, the levels of LINC00483 and MAPK1 mRNA and protein were significantly increased in gastric cancer cells (AGS, MKN-74, MKN-45 and MGC-803) (Fig. 1e–g). MKN-45 and MGC-803 cells, with relatively higher expression of LINC00483, were used for further experiments.
(Fig. 1 caption: The expression levels of LINC00483 and MAPK1 are up-regulated in gastric cancer. a, b qRT-PCR detection of LINC00483 and MAPK1 levels in gastric cancer tissues and normal samples (n = 30). c Western blot measurement of the MAPK1 protein level in gastric cancer and normal tissues. d Association between the levels of LINC00483 and MAPK1 in gastric cancer tissues. e–g Expression levels of LINC00483 and MAPK1 in gastric cancer cells by qRT-PCR or western blot. GC: gastric cancer. *P < 0.05 compared with the normal or GES-1 group.)
Knockdown of LINC00483 suppresses progression of gastric cancer cells
To investigate the effect of LINC00483 on gastric cancer development, its abundance was knocked down in MKN-45 and MGC-803 cells using sh-LINC00483-1 and sh-LINC00483-2. The transfection efficacy was confirmed in Fig. 2a, b. Moreover, MTT assay data showed that knockdown of LINC00483 evidently decreased the viability of MKN-45 and MGC-803 cells at 96 h (Fig. 2c, d). In addition, down-regulation of LINC00483 markedly increased apoptosis in MKN-45 and MGC-803 cells at 96 h (Fig. 2e). Furthermore, the migration and invasion abilities of MKN-45 and MGC-803 cells were significantly repressed by interference with LINC00483 (Fig. 2f, g). Besides, the levels of proteins associated with these processes were detected: knockdown of LINC00483 led to an obvious reduction of c-Myc and MMP9 protein levels and an increase of the Bax level in the two cell lines (Fig. 2h, i).
Silencing of MAPK1 inhibits progression of gastric cancer cells
The role of MAPK1 in gastric cancer development was evaluated in MKN-45 and MGC-803 cells transfected with sh-MAPK1-1, sh-MAPK1-2 or sh-NC. The expression of MAPK1 was effectively decreased at the mRNA and protein levels in MKN-45 and MGC-803 cells transfected with sh-MAPK1-1 or sh-MAPK1-2 compared with the sh-NC group (Fig. 3a, b). Furthermore, the viability of MKN-45 and MGC-803 cells was significantly reduced by knockdown of MAPK1 at 96 h (Fig. 3c, d). Meanwhile, inhibition of MAPK1 induced a higher apoptotic rate in MKN-45 and MGC-803 cells at 96 h (Fig. 3e, Additional file 1: Figure S1A). Moreover, transwell analysis showed that the numbers of migrated and invasive cells were remarkably smaller in the sh-MAPK1-1 and sh-MAPK1-2 groups than in the sh-NC group (Fig. 3f, g, Additional file 1: Figure S1B and C). Besides, the protein levels of c-Myc and MMP9 were significantly decreased and the Bax level was increased by knockdown of MAPK1 in MKN-45 and MGC-803 cells (Fig. 3h, i).
Restoration of MAPK1 reverses the effect of LINC00483 knockdown on progression of gastric cancer cells
As shown in Fig. 4a–d, the expression of MAPK1 in MKN-45 and MGC-803 cells was positively regulated by LINC00483. In order to explore whether MAPK1 was required for LINC00483-mediated regulation of gastric cancer progression, MKN-45 and MGC-803 cells were transfected with sh-NC, sh-LINC00483, sh-LINC00483 + pcDNA, or sh-LINC00483 + MAPK1. As displayed in Fig. 4g, h and Additional file 2: Figure S2A, overexpression of MAPK1 reversed the effects of LINC00483 knockdown on cell viability and apoptosis. Additionally, overexpression of MAPK1 weakened the suppressive effect of LINC00483 knockdown on migration and invasion in MKN-45 and MGC-803 cells (Fig. 4i–l, Additional file 2: Figure S2B, C). Besides, the regulatory effect of LINC00483 knockdown on the protein levels of c-Myc, Bax and MMP9 was abated by the addition of MAPK1 in the two cell lines (Fig. 4m, n).
LINC00483 positively regulates MAPK1 by sponging miR-490-3p in gastric cancer cells
To explore how LINC00483 regulates MAPK1, bioinformatics analysis was performed. Using miRcode and TargetScan, this study predicted that LINC00483 and MAPK1 share binding sites for multiple miRNAs. We selected five miRNAs that are lowly expressed in gastric cancer and measured the effect of LINC00483 on their expression. Among these five miRNAs, miR-490-3p expression was increased the most by LINC00483 knockdown (Additional file 3: Figure S3); hence, miR-490-3p was chosen for further experiments. The binding sites of miR-490-3p on LINC00483 and MAPK1 are shown in Fig. 5a.
To confirm the association between LINC00483 and miR-490-3p, the luciferase reporter vectors WT-LINC00483 and MUT-LINC00483 were generated and transfected into MKN-45 and MGC-803 cells. As shown in Fig. 5b, c, overexpression of miR-490-3p significantly reduced the luciferase activity in the WT-LINC00483 group, while it did not affect the activity in the MUT-LINC00483 group. Moreover, qRT-PCR assay showed that miR-490-3p expression was negatively regulated by LINC00483 in MKN-45 and MGC-803 cells (Fig. 5d). To validate the target association between miR-490-3p and MAPK1, we constructed MAPK1 3′UTR-WT and MAPK1 3′UTR-MUT luciferase-expressing vectors and transfected them into MKN-45 and MGC-803 cells. The results showed that the luciferase activity in MKN-45 and MGC-803 cells was obviously decreased by miR-490-3p overexpression in the MAPK1 3′UTR-WT group, whereas it was not changed in the MAPK1 3′UTR-MUT group (Fig. 5e, f). Furthermore, western blot analysis demonstrated that the protein level of MAPK1 was markedly decreased by miR-490-3p overexpression in MKN-45 and MGC-803 cells, and this effect was abrogated by introduction of LINC00483 (Fig. 5g, h). In addition, the expression of miR-490-3p was remarkably down-regulated in gastric cancer tissues and cells compared with normal tissues or GES-1 cells (Fig. 5i, j). Moreover, the abundance of miR-490-3p in cancer tissues was negatively correlated with the level of LINC00483 or MAPK1 (Fig. 5k, l). Besides, overexpression of miR-490-3p resulted in an obvious reduction in viability, migration and invasion as well as an increase in apoptosis of MKN-45 and MGC-803 cells, while these effects were mitigated by the addition of LINC00483 (Fig. 5m–p, Additional file 4: Figure S4A–C).
Knockdown of LINC00483 decreases tumor growth in a gastric cancer xenograft model
In order to evaluate the role of LINC00483 in gastric cancer in vivo, MGC-803 cells stably transfected with sh-LINC00483 or sh-NC were injected into nude mice, classified as the sh-LINC00483 and sh-NC groups, respectively. As shown in Fig. 6a, b, the tumor volume and weight were markedly decreased in the sh-LINC00483 group compared with the sh-NC group. Moreover, qRT-PCR data showed that the level of miR-490-3p was significantly increased in the sh-LINC00483 group compared with the sh-NC group, while the abundances of LINC00483 and MAPK1 displayed the opposite trend (Fig. 6c). Additionally, western blot results showed that the protein levels of MAPK1, c-Myc and MMP9 in tumor tissues were significantly reduced, while Bax expression was enhanced, in the sh-LINC00483 group in comparison with the sh-NC group (Fig. 6d).
Discussion
LncRNAs are abnormally expressed and implicated in the regulation of cell processes in gastric cancer [27]. LINC00483, a novel lncRNA, has been reported as a key oncogene in human cancers, including gastric cancer [11][12][13]. A previous study reported that LINC00483 expression was enhanced and promoted gastric cancer cell proliferation by regulating miR-30a-3p and sperm-associated antigen 9 (SPAG9) [13]. However, the regulatory mechanism by which LINC00483 mediates gastric cancer progression remains largely unclear. This study investigated the function of LINC00483 in gastric cancer development in vitro and in vivo. The novelty of this study lies in identifying MAPK1 as a novel target of LINC00483; to our knowledge, we are the first to confirm that LINC00483 can target MAPK1 through miR-490-3p in gastric cancer.
Here we found that LINC00483 expression was increased in gastric cancer, indicating that high expression of LINC00483 might contribute to gastric cancer development. To explore the role of LINC00483, loss-of-function experiments were performed. c-Myc is a key protein associated with metabolism, proliferation and oncogenesis of cancers [28], and it can be modulated by lncRNAs to influence cell viability [29]. Bax is an important member of the Bcl-2 family, which plays a pro-apoptotic role in cancers through the intrinsic apoptosis pathway [30]. Moreover, MMP9 has been regarded as an essential marker for metastasis of cancers, including gastric cancer [31,32]. By detecting the protein levels of c-Myc, Bax and MMP9, combined with the corresponding MTT, flow cytometry and transwell analyses, we found that LINC00483 knockdown suppressed gastric cancer development by inhibiting cell viability, migration and invasion and inducing apoptosis. These findings indicated the suppressive effect of LINC00483 inhibition on gastric cancer cell development, which is also consistent with previous work [13]. This study thus indicated a potential anti-cancer role of LINC00483 knockdown in gastric cancer, which might be an important target for the treatment of gastric cancer. MAPK1 has been reported as an important oncogene in gastric cancer progression that promotes cell proliferation, migration and invasion [22,23,33,34]. In this research, we also found high expression of MAPK1 in gastric cancer, and its knockdown inhibited gastric cancer cell viability, migration and invasion but promoted apoptosis, indicating the carcinogenic role of MAPK1 in gastric cancer. Moreover, linear relationship analysis showed that the MAPK1 level was positively correlated with LINC00483 expression, uncovering a potential interaction between LINC00483 and MAPK1. By transfecting the MAPK1 overexpression vector in the presence of sh-LINC00483, we revealed that MAPK1 was responsible for the function of LINC00483 in gastric cancer. However, how LINC00483 mediates MAPK1 remained unclear. The interaction between a lncRNA and an mRNA can be mediated by a miRNA, in which the lncRNA acts as a sponge for miRNA inhibition, thereby leading to derepression of the mRNA [35]. To find the potential intermediator, we performed bioinformatics analysis and found that LINC00483 and MAPK1 share seed sites for miR-490-3p. Although the target association between miR-490-3p and MAPK1 has been reported by previous studies in esophageal squamous cell carcinoma and acute myeloid leukemia [36,37], this did not establish that the axis is present in gastric cancer, given the different tumor microenvironment. Here we confirmed their association using luciferase reporter assays and demonstrated that LINC00483 could up-regulate MAPK1 expression by competitively sponging miR-490-3p in gastric cancer. Moreover, there are many binding sites between LINC00483 and other miRNAs or mRNAs, such as miR-30a-3p and SPAG9 [13]. Hence, we hypothesized that LINC00483 may have additional targets. In any case, MAPK1 was one important target of LINC00483 in this study, and it was indirectly regulated by LINC00483 through miR-490-3p. Furthermore, this study showed that miR-490-3p expression was decreased in gastric cancer and its overexpression suppressed the development of gastric cancer, which is also in agreement with previous studies [19,20]. Additionally, rescue experiments further validated that miR-490-3p was required for the LINC00483-mediated regulatory mechanism in gastric cancer progression in vitro.
Meanwhile, using a xenograft model, we also demonstrated the anti-cancer effect of LINC00483 inhibition in gastric cancer in vivo.
Conclusion
Our research on the oncogenic role of LINC00483 in gastric cancer showed that silencing of LINC00483 repressed the progression of gastric cancer in vitro and in vivo, possibly by acting as a sponge of miR-490-3p to regulate MAPK1, which provides a new mechanism for the development of gastric cancer and indicates a novel target for its treatment.
An Intercomparison of Satellite Derived Arctic Sea Ice Motion Products
Arctic sea ice motion information provides an important scientific basis for revealing the changing mechanism of Arctic sea ice and assessing the navigational safety of Arctic waterways. To date, many satellite-derived Arctic sea ice motion products have been released, but few studies have compared them. In this study, eleven satellite sea ice motion products from the Ocean and Sea Ice Satellite Application Facility (OSI SAF), the National Snow and Ice Data Center (NSIDC), and the French Research Institute for the Exploitation of the Seas (Ifremer) were systematically evaluated and compared based on buoys from the International Arctic Buoy Program (IABP) and the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) over 2018–2020. The results show that the mean absolute errors (MAEs) of ice speed for these products are 1.15–2.26 km/d and the MAEs of ice motion angle are 14.93–23.19°. Among all products, Ifremer_AMSR2 achieves the best accuracy in terms of speed error, NSIDC_Pathfinder shows the lowest angle error, and OSI-405-c_Merged performs best in sea ice drift trajectory reconstruction. Moreover, season, region, data source, ice drift tracking algorithm, and time interval all influence the accuracy of these products: (1) The sea ice motion bias in the freezing season (1.04–1.96 km/d and 11.93–22.41°) is smaller than that in the melting season (1.13–3.90 km/d and 14.41–27.41°) for most of the products. (2) Most products perform worst in East Greenland, where ice movements are fast and complex. (3) The accuracies of the products derived from AMSR-2 remotely sensed data are better than those from other data sources. (4) The continuous maximum cross-correlation (CMCC) algorithm outperforms the maximum cross-correlation (MCC) algorithm in sea ice drift retrieval. (5) The MAEs of sea ice motion with longer time intervals are relatively smaller. Overall, the results indicate that the eleven remote sensing Arctic sea ice drift products are of practical use for data assimilation and model validation if uncertainties are appropriately considered. Furthermore, this study provides some improvement directions for sea ice drift retrieval from satellite data.
Introduction
In recent decades, Arctic sea ice has been changing rapidly in the context of global warming and Arctic Amplification, with important implications for global climate [1][2][3][4][5][6]. Sea ice drift is an important kinetic parameter of sea ice change, which not only affects the heat and momentum transfer between the ocean and the atmosphere, but also has an impact on resource development and navigation in polar regions [7][8][9]. Since the early 1980s, satellite remote sensing technology has gradually become an important tool for sea ice drift monitoring in polar regions, complementary to the limited in situ observations. The basic principle of sea ice drift retrieval based on remotely sensed data is to track sea ice in sequential images over the same area. In recent years, the EUMETSAT Ocean and Sea Ice Satellite Application Facility (OSI SAF), the French Research Institute for the Exploitation of the Seas (Ifremer), and other institutions have released a number of satellite-derived sea ice motion products [11]. A few studies have compared several sea ice motion products [26][27][28][29][30]. The authors of [26] evaluated sea ice drift products from OSI SAF, Ifremer, and a product generated from Advanced Synthetic Aperture Radar (ASAR) in the Laptev Sea using Acoustic Doppler Current Profiler (ADCP) buoys from 2007 to 2008.
It was found that the statistical correlation coefficients between the different products and field data ranged from 0.56 to 0.86. Additionally, the correlation coefficient calculated for the product retrieved from AMSR-E data published by Ifremer was higher than that for the OSI SAF product using the same sensor data. The authors of [27] validated three low-resolution and three medium-resolution sea ice drift products from OSI SAF and Ifremer with high-precision Ice-Tethered Profiler (ITP) buoys during 2008–2010 throughout the whole Arctic region. It was found that the accuracy of the product retrieved with the CMCC algorithm was significantly better than that of the MCC algorithm in sea ice regions with low drift speed. Low-resolution sea ice drift products usually underestimated the sea ice drift speed, especially in the melting season. The accuracy of sea ice drift products retrieved from SAR data was the best, while there was no obvious difference in accuracy between products retrieved from scatterometer and radiometer data. Furthermore, the error of sea ice drift products was closely related to their time interval. The authors of [28] compared 2002–2006 sea ice products from NSIDC, OSI SAF, Ifremer, and KIMURA using International Arctic Buoy Program (IABP) buoy data in the Arctic Ocean. The results showed that the NSIDC product and OSI SAF product were more accurate than the Ifremer product; this phenomenon was more significant in regions with lower sea ice concentrations and slower sea ice drift speeds. Furthermore, the uncertainty of a sea ice drift product increases with increasing drift speed. The authors of [29] verified the NSIDC_Pathfinder and OSI-405-c_Merged products of 2014 and 2016 with ice drifters deployed by the Chinese National Arctic Research Expedition (CHINARE) in the western Arctic Ocean. It was found that NSIDC_Pathfinder tended to underestimate the sea ice velocity, while OSI-405-c_Merged tended to overestimate it. Compared with OSI-405-c_Merged, NSIDC_Pathfinder had relatively lower error and higher temporal and spatial resolution, and thus was more suitable for estimating sea ice deformation. Taking the IABP buoy data from 2009 to 2017 as the verification set, the authors of [30] evaluated and compared several mainstream satellite sea ice products from OSI SAF, NSIDC, and Ifremer in the whole Arctic. The results showed that product accuracy was related to the spatial resolution of the source data, the ice tracking algorithm, and the drift merging method. Overall, most previous comparisons of satellite-derived sea ice drift products evaluated only the speed bias and ignored the error of ice motion angle and the ability of a product to reconstruct sea ice drift trajectories. Moreover, most studies verified either the whole Arctic or a specific region, ignoring the inconsistency of ice drift between different regions. Furthermore, as some sea ice motion products have been updated recently (e.g., NSIDC published a new version of Pathfinder in 2019), the conclusions from previous comparative studies may not be applicable to the new products. The objective of this study is to evaluate and compare eleven satellite products from OSI SAF, NSIDC, and Ifremer with high-precision buoys of IABP and the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC) over 2018–2020. Speed error, angle error, and the ability to reconstruct sea ice drift trajectories were all used for validation.
Furthermore, the influence of season, region, data source, ice drift tracking algorithm, and time interval on the accuracy of the products was analyzed.
Sea Ice Motion Products
Eleven sea ice motion products for 2018–2020 were validated in this study, i.e., four products from OSI SAF, two from NSIDC, and five from Ifremer. Table 2 shows basic information regarding the eleven products. Sections 2.1.1–2.1.3 provide details on the products from the different institutions. Among the eleven products, most use polar stereographic projection with different projection centers and central meridians. As projection causes length and angle distortion, which would affect accuracy assessments of drift speed and direction, the coordinate systems of all products were unified to the WGS-84 geocentric coordinate system in this study. The four OSI SAF sea ice drift products validated in this study are OSI-405-c_Merged, OSI-405-c_AMSR2, OSI-405-c_ASCAT, and OSI-407-a [10,15]. OSI-405-c_Merged, OSI-405-c_AMSR2, and OSI-405-c_ASCAT are the low-resolution sea ice drift products of OSI SAF, with 62.5 km spatial resolution and two-day temporal resolution. The start and end times of the OSI-405-c series are centered at 12:00 UTC. OSI-405-c_AMSR2 and OSI-405-c_ASCAT are single-sensor products derived with the CMCC method from AMSR-2 and ASCAT, respectively. OSI SAF also provides another single-sensor product retrieved from the SSMIS sensor; however, the OSI-405-c_SSMIS product was not validated in this study because this data set has a gap in 2018. Due to surface melting and the moist atmosphere, it is difficult to track sea ice drift during summer based on passive microwave radiometers and active microwave scatterometers; therefore, the single-sensor products of OSI SAF only provide winter sea ice drift vectors, from October to April. OSI-405-c_Merged is the multisensor product derived by merging the sea ice drift vectors retrieved from AMSR-2, SSMIS, and Metop-B ASCAT, and it provides data for the whole year. Details of the merging algorithm can be found in [15]. OSI-407-a is the medium-resolution product, with 20 km spatial resolution and daily temporal resolution, derived with the MCC method. Band 4 (infrared) throughout the year and Band 2 (visible) from May to September of the Advanced Very High Resolution Radiometer (AVHRR) data are used for the derivation of OSI-407-a. OSI-407-a was updated twice per day, with start and end times centered on 6:00 or 18:00 UTC, before 11 October 2018; after that, the product has been updated four times per day, with start and end times centered on 6:00, 12:00, 18:00, and 24:00 UTC. Due to the effect of clouds on optical satellite images, the spatial coverage of OSI-407-a is limited.
NSIDC Sea Ice Motion Products
The two sea ice drift products from NSIDC validated in this study are the "Polar Pathfinder Daily 25 km EASE-Grid Sea Ice Motion Vectors" product (referred to as NSIDC_Pathfinder hereafter) and the "AMSR-E/AMSR2 Unified L3 Daily 12.5 km Brightness Temperatures, Sea Ice Concentration, Motion & Snow Depth Polar Grids" product (referred to as NSIDC_AMSR2 hereafter) [16,18]. The latest version (version 4.1) of the NSIDC_Pathfinder product, updated in April 2019, was used, including 25 km daily and weekly motion products. For 2018 to 2020, SSMIS data, IABP buoys, and NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis forecasts were used to generate the daily estimates.
SSMIS data were used to retrieve the initial motion vectors based on the MCC algorithm. Then, NCEP/NCAR wind-derived and buoy-derived motions were used to refine the initial vectors using the optimal interpolation method. More details about the algorithm for Pathfinder ice motion retrieval can be found in [18,24]. The weekly motion is an average of all daily motion vectors retrieved in that week. As buoys are sparse points without sequential records at specific locations from which to generate average vectors, the weekly motion product could not be validated with buoys and thus was not included in this study. NSIDC_AMSR2 is a daily sea ice motion product generated from AMSR-2 data based on the MCC algorithm. The grid of NSIDC_AMSR2 is 12.5 km × 12.5 km; however, sea ice motion vectors are recorded every five grids, therefore the spatial resolution of NSIDC_AMSR2 is 75 km.
Ifremer Sea Ice Motion Products
This study validated five Ifremer sea ice motion products, namely, Ifremer_AMSR2, Ifremer_Merged, Ifremer_SSMIS_H, Ifremer_SSMIS_V, and Ifremer_ASCAT [11][12][13][14]. The Ifremer_AMSR2 series has three 31.25 km datasets (i.e., Ifremer_AMSR2_H, Ifremer_AMSR2_V, and Ifremer_AMSR2) with two-, three-, and six-day lags. Ifremer_AMSR2_H and Ifremer_AMSR2_V are derived with the MCC method from the horizontally and vertically polarized 89 GHz channels, respectively. Ifremer_AMSR2 is generated by combining the Ifremer_AMSR2_H and Ifremer_AMSR2_V motion vectors. In this study, Ifremer_AMSR2_H and Ifremer_AMSR2_V were not used. Ifremer_AMSR2 with a two-day lag was used for comparison with other products, and Ifremer_AMSR2 datasets with different time intervals were intercompared to evaluate the effect of temporal resolution on the accuracy of the derived motions. Ifremer_SSMIS_H, Ifremer_SSMIS_V, and Ifremer_ASCAT are 62.5 km products with three- and six-day lags, deduced with the MCC method from the horizontally polarized 91 GHz channel, the vertically polarized 91 GHz channel, and Metop-B ASCAT, respectively. In this study, only the three-day datasets of these products were validated. It should be noted that Ifremer also released a sea ice motion product from Metop-A ASCAT for 2018–2020. Given that OSI SAF only uses Metop-B ASCAT, the Ifremer product from Metop-A ASCAT was not validated in this study, to keep consistency in the use of sensors. Ifremer_Merged is a 62.5 km product with time intervals of 3, 6, and 30 days. The three- and six-day lagged Ifremer_Merged datasets were generated by merging Ifremer_SSMIS_H, Ifremer_SSMIS_V, and Ifremer_ASCAT (Metop-B). Details of the merging algorithm can be found in [14]. In this study, Ifremer_Merged with a three-day lag was compared with other products, and the three- and six-day Ifremer_Merged datasets were intercompared to discuss the impact of temporal resolution on product accuracy. Monthly drifts are estimated by summing five consecutive six-day lagged drifts of the merged product starting on the first day of each month. Therefore, the monthly product could not be validated by buoys and thus was not used in this study.
Buoy Data
IABP and MOSAiC buoy data were used in this study to evaluate the accuracy of the sea ice drift products. The IABP succeeded the Arctic Ocean Buoy Project, established in 1978 [31]. Its objective is to maintain a network of automatic data buoys in the entire Arctic Ocean to monitor sea level pressure, surface air temperature, and sea ice motion [32][33][34]. The IABP dataset provides daily latitude and longitude locations of buoys at 0:00 UTC and 12:00 UTC.
MOSAiC was the first year-round expedition into the central Arctic, drifting with the sea ice from September 2019 to October 2020 and conducting comprehensive measurements of the atmosphere, ocean, and sea ice. During the expedition period, various types of buoys were deployed on the sea ice in the vicinity of the icebreaker, such as Ice-Tethered Profilers (ITP) and Surface Velocity Profilers (SVP). The temporal resolution of the buoys released by MOSAiC ranges from 10 min to 3 h. In this study, 647 buoys with continuous time records (buoys with record interruptions of more than one month were filtered out), comprising 476 from IABP and 171 from MOSAiC, from 2018 to 2020 were used as the verification data set.
Validation Methods
Two methods were used for validation: error estimation of sea ice motion vectors and Lagrangian trajectory reconstruction. When conducting the two validation methods, buoy records yielding ice speeds higher than 60 km/d were filtered out from the validation datasets. This filtering step followed the product evaluation conducted by NSIDC [18]; such high buoy-derived speeds are most likely caused by errors of the positioning system [25,35,36]. Furthermore, Ifremer_SSMIS_H, Ifremer_SSMIS_V, and Ifremer_ASCAT come with quality flags. According to the user's manual [11,12], negative flag values indicate cases in which the drift should be dismissed; therefore, drift vectors with negative flag values were discarded before validation. Moreover, as the start and end times of the OSI-405-c series are centered on 12:00 UTC, it was assumed in this study that the products from NSIDC and Ifremer were also centered on 12:00 UTC, and only the 12:00 UTC latitude and longitude location records of buoys were used for validation. The OSI-407-a product has two or four datasets centered on different times of one day. The different datasets were merged into one dataset centered on 12:00 UTC for validation based on a weighted mean method:

D(L_0) = (Σ_{i=1..n} w_i · D_i(L_0)) / (Σ_{i=1..n} w_i)    (1)

where L_0 is the location of one grid center, D(L_0) is the merged vector at grid L_0, D_i(L_0) is the vector at grid L_0 from dataset i, n is the total number of vectors at grid L_0 from the two or four datasets of the OSI-407-a product, w_i is the weight associated with the vector D_i(L_0), determined from T_C, and T_C is the start tracking time (UTC) of vector D_i(L_0).
Error Estimation
In the error estimation method, sea ice motion vectors of the product were first bilinearly interpolated to the start positions of the buoys. Then, the speeds and angles of the buoy-derived vectors and the corresponding interpolated vectors were quantitatively compared using three indicators:

V_MAE = (1/N) · Σ_{i=1..N} |S_P,i − S_B,i| / T    (2)

θ_MAE = (1/N) · Σ_{i=1..N} |Δθ_i|    (3)

where V_MAE and θ_MAE are the mean absolute errors (MAE) of ice speed and ice motion angle, respectively; S_P and S_B are the displacements of the interpolated vector from the product and of the buoy at the same start location, respectively; θ_P and θ_B are the angles of the interpolated and buoy-derived vectors at the same start location, respectively; Δθ = θ_P − θ_B is the angle between the interpolated vector and the buoy-derived vector; T represents the temporal resolution of the product; and N is the number of interpolated vectors. In addition, the correlation between the drift speeds of the buoys and the drift speeds of the interpolated products was expressed by the Pearson correlation coefficient:

r_V = Σ_{i=1..N} (v_P,i − mean(v_P)) (v_B,i − mean(v_B)) / sqrt( Σ_{i=1..N} (v_P,i − mean(v_P))² · Σ_{i=1..N} (v_B,i − mean(v_B))² )    (4)

where v_P = S_P / T and v_B = S_B / T are the product-derived and buoy-derived speeds. r_V ranges from −1 to 1; the closer r_V is to 1, the stronger the positive correlation of ice drift speeds between products and buoys.
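A minimal sketch of these three indicators (Python/NumPy; the array names are hypothetical, and the product vectors are assumed to have already been bilinearly interpolated to the buoy start positions):

```python
import numpy as np

def drift_errors(s_p, s_b, theta_p, theta_b, T):
    """Return V_MAE (km/d), theta_MAE (deg) and Pearson r_V.

    s_p, s_b         : displacements (km) of product / buoy vectors
    theta_p, theta_b : directions (deg) of product / buoy vectors
    T                : temporal resolution of the product (days)
    """
    v_mae = np.mean(np.abs(s_p - s_b)) / T
    # wrap the angle difference to [-180, 180] so that e.g. 359 vs 1 deg -> 2 deg
    dtheta = (theta_p - theta_b + 180.0) % 360.0 - 180.0
    theta_mae = np.mean(np.abs(dtheta))
    r_v = np.corrcoef(s_p / T, s_b / T)[0, 1]
    return v_mae, theta_mae, r_v
```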
In this study, in order to evaluate the impact of season, region, data source, ice drift tracking algorithm, and time interval on the accuracy of the products, V_MAE and θ_MAE were estimated not only over the whole Arctic for 2018–2020, but were also calculated for different seasons and subregions. Furthermore, the V_MAE and θ_MAE of products with different data sources, tracking algorithms, and time resolutions were compared separately. In the seasonal variation analysis, errors of the freezing season (October to April) and the melting season (May to September) were compared. In order to make full use of the data in the three calendar years 2018–2020, not only were the data of October 2018–April 2019 and October 2019–April 2020 used to calculate errors in the freezing season, but also the data of January to April 2018 and October to December 2020. As OSI-405-c_AMSR2 and OSI-405-c_ASCAT only provide data in the freezing season, they were only compared with the other products in the freezing season. In the spatial variation analysis, the Arctic was divided into eight subregions, i.e., Central Arctic (CA), Chukchi/Beaufort Seas (CBS), Laptev/East Siberian Seas (LESS), Kara/Barents Seas (KBS), East Greenland (EG), Hudson/Baffin Bays (HBB), Canadian Arctic Archipelago (CAA), and Bering Sea (BS), referring to [37] (Figure 1). As buoy numbers are limited in HBB, CAA, and BS, sea ice drifts were separately evaluated in the other five subregions. In the evaluation of the three factors (i.e., data source, tracking algorithm, and time interval), a controlled experiment was used in which the other two factors were identical among the compared products except for the factor being tested.
Trajectory Reconstruction
Lagrangian trajectories were reconstructed from the products under the Universal Polar Stereographic North coordinate system, taking the positions of buoys on the first days of the trajectories as the start points. The vectors at the start points or intermediate points of the reconstructed trajectories were generated based on the bilinear interpolation method. The similarity between trajectories from buoys and products was quantitatively evaluated with two indicators:

D_e = ‖P_P − P_B‖    (5)

D_c = 1 − (V_P · V_B) / (‖V_P‖ · ‖V_B‖)    (6)

where D_e and D_c are the Euclidean distance and cosine distance between the endpoints of the product-derived and the buoy-derived trajectories, respectively; P_P and P_B denote the two endpoint positions, and V_P and V_B denote the vectors from the common start point to the respective endpoints. As the cosine term ranges over [−1, 1], the range of the cosine distance is [0, 2]. The closer the cosine distance is to 0, the smaller the angle between the two vectors, that is, the more similar the trajectory estimated by the product is to the trajectory of the buoy.
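The Lagrangian stepping and the two trajectory indicators can be sketched as follows (Python/NumPy; `interp_uv` stands for a user-supplied bilinear interpolator of the gridded product and is an assumption for illustration, not part of any product's API):

```python
import numpy as np

def reconstruct_trajectory(start_xy, n_steps, interp_uv):
    """Step a virtual buoy forward: x_{t+1} = x_t + d(x_t, t).

    start_xy : (x, y) start position (km, polar stereographic plane)
    n_steps  : number of product time steps to integrate
    interp_uv: function (x, y, t) -> (dx, dy), the product displacement
               (km per time step) bilinearly interpolated to (x, y)
    """
    traj = [np.asarray(start_xy, dtype=float)]
    for t in range(n_steps):
        dx, dy = interp_uv(traj[-1][0], traj[-1][1], t)
        traj.append(traj[-1] + np.array([dx, dy]))
    return np.array(traj)

def trajectory_distances(traj_p, traj_b):
    """D_e: Euclidean distance between endpoints; D_c: cosine distance
    between the start-to-end displacement vectors (range [0, 2])."""
    d_e = np.linalg.norm(traj_p[-1] - traj_b[-1])
    v_p, v_b = traj_p[-1] - traj_p[0], traj_b[-1] - traj_b[0]
    d_c = 1.0 - float(np.dot(v_p, v_b) /
                      (np.linalg.norm(v_p) * np.linalg.norm(v_b)))
    return d_e, d_c
```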
Trajectory reconstruction requires products to have good spatial continuity. Most products showed poor spatial continuity in the melting season [38]; therefore, in this study, trajectory reconstruction was only carried out in the freezing season (i.e., 1 October 2018–30 April 2019 and 1 October 2019–30 April 2020). Even in these periods, not all products had good spatial continuity. Therefore, the average daily spatial coverage areas of the products were calculated and compared (Formula (7)), and the products with large spatial coverage areas were used for trajectory reconstruction:

S = (1/N) · Σ_{i=1..N} n_i · L²    (7)

where S is the average daily spatial coverage area of a product, n_i is the number of sea ice drift vectors in the product on day i, L is the spatial resolution of the product, and N is the total number of days for trajectory reconstruction.
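As a quick illustration of Formula (7) (Python; the daily vector counts below are invented for the example):

```python
import numpy as np

def mean_daily_coverage(daily_vector_counts, grid_res_km):
    """Average daily spatial coverage S = (1/N) * sum_i n_i * L^2, in km^2."""
    return float(np.mean(daily_vector_counts)) * grid_res_km ** 2

# e.g. a 62.5 km product with ~2000 valid vectors per day covers ~7.8e6 km^2
print(mean_daily_coverage([1980, 2010, 2005], 62.5))
```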
In the two NSIDC products, NSIDC_Pathfinder achieved the greatest accuracy, which also reflects its advantages of exploiting multiple data sources including SSMIS, reanalysis wind, and buoy data. The angle MAEs of the nine annual products were 14.93-23.19 • . NSIDC_Pathifinder achieved the lowest angle MAE, i.e., 14.93 • , which may have been due to its satellitederived motion vectors being refined by NCEP/NCAR wind-derived and buoy-derived motions. OSI-405-c_Merged, which uses the CMCC algorithm, had the second-lowest angle MAE, i.e., 15.07 • . A possible reason for this is that CMCC is an improvement on MCC, which can retrieve displacement lengths smaller than the image pixel size and effectively suppress quantization noise. Of all products from Ifremer, Ifremer_AMSR2 had the best angle accuracy, i.e., 17.09 • ; the others had an angle MAEs of nearly 20 • , which may have been due to their different spatial resolutions of data sources and products. Compared with Ifremer_AMSR2, NSIDC_AMSR2, which is also derived using AMSR2, data had the worst accuracy in terms of angle. The reason for this may be that it has a shorter time interval and only uses the V-polarization band to extract ice motion. Figure 3 presents boxplots showing absolute errors of drift speed (a) and angle (b) for the nine annual products. According to the boxplots, the medians of speed error obtained from OSI SAF (i.e., OSI_M and OSI-a) and Ifremer (i.e., Ifrem_AM, Ifrem_M, Ifrem_S_H, Ifrem_S_V, and Ifrem_AS) were similar, and OSI-407-a (i.e., OSI-a) had the lowest median error of speed. However, from the upper quartile, maximum and average values, Ifremer_AMSR2 (i.e., Ifrem_AM) had the lowest speed error. The medians of angle error from all products were similar except for NSIDC_AMSR2 (i.e., NSIDC_A). OSI-405-c_Merged (i.e., OSI_M), NSIDC_Pathfinder (i.e., NSIDC_P) and Ifremer_AMSR2 (i.e., Ifrem_AM) achieved similar low error distributions in direction. However, the MAE of NSIDC_Pathfinder was lower than OSI-405-c_Merged and Ifremer_AMSR2, which indicated that it achieved the lowest angle error. The conclusions from Figure 3 agree with those drawn in Table 3. Figure 4 shows density scatter plots between ice speeds derived from ice drift products. It can be seen that the correlation r values of OSI-407-a NSIDC_AMSR2 (i.e., 0.54) were relatively small. The correlation coefficients ucts were all above 0.9, and Ifremer_AMSR2 achieved the highest r (i.e., 0.9 ure shows, the two daily products from NSIDC achieved more high-speed (s than 30 km/d) ice motions than other products. The possible reason for thi high speeds are averaged with low speeds over long time intervals in othe represents the number of pairs matched between buoy and product, which the spatial coverage of the product to a certain extent. The two products values are NSIDC_Pathfinder (i.e., 87,259) and OSI-405-c_Merged (i.e., 7 demonstrated that the usage of multisource data can increase the spatial c trieved sea ice motions. Figure 4 shows density scatter plots between ice speeds derived from buoys and sea ice drift products. It can be seen that the correlation r values of OSI-407-a (i.e., 0.83) and NSIDC_AMSR2 (i.e., 0.54) were relatively small. The correlation coefficients of other products were all above 0.9, and Ifremer_AMSR2 achieved the highest r (i.e., 0.95). As the figure shows, the two daily products from NSIDC achieved more high-speed (speeds greater than 30 km/d) ice motions than other products. 
The possible reason for this may be that high speeds are averaged with low speeds over the longer time intervals of the other products. N represents the number of pairs matched between buoy and product, which can represent the spatial coverage of a product to a certain extent. The two products with the largest N values are NSIDC_Pathfinder (i.e., 87,259) and OSI-405-c_Merged (i.e., 78,873), which demonstrates that the usage of multisource data can increase the spatial coverage of the retrieved sea ice motions. Figure 5 shows scatter plots between buoy-derived ice speed and the absolute error of sea ice speed for each product. In this figure, only the buoys with speeds below 25 km/d were used and divided into five equal intervals, as Figure 4 shows that most matched buoys move below 25 km/d. From Figure 5, it may be seen that the speed error of all products increased with increasing ice speed. Furthermore, there was a strong positive correlation between the speed error and the speed, i.e., all r (correlation coefficient) values were higher than 0.86. With increasing ice speed, Ifremer_AMSR2 showed the smallest error increase, which reflects its superior accuracy. Figure 6 shows the density scatter plots between buoy-based ice speed and the absolute angle error of each product. It was found that, for all products, the large angle errors were mostly gathered in the very low speed regime. Referring to [27], the angle errors after removing slow buoys (i.e., speeds lower than 3 km/d) were recalculated; the results are shown in Table 2. It can be seen that the angle errors of all products were reduced after deleting the buoy records with low speed. Moreover, the angle MAEs of NSIDC_AMSR2 and all products from Ifremer were greatly reduced (by about 10°), but the decrease in the angle MAEs of OSI-405-c_Merged, OSI-407-a, and NSIDC_Pathfinder was small (lower than 5°), which may be due to the following reasons: (1) OSI-405-c_Merged adopts the CMCC method, which can extract sea ice drift at the subpixel level and thus performs better in areas with low ice speed; (2) the data source used by OSI-407-a is AVHRR, with a resolution of 1.1 km, much higher than that of the data sources used by the other products, so the angle of low-speed drift can be extracted more accurately; and (3) NSIDC_Pathfinder corrects the angle error with buoys and reanalysis data. Figure 7 shows the average daily spatial coverage area results of the eleven products.
It may be seen that NSIDC_Pathfinder achieved the largest average daily spatial coverage values in both the freezing and melting seasons, which demonstrates its superior spatial continuity. OSI-407-a showed the worst spatial continuity, as it uses optical data, which are highly affected by clouds. Most products showed poor spatial continuity in the melting season. For trajectory reconstruction in the freezing season, the six products with the highest daily spatial coverage values were used in this study, i.e., OSI-405-c_Merged, OSI-405-c_AMSR2, OSI-405-c_ASCAT, NSIDC_Pathfinder, Ifremer_AMSR2, and Ifremer_Merged; it was found that most trajectories could not be reconstructed successfully by the other five products.
(Table 4. Average daily spatial coverage of each product (×10⁶ km²).)
Figure 8 shows an example result of trajectory reconstruction for the six products, and Figure 9 shows the time series of daily Euclidean distance (red line) and cosine distance (blue line) for the reconstructed trajectories of the six products in Figure 8. In this example, all products showed good performance in trajectory reconstruction. The accuracies of the reconstructed trajectories were similar in terms of cosine distance; however, the Euclidean distances of OSI-405-c_ASCAT and NSIDC_Pathfinder were slightly larger than those of the other products. Table 5 shows the mean Euclidean distances and mean cosine distances of the reconstructed trajectories from the six products in the two freezing seasons.
It is found that OSI-405-c_Merged obtained the lowest Euclidean distances (i.e., 10.6 km and 37.3 km) in both freezing seasons. Furthermore, OSI-405-c_Merged obtained the lowest cosine distance (i.e., 1.5 × 10⁻⁴) in the 2018–2019 freezing season, and in the 2019–2020 freezing season its cosine distance (i.e., 9.7 × 10⁻⁴) was comparable to the lowest one (i.e., 6.7 × 10⁻⁴). Therefore, OSI-405-c_Merged showed the best ability in trajectory reconstruction among these six products. The reason may be that the accuracy of a reconstructed trajectory reflects the combined error in speed and angle. From Table 3 and Figure 3, which show the error distributions of the products, Ifremer_AMSR2 achieved the lowest speed error, but its angle error was larger than that of OSI-405-c_Merged and NSIDC_Pathfinder. For angle error, NSIDC_Pathfinder performed the best, but its speed error was large. OSI-405-c_Merged had relatively low values in both speed and angle error, and thus achieved good performance in trajectory reconstruction. Figures 10 and 11 show the spatial distributions of the Euclidean distance and cosine distance between the reconstructed trajectory of each product and the buoy trajectory in the two freezing seasons, respectively. From Figures 10 and 11, it may be seen that almost all reconstructed trajectories of OSI-405-c_Merged and OSI-405-c_AMSR2 obtained low Euclidean distances and cosine distances from the buoy trajectories, which demonstrates their good ability in trajectory reconstruction. This conclusion agrees with those drawn from Table 5. Furthermore, from Figure 10, it may be seen that the Euclidean distances of almost all products between the East Siberian Sea and the North Pole were larger than those near the Beaufort Sea and to the north of Ellesmere Island, whereas, from Figure 11, the cosine distances between the East Siberian Sea and the North Pole were smaller than those near the Beaufort Sea and to the north of Ellesmere Island. Areas between the East Siberian Sea and the North Pole are mainly affected by the transpolar drift, while the Beaufort Sea and the area north of Ellesmere Island lie within the Beaufort Gyre. Therefore, the large Euclidean distances are possibly due to the long ice displacement carried by the transpolar drift, while the ice rotates in the Beaufort Gyre zone, which leads to high angle errors of the retrieved ice motions and large cosine distances.
Seasonal and Spatial Variations of the Product Accuracy
As can be seen from Table 3 and Figure 3, the accuracies of Ifremer_SSMIS_H, Ifremer_SSMIS_V, and Ifremer_ASCAT were similar to that of Ifremer_Merged. Therefore, these three single-sensor Ifremer products are not included in the present seasonal and spatial variation analysis. Table 6 shows the seasonal errors of the other six annual products.
(Table 6. Seasonal errors of six annual products.)
It was found that the freezing-season accuracy of most products retrieved from microwave data was significantly better than their melting-season accuracy. This may be because melting ice has a significant impact on the acquisition of brightness temperature and backscattering data, which makes sea ice tracking more difficult. OSI-407-a, retrieved from AVHRR, performed better in the melting season than in the freezing season. This may be because optical remote sensing images are less affected by melting ice; furthermore, the visible and infrared bands of AVHRR were both used in the melting season, while only the infrared band was used in the freezing season.
Impact of Data Source, Retrieval Algorithm, and Time Interval on Accuracy
In order to compare the products with different data sources, two comparison groups of eight products derived with the same algorithms and the same time intervals were selected. Table 9 shows the accuracy of the two comparison groups. In the OSI SAF comparison group, OSI-405-c_Merged results in the freezing season were used to keep consistency in the data period. It can be found from the two groups that products derived from AMSR2 were better than those derived from ASCAT under either algorithm. The results of the Ifremer group show that the order of ice drift errors is AMSR2 < ASCAT < SSMIS_V < SSMIS_H < Merged. This indicates that the merging algorithm can increase spatial coverage, albeit with a decrease in accuracy. Among all eleven products, OSI-405-c_AMSR2 and Ifremer_AMSR2 (freezing season) apply different algorithms but use the same time intervals, data sources, and data periods. From Tables 5 and 9, OSI-405-c_AMSR2 has a speed MAE of 1.01 km/d and an angle MAE of 11.91°, while Ifremer_AMSR2 has a speed MAE of 1.15 km/d and an angle MAE of 17.29°. This shows that the accuracy of OSI-405-c_AMSR2 is higher than that of Ifremer_AMSR2 in terms of both speed and angle MAEs, especially angle. This result demonstrates that CMCC outperforms MCC. The reason is that MCC, adopted by Ifremer_AMSR2, is a block-based drift retrieval method that searches for the best correlation between two subimages at the pixel level, whereas CMCC, adopted by OSI-405-c_AMSR2, searches for the candidate vector in a continuous region of the image plane, which can derive the ice drift at the subpixel scale and reduce quantization noise.
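To make the MCC/CMCC distinction concrete, the toy sketch below (Python/NumPy, hypothetical inputs) implements pixel-level block matching by maximum cross-correlation; operational products add filtering and quality control, and CMCC additionally maximizes the same correlation over continuous (subpixel) offsets:

```python
import numpy as np

def mcc_drift(img0, img1, y, x, win=11, search=10):
    """Pixel-level MCC: find the offset (dy, dx) whose window in img1
    correlates best with the window centred at (y, x) in img0. The caller
    must keep (y, x) far enough from the image edges for all tested windows."""
    h = win // 2
    ref = img0[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best_corr, best_off = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img1[y + dy - h:y + dy + h + 1,
                        x + dx - h:x + dx + h + 1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            corr = float((ref * cand).mean())  # normalized cross-correlation
            if corr > best_corr:
                best_corr, best_off = corr, (dy, dx)
    return best_off  # integer pixel offsets; scale by grid size / dt for velocity
```

Because the returned offsets are integers, MCC quantizes displacements to whole pixels; CMCC avoids this by optimizing the correlation over a continuous offset, which is why it can resolve drifts smaller than one pixel.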
The reason is that the MCC adopted by Ifremer_AMSR2 is a block-based drift retrieval method that searches for the best correlation between two sub-images at the pixel level, whereas the CMCC adopted by OSI-405-c_AMSR2 searches for the candidate vector in a continuous region of the image plane, which can derive the ice drift at the subpixel scale and reduce quantization noise.

In order to fully evaluate the impact of time interval on product accuracy, products with different temporal resolutions derived by the same algorithms from the same data sources should be compared. Ifremer_Merged has products with 3 d and 6 d time intervals, and Ifremer_AMSR2 has 2 d, 3 d, and 6 d products. However, the daily products have no corresponding products with multiday time lags. Therefore, in this study, two-day products were calculated from the corresponding daily products using the trajectory reconstruction method described in Section 2.3.2. Of the three daily products evaluated in this study, NSIDC_Pathfinder and NSIDC_AMSR2 were used; OSI-407-a was not used, as it has poor spatial continuity and thus cannot be employed in trajectory reconstruction. The estimation results of Ifremer_AMSR2, Ifremer_Merged, NSIDC_Pathfinder, and NSIDC_AMSR2 with different time intervals are shown in Table 10. It is obvious that the error of products with longer temporal resolution is lower than that of products with shorter temporal resolution, for both angle and speed. The reason may be that the impact of image noise on the derived motions decreases as the time interval increases. Furthermore, by comparing Tables 3 and 10, it is found that the speed MAEs of NSIDC_Pathfinder (two-day) and NSIDC_AMSR2 (two-day) are 1.44 km/d and 2.15 km/d, respectively, still higher than those of OSI-405-c_Merged (i.e., 1.38 km/d) and Ifremer_AMSR2 (i.e., 1.15 km/d) with two-day lags. Moreover, the angle MAE of two-day NSIDC_AMSR2 (i.e., 18.71°) is higher than those of two-day OSI-405-c_Merged (i.e., 15.07°) and two-day Ifremer_AMSR2 (i.e., 17.09°). The possible reason is that NSIDC_Pathfinder does not include AMSR2 as a data source and NSIDC_AMSR2 only uses one channel (i.e., 36.5 GHz V pol.) of AMSR2 to retrieve sea ice motion vectors.
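The multiday products above were derived by successively advecting a start point through the daily drift fields. Below is a minimal sketch of that idea, assuming gridded drift components in km/d and simple nearest-neighbour sampling; the paper's own reconstruction method (Section 2.3.2) may interpolate differently.

```python
import numpy as np

def reconstruct_trajectory(start_xy, daily_u, daily_v, grid_x, grid_y):
    """Advect a start point through successive daily gridded drift fields.
    daily_u/daily_v have shape (n_days, ny, nx) in km/d; grid_x/grid_y are
    1-D grid coordinates in km. Nearest-neighbour sampling is used here for
    simplicity."""
    traj = [np.asarray(start_xy, dtype=float)]
    for u, v in zip(daily_u, daily_v):
        x, y = traj[-1]
        i = np.abs(grid_y - y).argmin()           # nearest grid row
        j = np.abs(grid_x - x).argmin()           # nearest grid column
        traj.append(traj[-1] + np.array([u[i, j], v[i, j]]))
    return np.array(traj)

# Two-day drift vectors then follow by differencing every second position:
# d2 = traj[2::2] - traj[:-2:2]
```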
Discussion
The above results show that the absolute error of the ice drift speed is positively related to the drift speed. Given that sea ice drift is strongly affected by wind forcing [39], this study further analyzed the relationship between the speed MAE of ice drift and the wind speed. OSI-405-c_Merged, NSIDC_Pathfinder, and Ifremer_AMSR2 were selected as the sea ice drift datasets because they performed better than the other products in the above evaluations. Hourly 10 m wind data from ERA5 were used to generate wind speed information at the same temporal resolution as the sea ice drift datasets; ERA5 is a widely used reanalysis dataset produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) [36,40]. Figure 12 shows scatter plots of the ERA5 10 m wind speed against the absolute error of sea ice speed from OSI-405-c_Merged, NSIDC_Pathfinder, and Ifremer_AMSR2. The figure shows a strong positive correlation between wind speed and the speed error of ice drift, as expected, which indicates the high reliability of the three sea ice drift products. Furthermore, it was found that, as wind speed increases, NSIDC_Pathfinder shows the largest error increase. The reason for this may be that NSIDC_Pathfinder employs NCEP/NCAR wind forecasts to refine satellite-derived sea ice motions, which means that its ice drift speed and speed errors are strongly related to wind speed.
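The relationship in Figure 12 can be checked numerically with a correlation test. The sketch below uses synthetic stand-in data, the slope and noise level are invented to mimic the qualitative behavior of the figure, not the paper's actual samples.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical matched samples: ERA5 10 m wind speed (m/s) at each drift
# vector, and the product's absolute speed error (km/d).
rng = np.random.default_rng(1)
wind = rng.uniform(0.0, 15.0, 500)
abs_err = np.clip(0.5 + 0.15 * wind + rng.normal(0.0, 0.4, wind.size), 0.0, None)

r, p = pearsonr(wind, abs_err)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")  # strong positive r, as in Figure 12
```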
According to research on the general user needs for satellite sea ice data, more than 91% of users indicated that satellite-derived sea ice drift information is useful for their research [41,42]. The main focuses of their research are model validation and data assimilation [41,42]. As model validation and simple data assimilation (e.g., nudging) generally require high accuracy of the sea ice drift information [28], OSI-405-c_Merged and Ifremer_AMSR2 are recommended for such applications because their overall accuracies are high. For advanced data assimilation techniques, which employ errors to adjust the weighting coefficients, all the products evaluated in this study could be used as long as the errors of the products are appropriately taken into account [28]; NSIDC_Pathfinder, with its large spatial coverage, is preferable in such cases. Regarding the requirements for satellite-derived sea ice drift products, the World Meteorological Organization states that the target accuracy is 1 km/d on a weekly basis and that the spatial resolution should be 5 km [43,44]. From the above results, the studied products currently achieve accuracies of 1 km/d on a weekly basis. However, there are still many ways to further improve the accuracy of sea ice motions, such as merging vectors retrieved with the CMCC algorithm using the 89 GHz channels of AMSR2. Moreover, there is still great demand for improving the spatial resolution of satellite-derived sea ice drift products. One possible solution is to develop more applicable methods for high-resolution SAR and optical datasets.

Conclusions
In this paper, eleven satellite-derived sea ice motion products, namely, four from OSI SAF, two from NSIDC, and five from Ifremer, have been evaluated using 647 buoys with high positioning accuracy in the Arctic from 2018 to 2020. The conclusions are as follows:
(1) Among the eleven products, NSIDC_Pathfinder, Ifremer_AMSR2, and OSI-405-c_Merged performed well. NSIDC_Pathfinder had the highest angle accuracy among all products, but its speed MAE was large. Compared with the other products, it had higher temporal (i.e., daily) and spatial (i.e., 25 km) resolution and wider spatial coverage. Ifremer_AMSR2 had the highest speed accuracy, but its angle accuracy and trajectory reconstruction results were relatively poor; the spatial resolution of this product is 31.25 km. OSI-405-c_Merged showed the best ability in trajectory reconstruction; its angle MAE is about 0.1° larger than that of NSIDC_Pathfinder, and its speed MAE is 0.2 km/d greater than that of Ifremer_AMSR2. The spatial resolution of this product is 62.5 km, the lowest of the three.
(2) In terms of seasonal and spatial variations of product accuracy, the accuracy in the freezing season was significantly better than in the melting season for most products derived from microwave sensors. However, the freezing season accuracy of the optical-sensor product (i.e., OSI-407-a) was slightly lower than its melting season accuracy. In addition, the speed MAEs in regions where ice moves faster (i.e., KBS and EG) were greater, and the angle MAEs of CBS and EG were higher, as ice movements are complex in these regions. Overall, most products performed worst in EG.
(3) Product accuracy can be affected by the data sources, retrieval algorithms, and time intervals. The accuracy achieved with different data sources, from best to worst, may be ordered as follows: AMSR2 > ASCAT > SSMIS_V > SSMIS_H. Merging single-sensor products may not improve accuracy. Moreover, CMCC is superior to MCC in speed and angle accuracy, especially in angle. Furthermore, products with longer temporal resolution are more accurate than those with shorter temporal resolution.
Overall, the results of this study indicate that all products are of practical value for model verification and data assimilation if uncertainties and errors are given in an appropriate way. In addition, from the analyses of how product accuracy is affected by season, region, data source, retrieval algorithm, and time interval, it was found that there are many possibilities for enhancing sea ice drift monitoring based on remotely sensed data. This paper provides some references for future improvements of these and similar products.
DRepMel—A Multi-Omics Melanoma Drug Repurposing Resource for Prioritizing Drug Combinations and Understanding Tumor Microenvironment
Although substantial progress has been made in treating patients with advanced melanoma with targeted and immuno-therapies, de novo and acquired resistance is commonplace. After treatment failure, therapeutic options are very limited and novel strategies are urgently needed. Combination therapies are often more effective than single agents and are now widely used in clinical practice. Thus, there is a strong need for a comprehensive computational resource to define rational combination therapies. We developed a Shiny app, DRepMel, to provide rational combination treatment predictions for melanoma patients from seventy-three thousand combinations, based on a multi-omics drug repurposing computational approach using whole exome sequencing and RNA-seq data in bulk samples from two independent patient cohorts. DRepMel provides robust predictions as a resource and also identifies potential treatment effects on the tumor microenvironment (TME) using single-cell RNA-seq data from melanoma patients. Availability: DRepMel is accessible online.

Introduction
Cutaneous melanoma is the deadliest form of skin cancer, with a tendency to aggressively metastasize to multiple organs [1]. Melanoma has long been a poster child for personalized medicine, with targeted therapies such as the BRAF inhibitors and the BRAF-MEK inhibitor combination being highly effective against the 50% of melanomas with activating BRAF mutations. Despite this, targeted therapies are lacking against NRAS mutant melanoma and the approximately 25% of melanomas that have no identified oncogenic driver mutations. Recently, immunotherapies such as anti-PD1 and anti-CTLA4 have shown great promise in improving patient outcomes. However, after treatment failure, limited treatment options are available. Drug combinations have been developed and approved for overcoming treatment resistance to targeted and immunotherapies, yet computational methods and resources for predicting drug combinations have been limited. Here, we developed three multi-omics approaches to predicting effective combination therapies using two independent cohorts of melanoma patient data, and the results are displayed through a user-friendly Shiny app, DRepMel.

Machine and deep learning approaches have been shown to be powerful for predicting anti-cancer combination therapies using cancer cell line data [2-4]. To assess potential treatment effects of targeted and immunotherapies, we used omics data from melanoma patients, which consist of both tumor and immune/stromal cells, instead of cancer cell line datasets. Furthermore, the targeted tumor immune microenvironment (TME), derived from scRNA-seq data of melanoma patient samples [5], was included in the app for understanding the potential combination treatment impact on the TME.

Methods
We developed an integrative approach for drug repurposing and predicting combination therapies for melanoma patients and applied it to two independent melanoma patient cohorts with matching Whole Exome Sequencing (WES) and RNA-seq data from the same patients: the TCGA (N = 459) and Moffitt Melanoma (N = 135) cohorts. The Moffitt Melanoma Cohort (N = 135) was described in our previous work. This study (MCC# 19147) was conducted in accordance with recognized ethical guidelines (e.g., Declaration of Helsinki, CIOMS, Belmont Report, U.S. Common Rule) and was approved by the Chesapeake Institutional Review Board (IRB).
A waiver of consent was granted by the Chesapeake IRB. We include additional information on the Whole Exome Sequencing (WES) and RNA-seq analyses here; summary information is included below.

WES and RNA-Seq Sequence Analyses
WES data were generated for tumors and matched normal samples, with a depth of coverage averaging around 100×. Sequence reads were aligned to the reference human genome (hs37d5) with the Burrows-Wheeler Aligner (BWA) [7], and insertion/deletion realignment and quality score recalibration were performed with the Genome Analysis ToolKit (GATK) [8]. Tumor-specific mutations were identified with Strelka [9] and MuTect [10] and were annotated to determine genic context (i.e., non-synonymous, missense, splicing) using ANNOVAR [11]. Additional contextual information was incorporated, including allele frequency in other studies such as 1000 Genomes and the NHLBI Exome Sequencing Project, in silico functional impact predictions, and observed impacts from databases such as ClinVar (http://www.ncbi.nlm.nih.gov/clinvar/) (accessed on 1 March 2016), the Collection Of Somatic Mutations In Cancer (COSMIC), and The Cancer Genome Atlas (TCGA). Mutation signatures (alterations across possible trinucleotide sequences) were counted and derived as described in Alexandrov et al. [12], as implemented by deconstructSigs [13]. WES quality control includes read metrics following each analysis step (fraction duplicate reads, fraction mapped reads), depth-of-coverage assessment across the targeted regions, and common genotype comparisons across samples to ensure proper sample matching. Mutations were counted as follows: observed in Strelka-specific OR (MuTect AND Strelka-sensitive), predicted to be protein-altering, and <1% frequency in 1000 Genomes.

RNA-seq data were also generated on the same tumor samples. Sequence reads were aligned to the human reference genome in a splice-aware fashion using TopHat2 [14], allowing for accurate alignment of sequences across introns. Aligned sequences were assigned to exons using the HTSeq package [15] to generate initial counts by region. Normalization, expression modeling, and difference testing were performed using DESeq [16]. RNA-seq quality control includes in-house scripts and RSeQC [17] to examine read count metrics, alignment fraction, chromosomal alignment counts, and expression distribution measures, plus principal components analysis and hierarchical clustering to ensure the sample data represent the experimental design grouping.

TCGA whole exome data were downloaded from the NIH TCGA website in March 2016 [18]. The MAF file was converted to VCF and then annotated as described for the TCC data. Mutations were counted as follows: predicted to be protein-altering and <1% frequency in 1000 Genomes. Level 3 RNA-seq data were used in this study. RNA-seq expression data were de-batched between the TCGA and Moffitt cohorts using the ComBat function in the sva package in R [19].

Doublet Combination Therapy Candidates
For treatment predictions, a total of 5894 treatments and their target genes from the Drug SIGnatures DataBase (DSigDB) [20], selleckchem.com (accessed on 1 March 2016), and commonly known immune checkpoint therapies were included as therapy candidates. Among these, 5845 drugs were from DSigDB, 38 HDAC inhibitors were from selleckchem.com, and 11 drugs were known immune checkpoint therapies. For initial screening, single therapy analyses were performed for each candidate therapy to identify plausible "seed" therapies to form the pool of doublet combination candidates.
A single therapy analysis consists of three parts and was summarized using Fisher's Product Method (FPM). To assess the potential efficacy of every single therapy, analyses via mutation, expression, and patients' overall survival (OS), used as a surrogate for clinical outcome, were performed. RNA-seq and mutation status of the target genes of a therapy were the primary independent variables in the respective Cox proportional hazards (PH) models, adjusting for age and for IPI/NIVO and BRAF treatment. The single therapy model to evaluate the association between mutation and OS is defined as

$S_{MUT}(t) = \beta_M x + \beta_a\,\mathrm{age} + \beta_B\,\mathrm{BRAF} + \beta_I\,\mathrm{IPI/NIVO}$,

where x is an indicator of mutations in the target genes of the candidate drug, BRAF is an indicator of BRAF inhibitor treatment, and IPI/NIVO is an indicator of any checkpoint inhibitor treatment. To evaluate the association between the gene expression data and survival, the model is

$S_{PC}(t) = \beta_E x_{PC} + \beta_a\,\mathrm{age} + \beta_B\,\mathrm{BRAF} + \beta_I\,\mathrm{IPI/NIVO}$,

where $x_{PC}$ is the first principal component (PC1) of the gene expression data of the target genes of the candidate drug. Since a drug often targets multiple genes, principal component analysis was used to summarize and reduce the dimensionality of the gene expression data for the genes targeted by a drug. PC1 explains the maximal amount of variance of the expression data from a drug set and was used in the survival and eQTL analyses. The eQTL analysis was performed using the Wilcoxon rank sum test on the PC1 of the expression values of the target genes and the mutation indicators in the target genes. When a mutation is detected in at least one of the target genes of a drug (set) for a patient, the mutation status for this patient is coded as 1 (in the binary 0/1 coding) for the eQTL analysis; when no mutation is found in the target genes of a drug (set), it is coded as 0.

FPM was first used to synthesize the results from the three analyses for each candidate drug within each cohort. Then, FPM was used to generate a summary of the results from the two cohorts. p-values associated with the chi-squared statistic from the FPM were used to prioritize the single treatments. A false discovery rate (FDR) was used to adjust for multiple comparisons. The treatments with FDR < 0.05 were selected as "seed" therapies to pair with each of the remaining treatments to form the doublet candidate pool. There are 37 therapies with FDR < 0.05: 26 FDA-approved kinase inhibitors, 7 immunotherapy drugs, and 4 HDAC drugs. We also included two clinically used treatments for melanoma patients, Panobinostat and Trametinib, as part of the 39 seed treatments used to formulate the doublet pool. Pairing each of the seed treatments with each of the remaining treatments from the 5848 candidates results in a total of 73,007 combinations in the doublet pool.

Drug Repurposing Models for Doublets
To assess the potential treatment effects of each doublet therapy candidate, the association between mutation, expression, and patients' overall survival, used as a surrogate for clinical outcome, was examined. RNA-seq and mutation status of the target genes within a therapy were the primary independent variables in the respective Cox PH models, adjusting for age, IPI/NIVO treatment, and BRAF treatment (Equations (1) and (2)). The treatment interaction was also included in the model.
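As a concrete illustration of the single-therapy survival screen and the FPM summary, here is a minimal sketch using the lifelines and SciPy libraries on synthetic data. All column names (os_months, event, mut, age, braf, ipi_nivo) and the placeholder p-values are assumptions for illustration, not the study's actual fields.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import combine_pvalues

# Toy cohort of 200 patients with hypothetical column names.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "os_months": rng.exponential(24, 200),
    "event": rng.integers(0, 2, 200),
    "mut": rng.integers(0, 2, 200),        # mutation in the drug's target genes
    "age": rng.normal(60, 10, 200),
    "braf": rng.integers(0, 2, 200),       # BRAF-inhibitor treatment indicator
    "ipi_nivo": rng.integers(0, 2, 200),   # checkpoint-inhibitor treatment indicator
})

# Cox PH fit of S_MUT(t); the p-value of the mutation term is the evidence used.
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
p_mut = cph.summary.loc["mut", "p"]

# Fisher's Product Method across the mutation, expression, and eQTL evidence
# (the latter two are placeholder values here).
stat, p_combined = combine_pvalues([p_mut, 0.20, 0.01], method="fisher")
```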
An expression quantitative trait loci (eQTL) analysis was performed to assess the potential transcriptional impact of the mutations using a Wilcoxon rank sum test (Equation (3)). Since the actual mechanism of treatment action, through DNA, RNA, or their interaction, is uncertain, three models/methods were formulated based on different assumptions. Method 1 (Equation (4)) combines all evidence from Equations (1)-(3). Method 2 evaluates the evidence from gene expression (Equation (5)), while Method 3 evaluates the most significant evidence, with the minimum p-value among the three sets of evidence (Equations (1)-(3)) within each cohort (Equation (6)). Details are described below. Fisher's Product Method was used to combine evidence from the two cohorts. Further filtering (p < 0.05) within each cohort was performed for Method 2 and summarized in the Shiny app DRepMel. Subset analyses were performed for the BRAF mutant, NRAS mutant, and Triple WT patient cohorts.

To evaluate the association between patients' overall survival and somatic mutation (MUT), gene expression using PC1, and the potential mutation impact on target genes, the following three equations were formulated. To evaluate patients' survival with mutations in the target genes of each doublet:

$S_{MUT}(t) = \beta_{M1}x_1 + \beta_{M2}x_2 + \beta_{M12}x_1x_2 + \beta_a\,\mathrm{age} + \beta_B\,\mathrm{BRAF} + \beta_I\,\mathrm{IPI/NIVO}$, (1)

where $x_1$ and $x_2$ are indicators of mutations in the target genes of drugs 1 and 2, respectively, the $x_1x_2$ term captures the treatment interaction, BRAF is an indicator of BRAF inhibitor treatment, and IPI/NIVO is an indicator of any checkpoint inhibitor treatment. To evaluate patients' survival with expression of the target genes of each doublet:

$S_{PC}(t) = \beta_{E1}x_1 + \beta_{E2}x_2 + \beta_{E12}x_1x_2 + \beta_a\,\mathrm{age} + \beta_B\,\mathrm{BRAF} + \beta_I\,\mathrm{IPI/NIVO}$, (2)

where $x_1$ and $x_2$ here are the first principal components of the gene expression data of the target genes of drugs 1 and 2, respectively. To assess the potential functional impact of mutations on gene expression, the eQTL analysis is performed with the Wilcoxon rank sum test (Equation (3)), using the PC1 of the expression values of the target genes and the mutation indicators in the target genes of both drugs.

To enhance the robustness of the inference, analyses were performed using two independent melanoma patient cohorts: the TCGA (N = 459) and Moffitt Melanoma (N = 135) cohorts. Fisher's Product Method was used to synthesize the results from each of the analyses above (Equations (1)-(3)) with the following notation: p(β) is the p-value of the coefficient β in Equation (1) or (2), or the p-value of the eQTL analysis (3); M = mutation model, E = expression model; "1" = drug 1, "2" = drug 2; "t" = TCGA cohort, "m" = Moffitt cohort. Since the actual mechanisms of treatment action through DNA, RNA, or their interaction are uncertain, three models were formulated based on different assumptions. Method 3 combines the minimum p-value of the tests between the two cohorts. For example, if the minimum p-value comes from the same model in both cohorts, say the $S_{MUT}(t)$ models in the TCGA and Moffitt cohorts, then the interaction term p-values are used:

$\chi^2_{2k} \sim -2\sum\big(\ln(P(\beta_{M3t})) + \ln(P(\beta_{M3m}))\big)$. (6)

Potential TME Targeted by the Predicted Doublet Therapies
Potential TME targeting by the predicted doublet therapies was inferred using the scRNA-seq data of 28,078 single cells from 43 patient samples [5]. All analyses and the Shiny app were performed and implemented using R.

Web Application and Results
The doublets predicted by Method 2 in version 1.0 of DRepMel are available for visualization using the Shiny application at http://drepmel.moffitt.org/, which will be maintained for at least 3 years (contact zachary.thompson@moffitt.org or ann.chen@moffitt.org with technical issues).
The R code defining the server-side logic of the Shiny application is available in Supplemental File S1, and the R code controlling the layout and appearance of the application is available in Supplemental File S2. The input for the app includes two drop-down menus of treatments and radio buttons to choose the (sub-)group of patients corresponding to the major melanoma genotypes (All, NRAS, BRAF, Triple WT). Two additional drop-down menus provide target genes to select within each treatment, for single-gene expression heatmaps to understand the potential treatment effect in the TME. The Shiny application includes tabs for the introduction and method, and the following results tabs:
• Tables of top doublet combinations summarize the overall results and those for each of the major melanoma genotype groups.
• TME: heatmaps and violin plots highlight the potential targeted cell populations for each therapy.
• The mutation and survival tab displays Kaplan-Meier plots of overall survival based on mutation status in the target genes of the selected doublets in each cohort.
• The PC1 and survival tab shows the tables of genes and PC1 loadings in the target gene sets of each treatment for each cohort, along with the KM plots of PC1 and overall survival for each treatment in both cohorts. The PC1 values are dichotomized at the median.
• The eQTL tab displays the box plots of gene expression in both cohorts by mutation status in the target genes. It also displays the summary statistics of the de-batched expression on a log scale.
The results from Methods 1, 2, and 3 are included in Supplemental Files S3-S5. The top combinations include plausible candidates. For the overall analyses, the top combinations include known effective treatments (anti-PD1), plausible ones (LAG3, nilotinib), and additional treatment combinations that could be further investigated (Supplemental File S4). Robust findings between the TCGA and Moffitt cohorts for patients with limited treatment options (NRAS or Triple WT patients) provide a short list of candidates for further investigation. The 52 predicted combinations for the NRAS subgroup contain some interesting candidates. The top candidate, combining LAG3 and clioquinol, shows consistent findings between the two patient cohorts (Figure 2). This combination, while unexpected, could offer some novel avenues for melanoma therapy. Clioquinol has effects on the proteasome as well as copper and zinc metabolism, and has the potential to alter transcriptional activity in both cancer cells and immune cells [21,22]. It is possible that these broadly targeted effects on the tumor transcriptional state could increase sensitivity to more broadly used immunotherapies, such as the anti-LAG3 antibody. Our group has already demonstrated that immunotherapies can be used in sequence with targeted therapies to deliver long-term anti-tumor effects in mouse melanoma models; in that instance, the effects are driven both by modulation of signaling in the tumor and by reprogramming of the immune microenvironment [23]. Rigorous pre-clinical evaluation of the drug combinations selected from tools such as DRepMel could lead to a robust pipeline of repurposed drug combinations for future clinical evaluation. The TME results indicate that the two therapies likely target different cell populations, lymphoid and myeloid, respectively, which provides insight into how the candidate combinations might work. Robust predictions are also provided for the BRAF subgroup of melanoma patients (Figure 3).
The tool DRepMel provides a useful computational resource with robust findings for hypothesis generation. It also yields insights on potential treatment impacts on the TME for further investigation.
Chip-scale broadband spectroscopic chemical sensing using an integrated supercontinuum source in a chalcogenide glass waveguide
QINGYANG DU, ZHENGQIAN LUO,* HUIKAI ZHONG, YIFEI ZHANG, YIZHONG HUANG, TUANJIE DU, WEI ZHANG, TIAN GU, AND JUEJUN HU
Department of Materials Science and Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Electronic Engineering, Xiamen University, Xiamen 361005, China
College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
Key Laboratory of Photoelectric Materials and Devices of Zhejiang Province, Ningbo University, Ningbo 315211, China
*Corresponding author: zqluo@xmu.edu.cn

INTRODUCTION
Infrared (IR) spectroscopy is often considered a gold standard in analytical chemistry given its ability to unequivocally identify chemical species via "fingerprinting" the molecular vibrational modes. Traditionally, IR spectroscopy relies on benchtop instruments only available in a dedicated laboratory setting. In recent years, integrated photonics has emerged as a promising solution to liberate the technology from this constraint by potentially enabling sensor integration on chip-scale platforms [1-8]. These early demonstrations make use of tunable lasers to perform wavelength interrogation and identify spectral signatures of target molecules. However, the use of tunable lasers, which are bulky instruments involving complex mechanical moving parts, is counterproductive to compact sensing system integration. Moreover, the laser tuning range is bound by the gain bandwidth of the lasing medium, which is often merely a small fraction of an octave. Using current or temperature ramping for laser wavelength tuning offers a viable option for miniaturized light sources [9], although the accessible spectral domain using these techniques is small. Consequently, such sensors are limited to the detection of a single species and can be prone to interferences.
In this paper, we report, to the best of our knowledge, the first demonstration of an on-chip spectroscopic chemical sensor with a monolithically integrated supercontinuum (SC) light source. Unlike traditional broadband blackbody sources used in benchtop IR spectrophotometers, waveguide SC sources feature the high spatial coherence essential for efficient light coupling and manipulation on a photonic chip. Compared to tunable lasers, SC offers superior bandwidth coverage: for instance, waveguide SC spanning three octaves has been experimentally realized [10]. The broadband nature of SC facilitates access to wavelengths difficult to cover using semiconductor lasers and thereby significantly expands the identifiable molecule repertoire of spectroscopic sensors. In our experiment we use chalcogenide glass (ChG) as the waveguide material for both SC generation and evanescent wave sensing. ChGs are known for their broadband infrared transparency, large Kerr nonlinearity, and low two-photon absorption (TPA), ideal characteristics for our application [11,12]. Indeed, ChG waveguides have separately been applied to broadband SC generation [13-17] and IR spectroscopic sensing [18-24]. Here we combine both functions for the first time in a single chip-scale platform, allowing the on-chip photonic sensor to interrogate a broad spectral region, from 1.38 to 2.05 μm, that is not accessible with a single tunable laser. In addition, unlike previous SC generation experiments in ChGs where bulky pulsed pump lasers were used, we employed a home-built, palm-sized femtosecond laser as the pump source. The laser uses a graphene saturable absorber in an all-fiber system to realize passive mode-locking, and the entire laser can be integrated in a small module a few centimeters in size [25]. Our work here therefore envisions a standalone, compact spectroscopic sensing system once coupled with the miniaturized chip-scale spectrometers we recently developed [26-28].

EXPERIMENTAL DESIGNS AND SETUP
A. Designs and Fabrication of GeSbSe Waveguides
Films of 400 nm thick Ge22Sb18Se60 (GeSbSe) were thermally evaporated onto 4" silicon wafers with 3 μm of thermal oxide as an undercladding, from GeSbSe glass powder (prepared by melt quenching in a quartz ampoule). The stoichiometry of the film was confirmed by wavelength-dispersive X-ray spectroscopy (JEOL JXA-8200 SuperProbe WDS) at 5 different locations on each sample to confirm its compositional uniformity. We chose this glass composition given its large optical nonlinearity (nonlinear index n2 = 5.1 × 10⁻¹⁸ m²/W) and low TPA (TPA coefficient β = 4.0 × 10⁻¹³ m/W), both measured using the Z-scan technique at 1550 nm wavelength. The GeSbSe glass therefore exhibits a nonlinear figure of merit (defined as n2/(βλ), where λ is the wavelength) of 8.3, over one order of magnitude larger than that of silicon at the same wavelength [29]. The refractive index dispersion of the glass film was characterized using Woollam V-VASE32 ellipsometry and is plotted in Fig. 1(a). The data were then used to compute the group velocity dispersion (GVD) of the fundamental quasi-TE mode in GeSbSe waveguides with varying widths [Fig. 1(b)]. As seen in
Fig. 1(b), as the waveguide width increases from 0.6 to 1.05 μm, the zero-dispersion wavelength progressively shifts towards longer wavelengths, from 1.35 to 1.68 μm. To efficiently excite SC in a waveguide, the pump wavelength should be located near the zero-dispersion wavelength. Therefore, the optimal GeSbSe waveguide dimensions are W = 0.95 μm × H = 0.4 μm, with a zero-dispersion wavelength at 1.56 μm, our pump central wavelength.

GeSbSe waveguides with varying widths were fabricated using our previously established protocols [30]. In the process, a 350-nm-thick ZEP resist layer was spun onto the substrate, followed by exposure on an Elionix ELS-F125 tool at a beam current of 10 nA. The resist pattern was then developed by immersion in ZED-N50 developer for 1 min. Reactive ion etching was performed in a PlasmaTherm etcher to transfer the resist pattern to the glass layer. The etching process used a gas mixture of CHF3 and CF4 at a 3:1 ratio and 5 mTorr total pressure. The incident radio frequency (RF) power was fixed at

B. Experimental Setup for SC Generation and Sensing
The waveguides were tested for SC generation using the setup schematically illustrated in Fig. 2(c). The pump source is a home-built, palm-sized femtosecond laser module [Fig. 2(d)] with a central wavelength of 1560 nm, a repetition rate of 8.1 MHz, and a pulse duration of 800 fs [25]. The laser is assembled on an all-fiber platform and passively mode-locked using a graphene saturable absorber synthesized in-house [31]. The femtosecond seed laser was then amplified by a homemade erbium-doped fiber amplifier (EDFA) to boost the average power from 0.2 mW to a maximum of 5.5 mW, producing a peak power of approximately 0.8 kW after amplification. The fibers used in our experiment can be easily spooled to a centimeter-scale radius with negligible bending loss. Therefore, the all-fiber construction of the laser and amplifier potentially allows the light source module to be further downscaled to an ultra-compact package a few centimeters in size. The TE-polarized amplified pulses were coupled into and out of the GeSbSe waveguide devices via tapered lensed fibers with a coupling loss of approximately 7 dB per facet. An optical spectrum analyzer (OSA, Yokogawa AQ6375B, covering the 1.2-2.4 μm wavelength range) was used to spectrally resolve the output light from the chip. By replacing the OSA with an on-chip spectrometer (for example, the digital Fourier transform spectrometer we recently developed [8]), we may realize a compact handheld sensing system.
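Since the pump must sit near the zero-dispersion wavelength (ZDW), it is worth noting how the ZDW falls out of an index curve: the dispersion parameter D is proportional to -λ·d²n/dλ², so the ZDW is where d²n/dλ² changes sign. The sketch below illustrates this on an invented index model (the coefficients are not the measured GeSbSe data, and the true waveguide GVD must be computed from the mode effective index, as done for Fig. 1(b), not from a bulk index alone).

```python
import numpy as np

# Toy index curve; coefficients chosen to put the ZDW near 1.56 um.
lam = np.linspace(1.0, 2.2, 1201)                 # wavelength in um
n = 2.60 + 0.080 / lam**2 - 0.0405 * lam**2       # illustrative model only
d2n = np.gradient(np.gradient(n, lam), lam)       # numerical d^2 n / d lam^2
crossing = np.where(np.diff(np.sign(d2n)) != 0)[0]
print("approximate ZDW [um]:", lam[crossing])     # ~1.56 um for this toy model
```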
EXPERIMENTAL RESULTS AND DISCUSSIONS
Next we investigated the influence of waveguide geometry, waveguide length, and pump power on the SC spectra to elucidate the SC generation mechanism and understand sensor device design trade-offs. Figure 3(a) presents the SC spectra in GeSbSe waveguides of different widths. All the waveguides have the same core thickness of 0.4 μm and a uniform length of 21 mm. SC generated by the waveguide with a 0.95 μm width, whose zero-dispersion point aligns with the pump wavelength, exhibits the maximum bandwidth, consistent with our GVD simulations. For waveguides with widths W = 0.6 μm and 0.8 μm, the pump wavelength lies far from their zero-dispersion wavelengths. In this regime, SC is formed through initial self-phase modulation, followed by self-steepening and other high-order nonlinear effects contributing to spectral broadening. In contrast, for waveguides with W = 0.95 μm and 1.05 μm, the pump wavelength is located near the zero-dispersion point. In this case, a broad SC spectrum results from soliton fission, self-frequency shift, and dispersive wave emission [13]. To further validate the SC generation mechanism, we compute the nonlinear length (L_NL = 1/(P0 γ), where P0 and γ denote the pump peak power and waveguide nonlinear parameter, respectively) to be 0.29 mm, roughly two orders of magnitude smaller than the waveguide length. Therefore, we conclude that the SC generation mechanism in our device is dominated by high-order soliton fission arising from various nonlinear optical effects.

Figure 3(b) plots the SC spectra in GeSbSe waveguides of different lengths with the optimal dimensions (W = 0.95 μm, H = 0.4 μm). As indicated in the figure, the SC bandwidth extends to over half an octave, albeit with decreased total output power, when the waveguide length increases to 21 mm. This power attenuation is attributed to the GeSbSe waveguide propagation loss, measured using the cut-back method to be ∼4 dB/cm. This trade-off between SC spectral coverage and power can be mitigated with reduced waveguide losses. SC spectra from the 21-mm-long waveguide (W = 0.95 μm, H = 0.4 μm) are shown in Fig. 3(c) for several pump power levels. Clearly, higher pump power produces SC with an increased bandwidth. The maximum SC spectral span we obtained in our experiment is 1380 to 2050 nm (gauged at 20 dB flatness), primarily limited by the optical power available from our compact pump source. If desired, higher pump power and hence even wider SC spectral coverage can be obtained by adding more amplification stages, albeit at the expense of the compactness of the system.
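A quick arithmetic check of the soliton-fission criterion is sketched below. The peak power and L_NL are the values quoted in the text; the nonlinear parameter γ is back-solved from them and is therefore an inferred, not quoted, number.

```python
# Soliton-fission regime check: L_NL = 1 / (gamma * P0) << L.
P0 = 800.0                      # peak pump power in W (~0.8 kW from the text)
L_NL = 0.29e-3                  # nonlinear length in m (from the text)
gamma = 1.0 / (L_NL * P0)       # implied nonlinear parameter, ~4.3 /(W*m)
L_wg = 21e-3                    # waveguide length in m
print(f"gamma ~ {gamma:.2f} /(W*m); L/L_NL ~ {L_wg / L_NL:.0f}")  # ~72 >> 1
```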
In the sensing experiment, the GeSbSe waveguide was immersed in carbon tetrachloride (CCl4) solutions containing varying concentrations of chloroform (CHCl3). The CCl4 solvent is optically transparent across the near-IR [19], whereas the C-H bond in chloroform leads to an overtone absorption peak centering at 1695 nm, a wavelength outside the standard telecommunication bands. Here we use the C-H overtone absorption to quantify the sensing performance of our device. SC spectra near the chloroform absorption peak, obtained with GeSbSe waveguides of different lengths or solutions of different concentrations, are presented in Figs. 4(a) and 4(b), respectively. The data were normalized to the background (collected in pure CCl4), and the raw spectra are furnished in the inset. Figure 4(c) plots the absorption at 1695 nm versus waveguide length, indicating that the classical Lambert's law is obeyed in the new SC-enabled sensing mechanism. The optical absorption coefficient α (in dB/cm) of chloroform at 1695 nm was also quantified using a benchtop UV-Vis spectrophotometer, which is used to project the absorption A (in decibels, dB) measured from the waveguide sensor [marked with a triangle in Fig. 4(a)] following A = ΓαL. Here Γ denotes the waveguide modal confinement factor in the solution, which is 6.8%, computed using a finite difference mode solver [shown in the Fig. 4(c) inset]. The agreement between the two techniques suggests that the waveguide sensor can be applied to quantitative analysis of absorption coefficients in chemical samples.
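To make the calibration relation A = ΓαL concrete, here is a minimal numeric sketch. Γ and L come from the text; the value of α is purely illustrative, not the measured chloroform coefficient.

```python
# Evanescent-wave absorbance following A = Gamma * alpha * L.
gamma_conf = 0.068          # modal confinement factor in the solution (6.8%)
L_cm = 2.1                  # waveguide length: 21 mm
alpha_db_per_cm = 5.0       # hypothetical absorption coefficient at 1695 nm

A_db = gamma_conf * alpha_db_per_cm * L_cm
print(f"expected on-chip absorbance: {A_db:.2f} dB")
# Inverting the relation turns a measured A into alpha for an unknown sample:
alpha_est = A_db / (gamma_conf * L_cm)
```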
CONCLUSION
In conclusion, we demonstrated in this work an on-chip spectroscopic sensor where a chalcogenide glass waveguide served as both the broadband SC light source and the evanescent sensing element. By incorporating highly nonlinear GeSbSe glass in a dispersion-engineered waveguide design, SC spanning over half an octave was achieved using a compact femtosecond pump laser. We validated the sensing performance of the device by quantifying the C-H bond overtone absorption of chloroform at 1695 nm wavelength. This prototype envisages a handheld spectroscopic sensing platform with broadband interrogation capability suitable for field-deployed applications.

Fig. 1. (a) Refractive index dispersion of the Ge22Sb18Se60 glass film measured using ellipsometry; the inset schematically depicts the waveguide structure. (b) Simulated GVD of GeSbSe waveguides with varying widths (W) and a fixed core thickness H = 400 nm.

Fig. 3. SC spectra in GeSbSe waveguides: (a) SC spectra from waveguides with different widths W; when W = 0.95 μm, the zero-dispersion point of the waveguide coincides with the pump wavelength; (b) SC generation in GeSbSe waveguides with the optimal geometry (W = 0.95 μm, H = 0.4 μm) and varying lengths; (c) SC spectra from a 21 mm long GeSbSe waveguide (W = 0.95 μm, H = 0.4 μm) at different pump power levels. The power quoted here represents the average optical power coupled into the waveguide.

Fig. 4. (a) SC spectra measured on GeSbSe waveguides of different lengths L when immersed in chloroform; the triangle marks the optical absorption at 1695 nm calibrated using a benchtop UV-Vis spectrometer for an equivalent waveguide path length L = 21 mm; (b) SC spectra taken on a 21 mm long GeSbSe waveguide immersed in CHCl3-CCl4 solutions of varying volume concentration ratios; (c) measured peak absorption at 1695 nm versus the GeSbSe waveguide length used in the experiment. The linear relation indicates that the classical Lambert's law is obeyed; the inset shows the mode profile simulated by the finite difference method.
Early Warning of Enterprise Financial Risk Based on Decision Tree Algorithm
To improve enterprise financial early warning, we propose an algorithm based on a decision tree. The improvement addresses the shortcomings of the classical algorithm and the traditional decision tree algorithm: in the ordinary PCA-based decision tree improvement, the data retained after dimensionality reduction are not sufficiently representative, so the accuracy of the algorithm improves only slightly over multiple runs. Based on the classical algorithm, attribute eigenvalues are extracted twice before classification and the amount of data to be classified is calculated; that is, the most important attributes of the original data are selected. After the subtree is established, dimension reduction and merging selection of the data are performed. The improved algorithm is verified using three datasets from the UCI database. The results show that the average accuracy over the three datasets is 94.6%, an improvement of 1.6% and 0.6% over the traditional classical algorithm and the ordinary PCA decision tree optimization algorithm, respectively. PCA-based decision tree algorithms can improve the accuracy of the results to some extent, which is of practical importance. In the future, the classical algorithm improved here for secondary modeling will be used to obtain a more efficient decision tree model. The decision tree algorithm has been shown to recognize early warnings of an enterprise's financial risks, which enhances the effectiveness of an enterprise's early financial warning.

Introduction
In the post-financial crisis period, enterprises are in an unpredictable external environment, and different forms of risk emerge one after another. The survival and development of enterprises are inseparable from the external environment, and the uncertainty of that environment has a non-negligible impact on their daily operation. Additionally, negligence or change in any link of an enterprise's daily operation, if not identified and controlled in time, is likely to lead to financial risks. With the increasing degree of economic globalization, the complex and changeable market environment exposes enterprises to more financial risks than in the past [1]. Problems in decision-making and control in enterprise operation and management, such as hard-to-recover accounts receivable, improper authority settings, and wrong financing or investment decisions, will lead to financial risks and huge economic losses if they cannot be identified and handled in time; the decision tree algorithm can help to identify such problems. Against the background of the contemporary market economy, any enterprise operates in an extremely complex environment and cannot obtain all the information needed for its growth, which determines that the enterprise will face certain financial risks. Recently, the outbreak of the financial crisis has shifted attention to the risk management of enterprises, especially the importance of financial risk management. At present, in the daily operation and management of enterprises, financial risk management is still ignored by most Chinese enterprises, and financial risk persists. For example, the setting of financial control rights is unreasonable, and the financial governance structure is imperfect [2].
The financial control mode lags behind, and the financial early warning analysis and control mechanisms are imperfect. The concept of financial risk management is backward, and awareness of risk value is insufficient. Internal audit exists in name only, and internal financial supervision is lacking. Recently, from the various financial failure cases of Chinese enterprises, people have deeply realized that further research on enterprise financial risk management has theoretical and practical significance that cannot be ignored. Information that provides managers at different levels with what they need for business decision-making and control is defined as an internal report; internal reports allow managers to adjust business management strategy in time and make corresponding decisions. Additionally, compared with external reports, internal reports not only provide pre-control and in-process control functions beyond after-the-fact reporting and presentation, but also attract increasing attention from enterprise management. However, compared with the development of financial accounting, the development of, and attention to, management accounting still lag behind. At the same time, the theory of management accounting is divorced from practice, and a complete theoretical framework that can guide and apply to management accounting practice has not been formed [3,4]. In particular, there is little research on the theory and system of internal reports, and there is no unified concept of the internal report. Currently, the few internal reports in enterprises are mostly prepared for the final completion of external reports and cannot give full play to the value of internal reports.

The main goal of this paper is to optimize the decision tree algorithm and apply it to enterprise cost control. Therefore, the basic technologies and theories underlying this research are data mining technology and the decision tree algorithm. Common data mining technology involves the following main steps in its implementation, from front to back: asking questions according to needs, making the necessary data preparations, preprocessing massive data, building the data mining model, and evaluating and explaining the model. The model system of the whole data mining process is shown in Figure 1.

Related Work
Kumar and Ragava Reddy sorted and analyzed the reasons for the operational failure of small and medium-sized enterprises in Finland, with reference to the evaluation method of Bank of America on the operation and financial status of small and medium-sized enterprises [5]. The research found that the lack of management ability of middle-level managers, the lack of information provided by the accounting system, and the attitude of enterprises towards employees have an important impact on the operational failure of enterprises. This research result provides a new direction for evaluating the financial and operating status of enterprises. Xu et al. used a variety of mixed financial risk early warning models, that is, different combinations of multivariable discriminant analysis, logistic regression, and BP neural network methods, to conduct an empirical analysis of sample data. Through this research, it was found that the prediction accuracy of the mixed early warning models was significantly higher than that of a single early warning model [6].
Clarin conducted in-depth research on the safety management problems and countermeasures for foreign students in colleges and universities, expounded the importance of early warning mechanisms, and established early warning mechanisms for emergencies, cultural conflicts, accidental injuries, and other problems endangering the personal safety of foreign students during their stay in China. An early warning mechanism can discover the causes of contradictions in advance, and managers can take measures in advance to reduce the impact scope of safety events and even avoid their occurrence, so as to achieve the transformation from remedying events after they happen to early warnings that prevent them [7]. When exploring the management strategy for foreign students in China in the new era, Hammed and Jumoke mentioned the need to establish a prevention mechanism for emergencies involving foreign students [8,9]. Hassan et al. conducted an in-depth study on the construction of an emergency early warning and prevention mechanism for foreign students in China and expounded the necessity of a safety early warning mechanism: in terms of the purpose and effect of safety event management, "planning ahead" is far better than "making up for the lost." Li et al. were the first to use regression analysis to study financial risk early warning; this research led to discriminant analysis being replaced by regression analysis, which became the mainstream method in the field of financial crisis early warning in the 1980s [10]. Zhang et al., referring to the evaluation method of Bank of America on the operation and financial status of small and medium-sized enterprises, sorted and analyzed the reasons for the operational failure of small and medium-sized enterprises in Finland. The research found that the lack of management ability of middle-level managers, the lack of information provided by the accounting system, and the attitude of enterprises towards employees have an important impact on the operational failure of enterprises. This research result provides a new direction for evaluating the financial and operating status of enterprises [11]. Bian and Wang empirically analyzed sample data using a variety of mixed financial risk early warning models, that is, different combinations of multivariate discriminant analysis, logistic regression, and neural network methods. The research found that the prediction accuracy of the mixed early warning models is significantly higher than that of a single early warning model [12]. When conducting univariate analysis of the financial status of sample enterprises, Oujdi et al. selected financial ratios including the asset-liability ratio, return on net assets, return on total assets, and current ratio. The research results show that the asset-liability ratio and current ratio have the lowest misjudgment rate on financial status [13,14]. Nguyen et al. studied a financial risk early warning model based on a fuzzy neural network, selected variables from the alternative financial indicators through a stepwise regression method, and finally settled on long-term liabilities, shareholders' equity, profit growth rate, current ratio, working capital to total assets, asset turnover, and return on assets as the final analysis variables. Based on the existing research, this paper develops a decision tree based algorithm.
The improvement targets the shortcomings of the classical algorithm and the traditional decision tree algorithm. Research has shown that there are many ways to improve attribute selection in the classical algorithm; therefore, this paper presents a decision tree optimization algorithm based on PCA. In the ordinary PCA-based decision tree improvement, the data retained after dimensionality reduction are not sufficiently representative, so the accuracy of the algorithm improves only slightly over multiple runs. Based on the classical algorithm, attribute eigenvalues are extracted twice before classification and the amount of data to be classified is calculated; that is, the most important attributes of the original data are selected. After the subtree is established, dimension reduction and merging selection of the data are performed. The improved algorithm is validated using three datasets from the UCI database.

Method
Research on a specific problem requires collecting and using data, and such data often contain many variables whose interrelationships are difficult to discover intuitively. However, a few factors often play the most important role. Therefore, we need to pay attention to these hidden connections by studying the internal structure of the correlation matrix or covariance matrix of the original variables, and identify a few linear combinations of the original variables that capture them. These linear combinations are called the principal components. The principal components obtained from the original variables generally satisfy the following relationships: (1) each principal component is a linear combination of the original variables; (2) the number of principal components is less than the number of original variables; (3) most of the information contained in the original variables is preserved in the principal components; and (4) the principal components are mutually independent. After the analysis, the principal components contain the basic characteristics of the original variables, so that when faced with a large amount of data, we can analyze a few components and then study the relationships among the original variables, and hence the internal rules of the object of study, at a deeper level.

The steps of principal component analysis are as follows. Let the object of study contain p indexes, expressed as $x_1, x_2, \ldots, x_p$, which form the p-dimensional random vector $X = (x_1, x_2, \ldots, x_p)$. Let the mean of the random vector X be μ and its covariance matrix be Σ [15,16]. Next, we perform a linear transformation of X; after this step, we obtain a new comprehensive variable Y. In other words, the new comprehensive variable can be described linearly by the original variables; that is, it satisfies

$Y_i = \mu_i' X = \mu_{i1}x_1 + \mu_{i2}x_2 + \cdots + \mu_{ip}x_p, \quad i = 1, 2, \ldots, p.$ (1)

Because we can apply such linear transformations to the original variables, the statistical characteristics of the comprehensive variable Y change with different linear transformations.
Therefore, to obtain better results, the variance of $Y_i = \mu_i' X$ should be kept as large as possible, and the $Y_i$ should be mutually independent. Note that, given any constant c,

$\mathrm{var}(c\,\mu_i' X) = c^2\,\mathrm{var}(\mu_i' X),$

so if $\mu_i$ is not restricted, $\mathrm{var}(Y_i)$ can be increased at will, which makes the problem meaningless [17]. The linear transformation therefore needs to be constrained by the following principles:
(1) $\mu_i'\mu_i = 1$ ($i = 1, 2, \ldots, p$);
(2) $Y_i$ and $Y_j$ are uncorrelated ($i \neq j$; $i, j = 1, 2, \ldots, p$);
(3) $Y_1$ is the maximum-variance term among the linear combinations of $x_1, x_2, \ldots, x_p$ satisfying principle (1); $Y_2$ is the largest-variance term among all linear combinations of $x_1, x_2, \ldots, x_p$ uncorrelated with $Y_1$; and so on, down to $Y_p$.
The $Y_1, Y_2, \ldots, Y_p$ determined under the three principles above are called the 1st, 2nd, ..., p-th principal components of the original variables, and the proportion of the total variance accounted for by each successive component decreases. In our research, we usually select only the first few components, to simplify the model.

In the traditional ID3 algorithm, the attributes of different individuals take different values, and the subsets constructed by the algorithm are the training set partitions based on these values. Therefore, the number of subsets finally constructed by the ID3 algorithm equals the number of distinct values of the individual attributes. On this basis, when building the decision tree, each subset corresponds to a branch of the tree, and the end points of these branches become leaf nodes, yielding the corresponding decision rules [18]. In this case, if an attribute takes too many values, it will directly increase the leaf nodes, decision paths, and decision rules of the tree; that is, it will increase the overall scale of the decision tree and may even lead to a multi-value bias in attribute selection, lowering the accuracy of the decision rules. The optimization of the ID3 algorithm proposed in this paper solves such problems. The specific optimization ideas are as follows: (1) select the training set and determine the classification attribute $A_k$; (2) establish a node N. The algorithm is initially based on the original ID3 decision tree algorithm. The main idea is as follows.

Step 1: select the dataset and first compress it using the PCA algorithm to pick out the key attributes. In this step, the data are compressed using PCA. The training data of the decision tree contain many variables, and the relationships among the variables involve many hidden dependencies, so we analyze and study the internal structure of the correlation matrix or the covariance matrix of the original data; for example, the coefficient matrix can be created as follows [19].
Step 2: initialize the data to be processed and construct a complete data matrix $X_{m \times n}$, where m is the number of records and n is the dimension of each data record.
Step 3: standardize the data so that each variable has mean 0 and standard deviation 1; that is, data with different dimensions are brought into the same matrix. The mathematical model for normalizing the data is

$x_{ij}' = \frac{x_{ij} - \bar{x}_j}{s_j},$

where $\bar{x}_j$ and $s_j$ are the mean and standard deviation of the j-th attribute.
Step 4: compute the covariance matrix of the standardized data.
The purpose of this step is to measure the relationship between every pair of variables, and the model is as follows. For an n-dimensional data matrix, the covariance of any two dimensions can be obtained, so the n-dimensional data yield an n × n covariance matrix [20]: cov(X)_{n×n} = (C_ij), where C_ij = cov(dim_i, dim_j). If X_i denotes the i-th attribute (column) of the matrix, this can be written as C_ij = cov(X_i, X_j) = Σ_k (x_{ki} - x̄_i)(x_{kj} - x̄_j)/(m - 1). Then we use linear combinations of the original variables to obtain comprehensive indicators, which are what we call the principal components.

Experimental Results and Discussion
To verify the optimization algorithm, data from the UCI machine learning database were selected for the experiments. The first selection is the wine dataset, which contains 178 samples and 13 attributes. The comparison between the original ID3 algorithm and the optimized algorithm is shown in Figure 2: the original ID3 algorithm produced five conflicts, while the improved algorithm produced only two, confirming that the accuracy of the optimized algorithm is higher than that of the original [21]. Additionally, experiments were performed on the adult and vehicle datasets and compared with the same plain PCA + ID3 combination; the results are shown in Figure 3. In this comparison, the proposed algorithm is more accurate than both the original ID3 algorithm and the plain PCA hybrid algorithm, so there is reason to believe the algorithm has improved. For clarity, the results are also shown in Figure 4. After double compression of the test samples, the accuracy of the optimized algorithm improved to some extent. On the wine dataset, its accuracy is 2.2% higher than that of the traditional ID3 algorithm and 1% higher than that of the plain PCA fusion algorithm; on the adult dataset, it is 1.1% higher than standard ID3 and 0.6% higher than the PCA fusion algorithm; and on the vehicle dataset, it is 1.4% higher than the traditional ID3 algorithm and 0.1% higher than the PCA fusion algorithm. Additionally, two further data mining algorithms, KNN and naïve Bayes, were run on the adult dataset for comparison. The accuracy of the KNN algorithm is 87.5% and that of the naïve Bayes algorithm is 93.75%, both lower than the accuracy of the algorithm presented in this paper. The algorithm described here has thus proven practical. The new algorithm, an optimization of ID3, has the following advantages. (1) The optimized algorithm effectively solves the multivalue bias of the traditional ID3 algorithm in selecting decision attributes: through principal component analysis, more representative decision attributes are selected, which avoids the multivalue bias arising from the information entropy calculation of decision attributes, reduces the number of decision attributes, and thereby reduces the overall time of decision tree modeling and improves its efficiency. (2) After a subtree is established, the PCA algorithm is used to compress again; merged branches are marked with nodes and then split continuously, which not only avoids the limitations of pre-pruning but also makes effective use of the entire dataset. To further verify the advantages of the optimized algorithm, this paper also draws on the dataset of an agricultural-materials enterprise used in our previous work.
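Before turning to that case study, the UCI comparison above can be approximated in a few lines of scikit-learn. This is only a hedged sketch: scikit-learn's DecisionTreeClassifier implements CART rather than ID3 (the entropy criterion keeps it close in spirit), the number of retained components is our guess, and the resulting accuracies will not reproduce the paper's figures.

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)       # 178 samples, 13 attributes

plain = DecisionTreeClassifier(criterion="entropy", random_state=0)
pca_tree = make_pipeline(StandardScaler(),            # Step 3: standardize
                         PCA(n_components=6),         # Step 1: PCA compression
                         DecisionTreeClassifier(criterion="entropy", random_state=0))

print("tree alone:", cross_val_score(plain, X, y, cv=5).mean())
print("PCA + tree:", cross_val_score(pca_tree, X, y, cv=5).mean())
```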
In the Windows 10 operating system, Python was used to re-implement the optimized algorithm, and the rewritten algorithm was run in simulation; that is, the cost control of the agricultural-materials enterprise was modeled a second time to obtain the new decision tree model shown in Figure 5. Additionally, according to the new decision tree, several rules related to the enterprise's cost-control decisions can be obtained. These rules differ from the rules obtained by the plain ID3 algorithm. The specific rules are as follows. (1) If main business cost control = excellent, then cost control = excellent. (2) If main business cost control = qualified and management cost control = excellent, then cost control = excellent. (3) If main business cost control = qualified and management cost control = qualified and sales cost control = unqualified, then cost control = unqualified. From these experimental results we find that, when the improved algorithm in this paper is used, only three cost items are retained by the PCA step; with the production cost included in the analysis, the overall tree view is more streamlined [22]. Moreover, according to the evaluation after decision tree modeling, the accuracy of decision tree modeling with the optimized algorithm is higher, up to 95.1%, and the modeling time is shorter, at 8.2 seconds. We also ran the data through the plain PCA fusion algorithm: its accuracy reached 94.2%, while its running time of 7.8 seconds is less than that of the optimized algorithm in this paper. Comparing the decision tree models built by the traditional ID3 algorithm and by the newly optimized algorithm, the differences between them are not small. In terms of modeling time, the newly optimized algorithm improves accuracy to a certain extent thanks to the compression and dimensionality reduction of the decision attributes, and its running time is also shorter than that of the original ID3 algorithm. The comparison of modeling times before and after optimization is shown in Figure 6: the optimized algorithm is clearly faster in decision tree modeling, with a modeling time 4.4 seconds shorter than the traditional ID3 algorithm. The modeling accuracies before and after optimization also differ; the comparison is shown in Figure 7. From Figure 7 it is easy to see that the optimized algorithm achieves higher accuracy in decision tree modeling than both the traditional ID3 algorithm and the plain PCA combined algorithm, which shows that the optimized algorithm is more accurate and suitable for decision tree modeling. In other words, comparing the ID3 algorithm with the optimized algorithm of this paper on modeling time, decision tree size, prediction rules, and accuracy yields the data shown in Figure 8. The optimized algorithm has clear advantages in modeling time, total number of nodes, number of leaf nodes, and number of decision rules, making it more practical for constructing an enterprise cost-control decision tree.
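The three extracted rules can be written directly as a small function. This is an illustrative sketch; the argument names and the fallthrough behaviour for uncovered cases are ours rather than the paper's.

```python
def cost_control(main_business, management, sales):
    """Apply the three cost-control rules derived from the new decision tree."""
    if main_business == "excellent":
        return "excellent"                                 # rule (1)
    if main_business == "qualified" and management == "excellent":
        return "excellent"                                 # rule (2)
    if (main_business == "qualified" and management == "qualified"
            and sales == "unqualified"):
        return "unqualified"                               # rule (3)
    return None  # the extracted rules do not cover the remaining cases

print(cost_control("qualified", "excellent", "qualified"))  # -> excellent
```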
In early warning, we should first make a preliminary analysis of the company's annual reports over recent years, judge which link of the early-warning chain the company's current financial situation occupies, and then determine the early-warning path and early-warning targets. Then, according to the early-warning link, we determine the index system, select and calculate the relevant financial indicators, fill in missing data, and standardize. Comprehensive indicators are calculated using factor analysis and related methods. Finally, several early-warning models are used to warn about the financial situation in the next period. In the traditional multiple regression approach to financial risk early warning, the dependent variable of the general multiple linear regression model is a continuous variable. When the dependent variable is binary, a classical regression model, if established, is likely no longer valid. Therefore, for data with a binary dependent variable, the logistic regression model should be used, and its functional form is as follows. Let π represent the probability that y = 1, recorded as P(y = 1) = π, so that the probability of y = 0 is P(y = 0) = 1 - π. Assume there are n explanatory variables x_1, x_2, ..., x_n, recorded as the vector X = (x_1, x_2, ..., x_n). Different from the linear model, the logistic model does not study the relationship between the value of the dependent variable and the explanatory variables, but the relationship between the probability π of the dependent variable and the explanatory variables [23]. In fact, the relationship between the probability π and the explanatory variables is not a simple linear relationship but an S-shaped curve. The logistic curve model is expressed as

π = exp(β_0 + β_1 x_1 + ... + β_n x_n) / (1 + exp(β_0 + β_1 x_1 + ... + β_n x_n)).

Applying the logit transformation to this formula gives

logit(y) = ln(π/(1 - π)) = β_0 + β_1 x_1 + ... + β_n x_n = Xβ. (12)

This is the logistic regression model, in which β_0, β_1, ..., β_n are the parameters to be estimated; they can be estimated by maximum likelihood and solved by Newton-Raphson iteration. For m samples, with P(y_i = 1) = π_i and P(y_i = 0) = 1 - π_i, where i = 1, 2, ..., m, the joint probability function of the y_i is

L = ∏_{i=1}^{m} π_i^{y_i} (1 - π_i)^{1 - y_i}.

The detection rate is defined as the probability that risk samples are found by the early-warning model; it focuses on whether risk can be found effectively. The accuracy rate is defined as the correctness of the early-warning discrimination once the early-warning results are out; it focuses on whether the results are accurate. The risk value of the training samples is calculated according to the early-warning model, and the early-warning performance of the model is verified; the results are summarized in Table 1. In terms of detection ability, a number of training samples were marked as risk in the original data, and a number were successfully predicted by the early-warning model; among them, some samples were misjudged as normal, and the detection rate of risk samples is as high as 76.1%. A number of samples were marked as normal in the original data, and of those successfully predicted, some were misjudged as risk samples; the detection rate of normal samples is 68.2%, nearly 70%, so the recognition rate of normal samples is slightly low. The average detection rate of the model is 72.1%.
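The fitting and evaluation pipeline described above can be sketched as follows, assuming a binary risk label and synthetic indicator data; the variable names and numbers are invented, and statsmodels' Logit uses a Newton-type maximum likelihood solver, matching the estimation approach in the text.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                    # n explanatory financial indicators
beta_true = np.array([1.0, -0.8, 0.5, 0.0])      # invented ground truth
p = 1 / (1 + np.exp(-(X @ beta_true - 0.3)))     # S-shaped curve pi(X)
y = rng.binomial(1, p)                           # 1 = risk sample, 0 = normal

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)   # maximum likelihood
pred = (model.predict(sm.add_constant(X)) > 0.5).astype(int)

detection_rate = pred[y == 1].mean()             # share of risk samples flagged
accuracy = (pred == y).mean()                    # overall discrimination accuracy
print(detection_rate, accuracy)
```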
The overall detection performance of the established logistic early-warning model is good, and it can basically be used for actual early warning. The probability of early warning is 72.5%, the probability of an accurate early warning when the sample is normal is 29.5%, and the probability of accurate early warning is 3.5%. The detection rate and accuracy of the logistic early-warning model are comparable. Taking the total early-warning effect as the arithmetic mean of the average detection rate and the average accuracy, the total early-warning effect of the logistic model is 72.2%.

Conclusion
This paper studies the specific application of the decision tree algorithm to enterprise cost control. Applying data mining technology first requires clarifying the data mining requirements; this is the prerequisite before any data mining task is established. Only by clarifying the requirements can we further determine what data to choose and what algorithm to use for mining, making the goal of the data mining effort more targeted. From the experimental results we find that, when the improved algorithm in this paper is used, only three cost items are retained by the PCA step; with the production cost included in the analysis, the overall tree view is more streamlined. Moreover, according to the evaluation after decision tree modeling is completed, the accuracy of decision tree modeling with the optimized algorithm is higher, up to 95.1%, and the modeling time is shorter, at 8.2 seconds. We also ran the data through the plain PCA fusion algorithm: its accuracy reaches 94.2%, while its running time of 7.8 seconds is less than that of the optimized algorithm in this paper.

Data Availability
The data used to support the findings of this study are included within the article.
2022-07-17T15:20:47.059Z
2022-07-14T00:00:00.000
{ "year": 2022, "sha1": "4a759ca4060b3e845811d7de8d03ff5420eaa92c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2022/9182099", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b7d5ad4f496a62b41b953e8d5cfb59c87208afa8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
201101666
pes2o/s2orc
v3-fos-license
Type X strains of Toxoplasma gondii are virulent for southern sea otters (Enhydra lutris nereis) and present in felids from nearby watersheds

Why some Toxoplasma gondii-infected southern sea otters (Enhydra lutris nereis) develop fatal toxoplasmosis while others have incidental or mild chronic infections has long puzzled the scientific community. We assessed robust datasets on T. gondii molecular characterization in relation to detailed necropsy and histopathology results to evaluate whether parasite genotype influences pathological outcomes in sea otters that stranded along the central California coast. Genotypes isolated from sea otters were also compared with T. gondii strains circulating in felids from nearby coastal regions to assess land-to-sea parasite transmission. The predominant T. gondii genotypes isolated from 135 necropsied sea otters were atypical Type X and Type X variants (79%), with the remainder (21%) belonging to Type II or Type II/X recombinants. All sea otters that died due to T. gondii as a primary cause of death were infected with Type X or X-variant T. gondii strains. The same atypical T. gondii strains were detected in sea otters with fatal toxoplasmosis and terrestrial felids from watersheds bordering the sea otter range. Our results confirm a land–sea connection for virulent T. gondii genotypes and highlight how faecal contamination can deliver lethal pathogens to coastal waters, leading to detrimental impacts on marine wildlife.

Introduction
A large proportion of wild southern sea otters (Enhydra lutris nereis) are infected with the protozoan parasite Toxoplasma gondii, with up to 70% of live-captured animals exposed in high-risk locations such as Monterey Bay, California [1]. Among sea otter carcasses examined by pathologists between 1998 and 2001, T. gondii was determined to be the primary cause of death for 17% of otters, and the parasite contributed to mortality for an additional 12% [2]. While the relative proportion of sea otter mortalities that are attributed to T. gondii varies annually, ongoing investigations suggest that T. gondii is still an important cause of southern sea otter morbidity and mortality [3,4]. Virtually all warm-blooded vertebrates are susceptible to T. gondii as intermediate hosts, including wildlife and humans [5]. However, only wild and domestic felids serve as definitive hosts, with sexual replication of T. gondii in the gut resulting in faecal shedding of hundreds of millions of environmentally resistant oocysts [6]. Parasite transmission can occur via ingestion of oocysts in contaminated food or water, or through consumption of tissue cysts in raw or undercooked meat. Sea otters do not typically prey on warm-blooded intermediate hosts of T. gondii (e.g. mammals and birds) and are likely infected via ingestion of oocysts that accumulate in coastal habitats receiving contaminated freshwater run-off [7]. Although most T. gondii infections in healthy people and animals are subclinical or manifest with mild flu-like symptoms, in sea otters, the parasite can cause mortality directly via development of meningoencephalitis. Sublethal infection may reduce fitness and enhance the risk of developing fatal disease following infection by other protozoa, such as Sarcocystis neurona [8,9]. In humans, factors proposed to contribute to a fatal outcome following infection with T. gondii include immune system dysfunction, infective stage (i.e. ingestion of either oocysts or tissue cysts) and parasite genotype [10].
However, associations between strain type, lesion patterns and clinical outcome have not been reported in wildlife [11]. To clarify T. gondii transmission pathways from felid hosts to marine mammals, several studies investigated the transport of T. gondii oocysts from felid faeces deposited on land to marine environments. These studies demonstrated that oocysts are likely to accumulate in habitats where sea otters live due to biophysical mechanisms that promote the concentration of oocysts in kelp forests, followed by acquisition of T. gondii by marine snails, an important sea otter prey item [12,13]. Far less well characterized is the pathophysiology of T. gondii infection following ingestion by sea otters, including potential strain-specific impacts on animal health and survival. The T. gondii genotypes previously isolated from infected southern sea otter carcasses, Type II and Type X (Haplogroup 12) [14,15], exist throughout North America, with Type II detected primarily in domestic animals and Type X in wildlife [16]. In California watersheds bordering the sea otter range, evidence supports separate, but overlapping domestic (Type II) and wild (Type X) transmission cycles [17,18]. Type X infection was more common in wild felids but occurred in 22% of domestic cats. However, to date, the distribution of T. gondii genotypes has not been fully investigated for California sea otters. The primary objectives of this research were to (i) determine if T. gondii-associated mortality is related to the parasite genotype infecting sea otters; (ii) investigate finer-scale associations between the isolated T. gondii genotype and observed lesion patterns (e.g. the severity of brain inflammation) in sea otters; and (iii) compare T. gondii genotypes infecting sea otters with those from nearby domestic and wild felids. The study included comprehensive investigation of T. gondii-associated lesion patterns, primary and contributing causes of death, and T. gondii genotype characterization for greater than 100 stranded southern sea otters that have been examined by pathologists over an 18-year period (1998-2015). The spatial relationship between T. gondii genotypes in sea otters and previously characterized terrestrial felids from nearby watersheds was evaluated to investigate specific geographical areas or felid populations associated with the most virulent strains in contaminated coastal habitats.

(b) Histopathology, immunohistochemistry and cause of death determination
Formalin-fixed tissues were trimmed and paraffin-embedded, and 5 µm thick sections were cut and stained with haematoxylin and eosin. Tissue sections were reviewed under a light microscope for abnormalities and evidence of T. gondii infection. Data collected to assess infection status and severity included the relative concentration (none, low, medium or high) and protozoal stages (e.g. tissue cysts or zoites) in the brain, myocardium and skeletal muscle on histopathology. Observed protozoa were identified using established morphological criteria, with immunohistochemistry performed to confirm parasite identity as needed [9]. In addition, the type (predominantly lymphoplasmacytic or mixed inflammatory infiltrate) and relative severity of the brain and myocardial inflammation (none, mild, moderate or severe) were assessed; lymphoplasmacytic inflammation typically dominates in tissues of T. gondii-infected southern sea otters [9]. Because of the high frequency of sublethal T.
gondii infection in southern sea otters [9], and because sublethal infections are often accompanied by chronic lymphoplasmacytic meningitis and perivascular cuffing in the meninges and brain parenchyma without significant parenchymal inflammation, T. gondii was considered a primary or contributing cause of death only when parasite-associated inflammation was moderate or severe in the brain parenchyma and/or myocardium, in addition to any observed meningeal or perivascular inflammatory infiltrate. Final ranking of T. gondii as a primary or contributing cause of sea otter death was based on the relative significance of all abnormalities identified through gross necropsy, histopathology (including the degree of T. gondii-associated inflammation and tissue damage in the brain, heart or multiple tissues) and additional diagnostic tests (e.g. immunohistochemistry). A primary cause of death and up to three contributing causes of death were possible for each animal. The primary cause of death was the most severe and immediately life-threatening process that was identified through extensive case review. Contributing cause(s) of death were additional independent processes that were considered moderate to severe at the time of death. Systematic tissue scoring on histopathology and cause of death determination was performed by a veterinary pathologist (M.M.) with no knowledge of the T. gondii genotype isolated from each enrolled sea otter.

(c) Isolation of Toxoplasma gondii via cell culture
Brain tissue collected aseptically during necropsy was processed for protozoal parasite isolation in cell culture as previously described [9]. Briefly, fresh sections (4-8 g) of sea otter brain were placed in antibiotic saline, homogenized, added to 10 ml trypsin-EDTA (0.25%) and incubated at 37°C for 1 h. Samples were centrifuged and a 1 ml tissue pellet added to MA-104 (monkey kidney) feeder layer cells and incubated for 2 h at 37°C and 5% CO2. After incubation, media and tissue were discarded and fresh Dulbecco's medium supplemented with 10% fetal bovine serum was added. Cultures were incubated at 37°C and observed daily for evidence of parasite growth.

(ii) Multi-locus polymerase chain reaction
A subset of T. gondii isolates (n = 29) were initially used to assess genetic variability. Extracted DNA was amplified via polymerase chain reaction (PCR) for 13 polymorphic loci including B1 [19], SAG1, 3′-SAG2, 5′-SAG2alt, SAG2, SAG3, BTUB, GRA6, C22-8, C29-2, L358, PK1 and Apico [20]. As these samples constituted DNA from parasite cultures with relatively high nucleic acid concentrations, single (instead of nested) PCR assays were performed using the internal primers for each locus as described by Su et al. [20] and Grigg & Boothroyd [19]. Thermocycler conditions and mastermix reagents were previously described [3] and included forward and reverse primer sets for each locus (electronic supplementary material, table S1). Based on initial results, six loci were selected for genotyping all remaining (n = 106) isolates: SAG1, GRA6, BTUB, L358, PK1 and B1 (electronic supplementary material, table S1). Non-selected loci were omitted due to the absence of observed variability, and inability to discriminate between Types X and II (electronic supplementary material, data S1). Neither T. gondii genotype I nor genotype III was detected during the initial T. gondii diversity assessment.
(iii) Sequence analysis: virtual restriction fragment length polymorphism and multi-locus sequence typing
Amplified PCR products were purified using the QIAquick Gel Extraction kit (Qiagen Inc., Chatsworth, CA, USA) following the manufacturer's instructions, and sequenced at the UC Davis core DNA Sequencing Facility. Forward and reverse DNA sequences were aligned using Geneious software (Biomatters, Auckland, New Zealand), ends were trimmed and the consensus sequences manually examined for mismatches or ambiguous base pairs. For each locus, contig sequences were aligned and compared with sequences from well-characterized strains of T. gondii: Type I (RH), Type II (ME49), Type III (CTG) and Type X (a previously described Type X-infected bobcat (number 4) identified by VanWormer et al. [17]). Two different classification systems were used to differentiate strain types. First, restriction enzymes were virtually applied to each contig sequence to identify SNPs that would produce distinct cleaving patterns [19,20]. Resulting cleaving patterns were compared with reference strains, and a restriction fragment length polymorphism (RFLP) genotype was assigned at each locus. The RFLP data from all loci were used to derive a ToxoDB genotype number (http://toxodb.org/toxo/) for each animal. In addition, a multi-locus sequence typing (MLST) approach was used to identify all additional SNPs (not included in the RFLP analysis) when compared with reference strains. Each sea otter isolate was thus provided with two strain classifications: RFLP data (Types II, X or Atypical mixed II/X alleles) were categorized into genotypes using the ToxoDB classification scheme (RFLP Genotype no. 1-231) and MLST strain types were determined based on SNP data. Unique MLST strain types were classified as variants of the two reference strains that were dominant in this population (Types II and X) or their mixtures (table 1; electronic supplementary material, data S2). As the molecular characterization relied on T. gondii isolates from cell culture, a single strain was obtained for each animal; infection with more than one T. gondii strain could be missed and thus, mixed infections are not addressed in this investigation.

Table 1. Genotypes of T. gondii isolates obtained from southern sea otters in California (1998-2015). Genotyping was performed using RFLP and classification into ToxoDB types, as well as MLST. MLST strains in italics were isolated from sea otters that died from T. gondii as a primary cause of death.

The prevalence of T. gondii isolate Types using the two genotyping classifications (RFLP/ToxoDB and MLST) was calculated for all sea otters for which genotyping was completed (n = 135). Genotype prevalence was also assessed in relation to each mortality outcome for sea otters that received detailed necropsy with histopathology (n = 116). Univariable and multivariable bias-reduced logistic regression models were used to investigate associations between (i) otters with T. gondii as the primary cause of death and isolated T. gondii genotype (RFLP Type X versus other genotypes); and (ii) T. gondii genotype and pathology variables (e.g. degree of inflammation in the brain and heart). Associations with seasonal, temporal (year of sampling) and demographic (e.g. age, sex) variables were also examined for each outcome.
Only RFLP genotype classifications were used in regression analyses, as power was not sufficient to evaluate MLST genotypes. Variables with p < 0.20 in univariable models (electronic supplementary material, tables S3 and S4) were evaluated in multivariable logistic regression models. A purposeful selection model-building strategy [21] was used and variables were retained in the model when p ≤ 0.05. Potential confounding variables were assessed in the multivariable models including age and sex, which were significantly associated with protozoal-associated mortalities in sea otters in previous studies [2,7]. Akaike's information criterion was used to select a parsimonious multivariable model for each outcome. Regression analyses were performed using the brglm package [22] in R v. 3.5.0 [23].

(f) Spatial analysis
Latitude and longitude coordinates were assigned to each sea otter based on the centre point of the ATOS (As-The-Otter-Swims) polygon where the carcass was collected. Following conversion to cartesian coordinates, geographical clustering of T. gondii genotypes in sea otters was assessed using a Bernoulli model elliptical scanning window with a medium non-compactness penalty in SaTScan v. 9.6 [24]. A maximum spatial cluster size of 50% of the population at risk was used, and overlapping clusters were not permitted. As previously sampled felids [17] were predominantly collected near Monterey Bay rather than along the entire sea otter range, felid genotypes were not included in the SaTScan analysis. Spatial relationships between sea otters infected with virulent genotypes of T. gondii and identical strains in felids were assessed after cluster analysis. Sea otter locations and significant geographical clusters of genotypes, felid locations (from [17]) and coastal watershed boundaries were mapped using QGIS v. 3.2.0 [25].

Results
Within the genotyped isolates (figure 1a), the most prevalent genotype was ToxoDB 5 (Type X: 76% of isolates), followed by ToxoDB 1 (Type II: 22% of isolates) and two mixed II/X genotypes (1% each), which were classified as ToxoDB 4 (MLST II/X B) or Unique (MLST II/X C). Within the 103 isolates classified as ToxoDB 5, six MLST types were obtained through identification of SNPs (table 1). The most prevalent MLST strain was Type X (n = 45). The second most prevalent strain (n = 31) was a closely related Type X variant distinguished by a single SNP at the B1 locus relative to the Type X reference strain (electronic supplementary material, table S5). Twenty-two isolates were classified as MLST X/II variant C, a genotype that was previously isolated from an aborted sea otter pup from central California [3]. Within the 30 isolates classified as ToxoDB 1 (Type II), four different MLST strains were identified: 26 were identical with the Type II reference strain; two strains (MLST type II/X A) had a mixed II/X genotype with the Type II sequence at all loci except at the B1 gene where the strains were identical with Type X; and one isolate each had a unique SNP that differentiated it from Type II at either the SAG1 (MLST II variant A) or PK1 (MLST II variant B) genes, respectively.

(c) Toxoplasma gondii genotype distribution among mortality classification groups
For 116 animals with available T. gondii genotype and detailed pathological data, similar genotype distributions of ToxoDB 1 and 5 were identified for T. gondii-infected sea otters where infection was not associated with death and those with toxoplasmosis as a contributing cause of death (figure 1b).
By contrast, 100% of 12 sea otters with toxoplasmosis as the primary cause of death were infected with ToxoDB 5 (Type X). Using MLST, we found 10 discrete strains in sea otters with incidental T. gondii infections (n = 83); six MLST strains in sea otters with T. gondii as a contributing cause of death (n = 21); and four MLST strains in sea otters with T. gondii as the primary cause of death (n = 12). The four MLST strains in this latter group were the Type X variant (42%), Type X (33%), the Type X/II variant (17%) described in the aborted sea otter pup [3] and an X/II mixed genotype (8%).

(d) Association between genotype and toxoplasmosis as a primary cause of death

(e) Genetic and spatial associations between T. gondii genotypes in sea otters and felids
To assess land-sea parasite transmission, T. gondii genotypes from sea otters were genetically and spatially compared with strains reported from terrestrial felids sampled along the central California coast during a similar time period (2006-2009) [17]. A significant geographical cluster of sea otters infected with the ToxoDB 5 (Type X) genotype was identified in the central portion of the sea otter range (p < 0.01; figure 2). No significant geographical clusters of the Type X variant or X/II variant C were detected. Genetic and spatial comparisons of T. gondii genotypes in sea otters and felids focused on watersheds bordering Monterey Bay in the northern portion of the sea otter range, the predominant felid sampling area in previous studies [17]. RFLP analysis demonstrated identical cleaving patterns among two sea otter strains (TgSoUS3587 and TgSoUS3950) and a feral domestic cat (Felis catus; FC 49) that exhibited an atypical II/X mixed genotype corresponding with MLST II/X A (table 2 and figure 3a). The Type X variant strain isolated from five (42%) sea otters that died from toxoplasmosis as a primary cause of death was identified in three felids: two domestic feral cats and a bobcat (Lynx rufus) (figure 3b, electronic supplementary material, table S5). Additional MLST typing on felid tissues at other loci was successful for T. gondii from one feral cat (FC 29), which had 100% sequence identity across all five loci (B1, SAG1, GRA6, PK1 and L358) with the MLST X variant genotype isolated from sea otters.

Discussion
The severity of disease following natural T. gondii infection varies in intermediate hosts, and linking virulence to parasite genotype is particularly challenging in wild animals where detailed necropsy and histopathology data for large samples of T. gondii-infected animals are rare [11]. Unique circumstances in coastal California enabled close surveillance of federally listed threatened southern sea otters, a population where 20-70% of animals are infected with T. gondii [1,26]. This study uniquely integrates high-resolution molecular characterization and detailed pathological findings to evaluate T. gondii genotype in relation to disease outcome. Our discovery of the same atypical T. gondii genotypes in domestic and wild felids, and in sea otters living just offshore that died from T. gondii encephalitis, underscores the detrimental outcome of terrestrially derived pathogens for sensitive marine species. While 11 different T. gondii strains from sea otters were characterized via MLST, only four were found in animals that died due to toxoplasmosis as a primary cause of death.
These MLST strains (Type X, X variants or mixed X/II strains) were all classified within the ToxoDB 5 (Type X) genotype (figure 1b). As our statistical power was limited due to the small sample size, we were not able to evaluate associations between MLST strains and toxoplasmosis as a primary cause of death. However, sea otters infected with the Type X genotype (Type X, X variants or mixed X/II strains) were significantly more likely to die of toxoplasmosis than those infected with non-Type X genotypes. The Type X genotype was recently grouped into haplotype 12 that has been proposed as a fourth clonal lineage in North America, occurring predominantly in wildlife (e.g. foxes, wild rodents, wolves and deer [27]) and occasionally humans [28]. This genotype was also detected in shellfish from nearshore waters in California where sea otters live [18,29]. The identification of strain-associated pathogenicity in wildlife populations is a fundamentally important finding that illustrates how genetic diversity of a single species impacts pathogen-host dynamics in nature. Laboratory studies and investigations of disease outbreaks identified linkages between T. gondii genotype and virulence in domestic animals and humans, respectively (reviewed by Robert-Gangneux et al. [10]). Exposure studies using laboratory mice have demonstrated that strains possessing predominantly Type I alleles exhibit higher virulence when compared with Types II and III [30]. For humans, disease outcome following T. gondii infection may be more complex, although some investigations have linked specific T. gondii genotypes with more severe disease. In a study focusing on immunocompromised humans, T. gondii genotype did not predict clinical outcome [31], with the authors concluding that immune status and host factors were more important predictors of disease severity. By contrast, Sibley & Boothroyd [30] reported that T. gondii infections of the Type I clonal lineage resulted in more virulent toxoplasmosis in diverse hosts, including human AIDS patients [30]. Severe toxoplasmosis and, occasionally, death were documented in immunocompetent adult humans infected with atypical T. gondii strains in South America [32]. Other reports have also noted associations between infection by atypical T. gondii genotypes and more severe illness, characterized by ocular disease [33], pneumonia [34], multi-visceral toxoplasmosis and occasionally death in immunocompetent adults and neonates [35]. In contrast with laboratory animals and humans, studies investigating the relationship between T. gondii genotype and disease outcome are scarce for wildlife populations. Gibson et al. [8] reported no statistical association between T. gondii genotype and parasite-induced pathological changes in several marine mammal species from the Pacific Northwest. In a study that included 39 sea otter isolates from California and Washington, Sundar et al. [11] described six T. gondii genotypes using RFLP and found diversity of parasite strains similar to the current investigation. However, in their study, T. gondii infection was considered an incidental finding for most otters, and a contributing cause of death for only two animals. Interestingly, this latter study demonstrated two mouse-virulent isolates that were derived from sea otters where T. gondii was an incidental finding [11]. Verma et al. [37] also described the virulence of T.
gondii isolates from northern sea otters in knock-out mice that died or became clinically ill, while all Swiss Webster mice survived. However, no data were available regarding observed lesions or pathological outcomes for the corresponding sea otter hosts. Our data illustrate connections between T. gondii genotypes infecting terrestrial and marine hosts. The X variant MLST strain was detected via sequence analysis at the B1 gene in two feral domestic cats (FC 29 and FC 30) and one bobcat (Bobcat 6) that were previously classified as Type X based on RFLP analysis [17]. In addition, Miller et al. [15] described the same SNP in two sea otters for which the B1 gene was sequenced. The data in the present study are the first to describe this strain in sea otters where T. gondii was implicated as the primary cause of death. The presence of T. gondii strains with an identical Type X variant SNP in both wild and domestic felids inhabiting coastal watersheds, and sea otters residing in adjacent nearshore marine habitat, is a strong indication that virulent strains are linked from source (felids) to host (sea otters) across the land-sea interface in California. While some oocysts may be carried long distances by ocean currents, biophysical studies suggest that oocysts from contaminated freshwater run-off can become preferentially concentrated in nearby coastal habitats [12]. Additionally, T. gondii infections and oocyst transport are associated with local landscape features including coastal development [1,38]. Therefore, infections in domestic and wild felids from watersheds bordering the sea otter range are relevant to T. gondii land-sea transmission and infections in marine mammals. Geographical clustering of T. gondii genotypes in previous studies of California terrestrial and marine hosts and similar clusters for sea otters in this study further supports local land-sea transmission [15,17]. Morro Bay has been previously identified as a high-risk region for T. gondii exposure and morbidity in sea otters [2,7,36], and Miller et al. [15] reported spatial clustering of the Type X (ToxoDB 5) genotype in sea otters near Morro Bay. Data from the current study support these findings, with a significant geographical cluster of the ToxoDB 5 genotype observed along the Big Sur coast and Morro Bay (figure 2). Limited terrestrial felid data in the southern portion of the sea otter range preclude precise assessment of potential land-sea connections in this region. Further studies on T. gondii oocyst genotypes shed by domestic and wild felids would provide additional insight on sources of sea otter infection. While Type X infections occur in both domestic and wild felids in watersheds bordering the sea otter range, genotype data are needed for the oocysts shed by these felids. In experimental studies, the prevalence of oocyst shedding varied with T. gondii strain. Greater levels of shedding were observed in wild felids exposed to atypical 'wild' strains and in domestic cats exposed to archetypal 'domestic' strains (e.g. Types I, II or III) [39,40], but only limited genotypes were tested. One of six domestic cats experimentally infected with an atypical strain shed similar numbers of oocysts (2 × 10^8) as cats infected with domestic strains [40]. To our knowledge, shedding of Type X oocysts by a domestic cat has only been reported for one clinically ill animal [41]. Field studies are therefore needed to clarify levels of shedding by domestic cats infected with Type X under natural conditions.
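To make the genotype-matching logic behind Table 2 (below) concrete, here is a minimal Python sketch: a sample's alleles at the six typed loci are compared against reference patterns, and any non-exact match is labelled atypical/mixed. The allele tables are simplified placeholders of our own, not the study's actual typing data.

```python
# Simplified reference allele patterns at the six selected loci (placeholders).
REFERENCE = {
    "Type I":  {"B1": "I",  "GRA6": "I",  "BTUB": "I",  "L358": "I",  "PK1": "I",  "SAG1": "I"},
    "Type II": {"B1": "II", "GRA6": "II", "BTUB": "II", "L358": "II", "PK1": "II", "SAG1": "II"},
    "Type X":  {"B1": "X",  "GRA6": "X",  "BTUB": "X",  "L358": "X",  "PK1": "X",  "SAG1": "X"},
}

def classify(sample):
    """Return the reference type whose allele pattern matches exactly;
    otherwise label the isolate as an atypical/mixed genotype."""
    for name, pattern in REFERENCE.items():
        if all(sample.get(locus) == allele for locus, allele in pattern.items()):
            return name
    return "atypical (mixed alleles)"

# e.g. a II/X mixed isolate: Type II at every locus except B1, as described
# for the MLST II/X A strains in the text.
isolate = {"B1": "X", "GRA6": "II", "BTUB": "II", "L358": "II", "PK1": "II", "SAG1": "II"}
print(classify(isolate))   # -> atypical (mixed alleles)
```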
Table 2. RFLP digestion patterns of T. gondii at six selected loci for reference strains, and four southern sea otter isolates that displayed atypical, mixed (II/X) genotypes. Of these, two sea otter isolates (3587-01 and 3950-03) shared identical RFLP and sequence data among three loci (B1, GRA6 and SAG1) with a feral domestic cat (FC 49, previously reported by VanWormer et al. [17]). Italicized text corresponds to the locus where the X allele was detected, with other loci consistent with the Type II genotype.
Reference strains (ToxoDB type; RFLP type; MLST strain; alleles at B1, GRA6, BTUB, L358, PK1, SAG1):
Type I (RH): 10; I; I; I; I; I; X/I; I; I
Type II (ME49): 1; II; II; II/III; II/X; II/X; II; II/X; II/III
Type III (CTG): 2; III; III; I/III; III; III; III; III; II/

Importantly, although Type X infections are more prevalent in wild felids in coastal California, 22% of domestic cats were infected with this genotype [17]. Population sizes of domestic cats in coastal California are much larger than those of wild felids [42]. Domestic cats also inhabit developed landscapes with impervious surfaces (e.g. concrete) that facilitate pathogen run-off and they have higher relative contributions to environmental oocyst load along many areas of the sea otter range [38]. As sea otters have evolved in close proximity to wild felids, it is interesting that a wild-associated T. gondii genotype (Type X) is linked to sea otter mortality, whereas the type more commonly associated with domestic cats (Type II) appears less virulent. It is possible that Type X has been more recently introduced to sea otters, or that the previously mentioned coastal habitat changes have increased the numbers of Type X oocysts to which otters are exposed. Taken collectively, these questions emphasize the importance of linked marine and terrestrial T. gondii studies to understand parasite transmission and virulence.

Conclusion
The current study provides the first robust analysis for comparing T. gondii isolate genotype with the severity of toxoplasmosis in wild animals. The association between infection with strains that possess predominantly Type X alleles and fatal T. gondii-mediated encephalitis in sea otters is highly suggestive that parasite strain is an important determinant of outcome following parasite exposure. Additional factors, including exposure to chemical pollutants, co-infection with other pathogens (e.g. S. neurona [8]), and immunosuppression, should also be considered for further insight on evaluating determinants of T. gondii pathology in wildlife [26]. The molecular identity of atypical T. gondii strains in sea otters that died due to toxoplasmosis and nearby feral domestic cats and a bobcat demonstrates how land-to-sea flow of lethal pathogens from domestic and wild animals can impact wildlife health in coastal ecosystems. In addition to detrimental health impacts in sea otters, T. gondii can infect and kill other marine wildlife, including critically endangered Hawaiian monk seals (Neomonachus schauinslandi) [43] and Maui's dolphins (Cephalorhynchus hectori mauii) [44]. As each of these species represent different hosts that inhabit unique marine niches, species- and region-specific studies will be required to elucidate T. gondii strain virulence and transmission patterns in these populations.

Data accessibility. The datasets supporting this article have been uploaded as part of the electronic supplementary material. Unique T. gondii strain sequences have been deposited in GenBank (MK988572 and MK988573).
These sequences will be made publicly available at the time of publication.
2019-08-21T13:01:49.208Z
2019-08-21T00:00:00.000
{ "year": 2019, "sha1": "9b59c84104895cdc4adbe0fd15b22cb14cee72d2", "oa_license": "CCBY", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rspb.2019.1334", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "024d829307eed32b8431730f2afe8d44f2e88fa2", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
49576369
pes2o/s2orc
v3-fos-license
Phasic and sustained interactions of multisensory interplay and temporal expectation

Every moment, organisms are confronted with complex streams of information which they use to generate a reliable mental model of the world. There is converging evidence for several optimization mechanisms instrumental in integrating (or segregating) incoming information; among them are multisensory interplay (MSI) and temporal expectation (TE). Both mechanisms can account for enhanced perceptual sensitivity and are well studied in isolation; how these two mechanisms interact is currently less well-known. Here, we tested in a series of four psychophysical experiments for TE effects in uni- and multisensory contexts with different levels of modality-related and spatial uncertainty. We found that TE enhanced perceptual sensitivity for the multisensory relative to the best unisensory condition (i.e. multisensory facilitation according to the max-criterion). In the latter, TE effects even vanished if stimulus-related spatial uncertainty was increased. Accordingly, computational modelling indicated that TE, modality-related and spatial uncertainty predict multisensory facilitation. Finally, the analysis of stimulus history revealed that matching expectation at trial n-1 selectively improves multisensory performance irrespective of stimulus-related uncertainty. Together, our results indicate that benefits of multisensory stimulation are enhanced by TE especially in noisy environments, which allows for more robust information extraction to boost performance on both short and sustained time ranges.

Supplement I: Beta estimates for all models
Supplementary Figure 1: Here we show all beta estimates for the d' models (top) and the RT models (bottom) reported in our publication. Beta estimates are shown for each factor (columns) as well as the intercept. Error bars depict standard errors. Significance values are indicated as follows: *** p<.001, ** p<.01, * p<.05, n.s. implies p>.1.

However, we did not expect that the interaction of TE and modality would result in a large effect size. A large interaction effect size would imply that almost all participants show multisensory enhancement and a larger enhancement in the expect condition across all experiments. Given the manipulation of experimental context (i.e. spatial and target uncertainty) across experiments, this is highly unlikely. For example, in the low target and low spatial uncertainty experiment (Exp. 1), task difficulty was minimised compared to all other experiments. Hence, multisensory stimulation might be less beneficial. In accord with this notion, 14 out of 30 participants consistently did not show multisensory facilitation there. Furthermore, 7 participants in this experiment showed multisensory enhancement only in the unexpected condition, i.e. the more difficult condition due to missing preparedness. In all 4 experiments combined, 24 of 120 participants showed no sign of multisensory facilitation and 16 participants showed no improvement due to TE (4 showed neither TE benefits nor multisensory enhancement). For a total of 15 out of 120 participants, multisensory enhancement was restricted to unexpected trials. However, the majority of 81 participants showed multisensory facilitation in the expected condition, especially in the high spatial uncertainty experiments, which gave rise to the reported interaction effect size.
Based on our previous report, we could have restricted our analysis to those experiments with a robust overall interaction (i.e. experiments with high spatial uncertainty), as this interaction is the main focus of our manuscript. A reduced ANOVA approach would indeed have resulted in a highly significant 'TE*modality' interaction with a pointedly larger effect size (i.e. η² = .12). However, we chose, in the interest of the readership, to report the full scope of our results, thereby reducing effect sizes. Finally, effect sizes might also have been affected by the ratio of early and late target trials in the 'expect early' and 'expect late' blocks (86%-14% and 43%-57%, respectively). The reason behind our decision to use more early trials in the expect-late blocks was to have a more robust estimate of unexpected early trials. Performance in the expected early condition is based on 144 trials and in the unexpected early condition on 72 trials. If we had fully reversed the probabilities, unexpected early performance would have been based on 24 trials, which significantly lowers the reliability of the performance measure. However, it most likely would have increased the overall TE effect (see Exp. 5 in [1]) and could also have affected the interaction term.

Supplement V: Late targets
As for early targets, the auditory modality was the preferred modality when targets were presented late (dʹ: 72 of 120, RT: 66 of 120). The graphs below illustrate that under high uncertainty, RTs increased in the best unisensory condition (Supplementary Figure 2). As in our previous report [1], the late target results support the notion that late targets are always expected and that TE effects are restricted to scenarios with temporal uncertainty. Yet audiovisual stimulation can still enhance target perception (as indicated by the d' effect). However, the pattern of results found for RTs rather indicates that differences were driven by decision processes. If, for example, perceptual latency were truly shortened, we would expect reaction times to decrease in the expected condition (as for AV trials, see Supplementary Figure 2, right). However, here RTs decreased in the unexpected condition, indicating that participants possibly lowered their response threshold in the unexpected unisensory condition. This condition was most likely perceived less often than the audiovisual condition; hence, participants guessed fast more often than slow.
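For reference, the dʹ sensitivity measure used throughout the supplement can be computed from hit and false-alarm counts as follows; the counts in this sketch are invented for illustration.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate), with Z the inverse normal CDF."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```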
2018-07-06T13:12:36.392Z
2018-07-05T00:00:00.000
{ "year": 2018, "sha1": "a727ac6185dc74707233b58353975ca5ade3805c", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-28495-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a727ac6185dc74707233b58353975ca5ade3805c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
228939087
pes2o/s2orc
v3-fos-license
The Effect of Enterprise Architecture Deployment Practices on Organizational Benefits: A Dynamic Capability Perspective

In recent years, the literature has emphasized theory building in the context of Enterprise Architecture (EA) research. Specifically, scholars tend to focus on EA-based capabilities that organize and deploy organization-specific resources to align strategic objectives with the technology's particular use. Despite the growth in EA studies, substantial gaps remain in the literature. The most substantial gaps are that the conceptualization of EA-based capabilities still lacks a firm base in theory and that there is limited empirical evidence on how EA-based capabilities drive business transformation and deliver benefits to the firm. Therefore, this study focuses on EA-based capabilities, using the dynamic capabilities view as a theoretical foundation, and develops and tests a new research model that explains how dynamic enterprise architecture capabilities lead to organizational benefits. The research model's hypotheses are tested using a dataset that contains responses from 299 CIOs, IT managers, and lead architects. Based on this study's outcomes, we contend that dynamic enterprise architecture capabilities positively enhance firms' process innovation and business-IT alignment. These mediating forces are both positively associated with organizational benefits. The firms' EA resources, and specifically EA deployment practices, are essential in cultivating dynamic enterprise architecture capabilities. This study advances our understanding of how to efficaciously delineate dynamic enterprise architecture capabilities in delivering benefits to the organization.

Introduction
Global technology trends like big data, the Internet of Things, and the rise of artificial intelligence are making firms' ability to change and adapt their organizations' structure, architecture, and people as crucial as their competitive strategy. These external forces and technology advances enact massive transformational changes within firms' business ecosystems, business units, and functions, and provide an opportunity to build capabilities in parallel with implementing a new strategic direction. Hence, firms need to accelerate the development of adaptive capabilities to ensure that the business can meet the needs of an increasingly complex environment. Moreover, firms need to embrace the business transformation journey to become top performers in the digital economy that are "future-ready" [1]. The increased frequency and speed of business-driven and information technology (IT)-driven change opportunities stress the importance of close alignment of IT resources, assets, and capabilities with business processes [2][3][4]. Enterprise Architecture (EA) can be considered a representation of a

Resources-Based Theories and EA Deployment Practices
Much of the current IT-business value scholarship bases its conceptualizations and arguments on the RBV [30,31]. The RBV is considered by many to be an influential theory in the IS field that explains how firms can realize and maintain a competitive edge by using the firms' IT and business resources [30,32]. This particular theory seems to be a fitting 'lens' when investigating firms that try to leverage EA resources and capabilities to enhance operational capabilities, innovation, and competitive performance. The extant IS and management literature make a clear distinction between the process of deploying resources and capability-building.
These are the two core elements of the RBV [31,33]. Amit and Schoemaker [34] define a firm's resources as stocks of assets owned or controlled by the firm. On the contrary, capabilities are considered the firm-specific capacity to deploy these particular resources, typically together with other organizational capabilities, to achieve specific goals [32,34]. Syntheses from IS and management studies that use the RBV show that firms using resources that are valuable, rare, inimitable, and non-substitutable (VRIN) ought to perform better in terms of competitive advantage. So, drawing from the RBV logic, this study argues that the specific deployment of a firm's EA resources and capabilities will result in operational and strategic benefits for the firm [31,35,36]. Hence, firms that do not actively invest in their (VRIN) EA resource portfolio risk the deprivation of those resources and of the capabilities that build on them [37]. These insights make it crucial for firms that want to excel with their EA practice to invest conscientiously. In that regard, the RBV acknowledges that investment in EA alone is not a sufficient condition for operational efficiencies and for enhancing the firm's competitive nature. It is thus more pertinent to identify the organizational capabilities that EA should target, enable, or strengthen [38,39]. It seems that the literature requires a new theoretical perspective from which pathways to operational benefits can be systematically examined. In recent years, the literature shows a wide variety of empirical work involving surveys, case research, and expert perspectives, demonstrating the reach and range of EA use in an organization's strategy implementation processes [6,40]. As such, the literature has emphasized deploying EA resources and assets so that they can be leveraged for business transformation [4,35]. EA resources primarily aim at developing and deploying EA artifacts. These particular artifacts can be considered unique documents that collectively describe various aspects of the entire EA within the organization [10,41]. This study conceives EA resources as the EA deployment practices (or routines) that enable firms' capacity to benefit from EA's use. Hence, these EA resources are an essential antecedent of EA-based capabilities, i.e., capabilities that are enabled by EA's use, so that they can actively share assets and reconfigure and renew organizational resources [35].

Dynamic Enterprise Architecture Capabilities
The DCV extends the RBV and attempts to explain the processes through which a firm evolves in changing environments and maintains a competitive edge [42,43]. Due to conditions of high environmental uncertainty, market volatility, and frequent change, scholars have raised questions regarding the rate at which traditional operational and existing 'resource-based' capabilities erode and cease to provide competitive gains [27]. Dynamic capabilities are generally considered as the ability of organizations to integrate, reconfigure, gain, and release resources to match and even create market change [28,42]. In the context of the strategic management and IS literature, some researchers have recently argued that EA-based capabilities are valuable to firms in the use, deployment, and diffusion of EA in decision-making processes and the organizational routines that drive IT and business capabilities [2,4,14]. Moreover, Shanks et al.
[4] argue that EA-based capabilities are essential to leveraging EA advisory services. Likewise, Hazen et al. [2], following the DCV, provide foundational work showing that EA-based capabilities can enhance organizational agility and indirectly enhance organizational performance. These outcomes are consistent with work by Foorthuis et al. [44] demonstrating the importance of intermediate EA-enabled outcomes that contribute to the achievement of particular business goals and objectives. Hence, recent EA scholarship argues that complementary EA capabilities enable firms to leverage their EA effectively [2,5], contribute to IT efficiency and IT flexibility [45], and can drive alignment between business and IT [12]. This study concurs with this EA-based capability view. It considers dynamic enterprise architecture capabilities a dynamic capability that helps organizations identify and implement new business and IT initiatives to ensure that the organization's assets and resources are current with the business's needs. Following the tenets of the DCV, this study argues that the extent to which EAs are leveraged successfully within the organization likely depends on the dynamic capabilities that collectively use the EA to sense environmental threats and business opportunities while simultaneously implementing new strategic directions. This study conceives dynamic enterprise architecture capabilities as the firm's ability to exploit its EA to share assets and to recompose and renew organizational resources under rapidly changing internal and external conditions, in order to accomplish strategic objectives and the desired end state [20]. Starting from the conceptualization of dynamic capabilities in [42], and subsequently building on previous EA-based capability studies, this study synthesizes the reach and range of dynamic enterprise architecture capabilities through three related but distinct capabilities: EA sensing capability, EA mobilizing capability, and EA transforming capability. An EA sensing capability highlights EA's role in firms' deliberate posture toward sensing and identifying new business opportunities or potential threats and developing a greater reactive and proactive strength in the business domain [4,15]. An EA mobilizing capability refers to an organization's capability to use EA in the process of evaluating, prioritizing, and selecting potential solutions and mobilizing firm resources in line with a potential solution [4,23,24]. Finally, an EA transforming capability can be considered the ability to use the EA to successfully reconfigure business processes and the technology landscape, to engage in resource recombination, and to adjust for and respond to unexpected changes [4,27,36,46].

Research Model and Hypotheses Development
Figure 1 shows the proposed research model, which contains four key constructs and the accompanying hypotheses. All the model's constructs and definitions are summarized in Table 1.
Table 1. Constructs and definitions, with key sources.
- Enterprise Architecture (EA) deployment practices: EA practices (or routines) that deliberately use EA principles and deployment approaches for the strategic usage of the firm's IS/IT (information technology) and business resources across the enterprise and foster the development of context-relevant enterprise architectural artifacts (e.g., models, business/IT mappings) across various architectural layers (e.g., business, information, and infrastructure layer). Key sources: [47-50].
- Dynamic enterprise architecture capabilities: A firm's ability to leverage its EA for asset sharing and to recompose and renew organizational resources, together with guidance to proactively address the rapidly changing internal and external business environment and achieve the organization's desirable state. Key source: own definition.
- Process innovation: The process view of the business with the application of innovation to the firm's business processes. Key sources: [51,52].
- Business-IT alignment: The extent to which the firm's business and IT plans, priorities, and strategies are aligned. Key sources: [53,54].
- Organizational benefits: The extent to which a firm has a higher competitive advantage than its competitor(s), increased value for customers, and the ability to detect and respond to opportunities and threats with ease, speed, and dexterity.
EA Deployment Practices and Dynamic Enterprise Architecture Capabilities
Building upon the RBV, it can be argued that competence in leveraging EA resources and deployment practices through an EA-based capability, together with other complementary firm resources, will likely result in competitive advantages [32,36]. Wade and Hulland [31] argued, however, that firms should actively invest in all the necessary resources so that they can cultivate potent EA resources. The literature contends that EA resources are an essential antecedent of EA-based capabilities and dynamic enterprise architecture capabilities [4,22], and that leveraging effective EA deployment practices, using EA methods and principles for the strategic usage of the firm's business and IS/IT resources [47,48], will enable EA-based capabilities [4,59,60]. EA deployment practices use context-relevant EA artifacts (e.g., state and data diagrams, business process models, roadmaps, and frameworks) to represent the current (and future, to-be) business and IT across various architectural layers (e.g., business, information, and infrastructure layer) [50]. These artifacts can enhance the relationships and communication between various business and IT stakeholders in the firm [10]. Using EA deployment practices, firms can facilitate processes to identify business problems, opportunities, and the various inefficiencies associated with current business processes and IT, and prioritize the various improvement opportunities [15,25]. EA deployment practices are thus crucial in the process of achieving intermediate, and also intangible, EA-driven results and business value [22,61]. Given the above, it is likely that EA deployment practices will help develop dynamic enterprise architecture capabilities, and the current study proposes the first hypothesis:

H1. EA deployment practices have a positive effect on the firm's dynamic enterprise architecture capabilities.

Dynamic Enterprise Architecture Capabilities and Business-IT Alignment
Both in the scientific literature and in practice, it is well known that achieving a state of business-IT alignment (this research measures alignment at a single point in time rather than as a process that evolves over time) is essential to leverage the maximum potential organizational benefits [62,63]. Business-IT alignment typically refers to the degree to which the IT strategies, objectives, and priorities appropriately and harmoniously support business strategies, objectives, and priorities [53,64,65]. As such, this research focuses on the antecedents and drivers of business-IT alignment and, hence, on the content of alignment, i.e., the match between realized business and IT strategy [63]. This alignment dimension is classified in the literature as 'intellectual alignment' [55,63,66]. Having a clear overview of the EA resources (e.g., EA content, EA standards, services, and other artifacts), and thus architectural transparency and a planned architectural design, can facilitate the process of integrating IT assets, resources, business processes, and services across various architectural layers [44,67]. EA can be leveraged to bridge the communication gap between business and IT stakeholders, facilitate cross-organizational dialogue and input [5], and improve business-IT alignment [10,26].
Following the above discussion, it can be argued that dynamic enterprise architecture capabilities allow firms to continuously sense ongoing change within the organization's internal and external business and IS/IT landscape and to respond adequately by mobilizing firm resources to support business processes and specific user needs and requirements using the EA [5]. This ability to cultivate the EA to successfully reconfigure the business and the IS/IT landscape, recombine resources, and adjust for and respond to unexpected changes is an essential driver of business-IT alignment. Hence, as firms proactively invest more in their dynamic enterprise architecture capabilities, one of the results is better business-IT alignment [25,26]. The following is therefore conjectured:

H2. Dynamic enterprise architecture capabilities have a positive effect on business-IT alignment.

Business-IT Alignment and Organizational Benefits
Achieving a state of alignment comes with many organizational benefits, including market growth, cost control, financial performance, increased customer satisfaction levels, and an augmented reputation [53,55,63,65,66]. Moreover, prior studies suggest that aligning the IT strategy with the business strategy will likely impact process agility and, thus, the ease and speed with which firms can reshape their business processes in turbulent business environments [55,68]. Although EA facilitates decision-making processes and brings business and IT investment decisions into closer alignment with the organizational goals [5], EA by itself does not create any value for the firm [2,4]. Instead, IT and business managers can drive enterprise-wide transformational changes and provide the firm with various opportunities to build and deploy capabilities while actively practicing its new strategic direction using the EA. Previous EA-based capabilities scholarship shows that many of EA's benefits are intangible and that value is achieved indirectly [4,44]. Therefore, this study theorizes that business-IT alignment mediates the relation between dynamic enterprise architecture capabilities and organizational benefits. Business-IT alignment is thus a crucial mediating force in this chain of EA value creation [38] and, therefore, a crucial antecedent of organizational benefits. The following hypothesis is defined:

H3. Business-IT alignment mediates the relationship between dynamic enterprise architecture capabilities and organizational benefits.

Dynamic Enterprise Architecture Capabilities and Process Innovation
Given the unique nature of dynamic enterprise architecture capabilities, in terms of their reach and range, and their hypothesized relationship with organizational benefits, it is likely that dynamic enterprise architecture capabilities have a positive impact not only on business-IT alignment but also on the process innovativeness of the firm. Various scholars argue that process innovation is the outcome of organizational learning and EA resource orchestration, whose roots can be traced back to dynamic capabilities [69-71]. There are many other forms of innovation (e.g., business model, leadership) that relate closely to process innovation [72]. This study focuses on process innovation (or 'process innovativeness'), as it has a central place in the extant literature, and this type of innovation requires firms to (re)deploy IS/IT and other technologies to enhance the efficiency of new product development and commercialization [29].
Teece et al. [73] concur with this view, noting that strong dynamic capabilities are required for fostering the organizational agility and associated requirements necessary for innovation. It is crucial for firms to systematically re-allocate resources and improve service and production operation methods through technological advancements to drive process innovation [29]. EA-based sensing capabilities facilitate firms' processes to spot, interpret, and pursue new IS/IT and technological innovations (e.g., cloud, IoT, big data analytics, AI, business intelligence) and business and process opportunities, or to identify potential threats [23,74]. These capabilities help firms align their EA services with key stakeholders' demands, wishes, and needs, thereby positioning EA deployment practices so that targeted efforts for process innovation can be initiated. Moreover, EA-based capabilities foster organizational learning by designing both the IT and business facets of the enterprise and its relationship with the business ecosystem, enabling innovation and the ability to adapt in conjunction with the business environment [25,75,76]. Once technological and business opportunities are first glimpsed, they must be addressed by maintaining and improving technological competences and complementary firm assets [21,36]. Hence, an EA mobilizing capability allows firms to consciously direct investments in the firm's adaptiveness; to use EA in evaluating, prioritizing, and selecting potential IT and business solutions; and to mobilize firm resources accordingly [4,23,24,76]. An EA mobilizing capability is thus an essential ingredient for firms that want to adapt their resources and assets to continually evolving customer wishes, demands, and market and technology trends, and to shape their environment through innovation [21]. Finally, an EA transforming capability allows firms to engage in the recombination and re-deployment of resources, change collaboration within the enterprise, and adjust for and respond to unexpected changes and the need for innovation [4,27,36,77]. Thus, dynamic enterprise architecture capabilities allow firms to use EA in decision-making processes and support competences to reposition IS/IT and other firm resources through process innovation [4,25,75,78]. Through these EA-based capabilities, firms can gain access to previously unavailable EA resources and sets of decision options, which can ultimately enhance their ability to innovate using EA and contribute to organizational benefits [27,28]. Hence, this study proposes the following hypothesis:

H4. Dynamic enterprise architecture capabilities have a positive effect on firms' level of process innovation.

Process Innovation and Organizational Benefits
The level of process innovation tends not to rely on individual resources. Instead, process innovation appears to be based on unique combinations of complementary resources and the cooperation of cohesive units governed by EA-based capabilities [5,9,45]. The literature claims that EA-based capabilities are a precursor of process innovation. Process innovation enabled by dynamic enterprise architecture capabilities, in turn, influences organizational benefits in several ways, as previously documented in the literature [29,51].
Specifically, process innovation leads to better financial and operational results (e.g., return on investment, market growth, cost reduction) [79], enhanced levels of productivity, process efficiency, and effectiveness [80], as well as enhanced levels of customers' perceived value [29]. Hence, the following hypothesis is defined:

H5. Process innovation will mediate the relationship between dynamic enterprise architecture capabilities and organizational benefits.

Research Method and Design
A survey was selected as the most appropriate method to test the research model and the proposed hypotheses. This study embraces a deductive approach that guides the study's design by focusing on predicting the key outcome construct. Claims are therefore grounded in the existing body of knowledge, and this study also focuses on the development of persuasive arguments to substantiate these claims. As such, the current study required an extensive cross-sectional sample to test the model and the associated hypotheses.

Data Collection and Sample Description
A questionnaire was developed that included 38 main questions covering all relevant constructs in the research model (Table 2). Thirteen individuals pretested the survey, including IS scholars, enterprise architects, IT/business practitioners, and Master's students, to enhance the survey items' content and face validity. The Netherlands is among the top European countries in delivering substantial economic impact through IT, and according to the Dutch Digitalization Strategy [81], Dutch firms are currently in a very proficient position to exploit the various economic and social opportunities created by digitalization. Students of an advanced Business and IT Master's course on Enterprise Architecture and organizational capabilities at a Dutch university were asked to participate in this survey. These Master's students are experienced business or IT managers, consultants, and senior practitioners, and can therefore represent the firm-wide view; they are most likely familiar with the strategic role of EA within the firm. They were asked to fill in the survey voluntarily from the perspective of the organization where they currently work. To ensure a collective and firm-wide view, the respondents were also invited to consult their managers (or any other colleague) if they were unsure about a particular survey item. Additionally, all students (n = 235) had to distribute the survey to two knowledgeable domain experts from other organizations (e.g., CIOs, IT managers, and lead enterprise architects) following a snowball method. The survey was put through a rigorous pretesting procedure, enhancing both reliability and validity. Construct definitions were provided to the respondents, and the survey followed a logical structure. Respondents were offered a research report with the most important outcomes of this study. Anonymity was guaranteed, and respondents could withdraw their scores if they wished. The final survey was used to collect data as part of a field study, and during the data collection various controls were built in so that every organization completed the survey only once.
The data collection phase ran from 17 October 2018 to 16 November 2018. A total of 669 unique respondents from different organizations participated in the survey. After removing cases with (partly) incomplete (n = 290) or unreliable values (n = 80), a total of 299 usable questionnaires remained for the analyses. The majority of respondents operate in the private sector (57%) or the public sector (36%); only a small percentage (7%) comes from other categories such as public-private partnerships and non-governmental organizations. The dataset can roughly be classified into small and medium-sized firms (41%, fewer than 1000 employees) and large enterprises (59%, more than 1000 employees). The majority of responses come from senior to executive managers, i.e., CEOs, CIOs, and IT management (approximately 70%). Approximately 60% of the respondents had more than 11 years of working experience, and 40% had more than 20 years. As this research targets single respondents, there is a possibility of bias; possible method effects were therefore proactively mitigated following the guidelines of Podsakoff et al. [82], and possible common method variance (CMV) was likewise accounted for per the suggestions of Podsakoff [82]. T-test group analyses comparing early (first two weeks) and late responses (final two weeks) for each research construct showed no significant differences, indicating that non-response bias is not present. Finally, Harman's single-factor test was performed using IBM SPSS Statistics v24 on the study constructs: all construct variables were loaded on a single factor in an Exploratory Factor Analysis (EFA). The outcomes showed that no single factor accounts for the majority of the variance, so the sample is not affected by CMV [82].
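The study ran these diagnostics in SPSS; purely as an illustration, a rough Python sketch of the same two checks might look as follows, where the file name, the wave column, and the item-name prefixes are hypothetical and the single-factor check is approximated with an unrotated PCA rather than SPSS's exact factor extraction:

```python
# Hedged sketch: non-response bias t-tests and a Harman-style single-factor
# check. All data-layout details below are assumptions for illustration.
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

df = pd.read_csv("survey_responses.csv")  # hypothetical item-level data
constructs = ["EA_deploy", "DEAC", "BITA", "proc_innov", "org_benefits"]

# Non-response bias: compare early vs. late respondents per construct;
# p-values above .05 suggest the two waves do not differ.
early, late = df[df["wave"] == "early"], df[df["wave"] == "late"]
for c in constructs:
    t, p = ttest_ind(early[c], late[c], equal_var=False)
    print(f"{c}: t = {t:.2f}, p = {p:.3f}")

# Harman-style check: if one unrotated component explains the majority of
# item variance, common method variance would be a concern.
items = df.filter(regex=r"^(DP|EAS|EAM|EAT|BA|PI|PA|CA|VL)\d")
z = (items - items.mean()) / items.std()
share = PCA(n_components=1).fit(z).explained_variance_ratio_[0]
print(f"First factor explains {share:.0%} of the variance")
```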
Constructs and Measurement Items
This study included existing validated measures where possible. EA deployment practices are a multidimensional construct that highlights the significance of using particular EA methods and deliberate deployment approaches to have projects comply with norms [49]. These practices also highlight the significance of EA principles for the strategic usage of the firm's IS/IT and business resources across the enterprise [47,48]. Moreover, EA deployment practices foster the development of context-relevant enterprise architectural artifacts (e.g., models, business/IT mappings) across various architectural layers (e.g., business, information, and infrastructure layer) [50]. Hence, this study proposes three measures (DP1-3) as a minimum baseline for EA deployment practices, based on past empirical and conceptual work. This study newly conceptualizes the EA-based capability as a dynamic capability and therefore adopts three elementary routines for dynamic capabilities: (1) EA sensing capability (EAS), (2) EA mobilizing capability (EAM), and (3) EA transforming capability (EAT) [20,22]. These underlying capabilities collectively form the dynamic enterprise architecture capabilities construct. A rigorous conceptual and theoretical development should precede the development of this construct; for this research, an incremental approach was employed in developing this new multi-item scale, following established guidelines [83]. First, measurement items were derived from, or implied by, extant conceptual and empirical work [4,23,24,27,74,77,84]. The first pool of scale items used a seven-point Likert-type scale ranging from "strongly disagree" to "strongly agree". Two sub-phases of scale development and purification followed, based on previously outlined recommendations [83]: item-sorting analysis and expert reviews. The item-to-construct sorting approach was employed to establish tentative item reliability and validity [85], while expert reviews evaluated all established scale items once more and offered improvement suggestions [86]. These two sub-phases enhanced the reliability and construct validity of dynamic enterprise architecture capabilities at the pre-testing stage; the results of these intensive phases are omitted for brevity. This second-order construct is modeled using the reflective-formative type II model [87,88]. Business-IT alignment (BA) is measured as a reflective first-order construct following [53,54], containing three existing items that capture the alignment between the firm's business and IT plans, priorities, and strategies. Process innovation (PI) is likewise modeled as a reflective first-order construct following [52]; relevant aspects include the extent to which firms have technological competitiveness and the novelty of the technology used in critical processes. This study follows Shanks et al. [4] regarding the multi-dimensional nature of organizational benefits and considers them the long-term firm benefits resulting from intermediate capabilities and IT-business benefits. Organizational benefits are conceptualized as a second-order factor using the reflective-formative type II model and contain three underlying first-order benefit factors: process agility (PA) [55], competitive advantage (CA) [56,57], and increased value (VL) [58]. Process agility concerns the firm's "ability to detect and respond to opportunities and threats with ease, speed, and dexterity" [55]; this study used five validated items from Tallon and Pinsonneault [55]. Competitive advantage has several dimensions, including a higher return on investment, better market share growth, and better profitability than competitors. Finally, increased value is measured through customer satisfaction, customer loyalty, and business brand and image compared to competitors. This study controlled for possible confounding relationships by adding widely used control variables in IS research, i.e., firm size and age. Table 2 shows all included measurement items, their respective item-to-construct loadings (λ), mean values (µ), and standard deviations (std.). Each item in the final survey was measured on a seven-point Likert scale (1: strongly disagree to 7: strongly agree).
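To make the reflective-formative (type II) modeling concrete, the sketch below illustrates the widely used two-stage approach for such higher-order constructs. It is a simplified stand-in for what SmartPLS estimates internally: the item prefixes, the unit-weighted stage-one proxies, and the equal stage-two weights are all illustrative assumptions.

```python
# Hedged two-stage sketch for a reflective-formative (type II) construct.
import numpy as np
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical item-level data

# Stage 1: scores for the three reflective first-order capabilities,
# proxied here by unit-weighted item means (PLS uses estimated weights).
stage1 = pd.DataFrame({
    "EAS": df.filter(regex=r"^EAS\d").mean(axis=1),  # EA sensing
    "EAM": df.filter(regex=r"^EAM\d").mean(axis=1),  # EA mobilizing
    "EAT": df.filter(regex=r"^EAT\d").mean(axis=1),  # EA transforming
})

# Stage 2: the latent scores act as formative indicators of the
# second-order construct (dynamic enterprise architecture capabilities).
weights = np.array([1 / 3, 1 / 3, 1 / 3])  # placeholder formative weights
df["DEAC"] = stage1.to_numpy() @ weights
print(df["DEAC"].describe())
```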
Model Estimations
This study relied on SmartPLS version 3.2.7 [89], a Structural Equation Modeling (SEM) application that uses Partial Least Squares (PLS) to estimate the research model and obtain parameter estimates. PLS-SEM is variance-based and is considered a better alternative to covariance-based modeling techniques (e.g., LISREL, AMOS) when the emphasis is on prediction, since PLS tries to maximize the explained variance in the dependent construct [90,91]. Additionally, PLS readily handles both reflective and formative measures [92,93], as is the case in this research, and provides researchers with a greater ability to predict and understand the role and formation of latent constructs and their relationships with one another [90-92]. The analyses use the path weighting scheme within SmartPLS. Additionally, a non-parametric bootstrapping procedure was employed to compute the significance of the regression coefficients running from the first-order constructs to the second-order construct; 5000 replications were used to obtain stable results and interpret their significance. Finally, the 299 organizations in the dataset far exceed all minimum requirements for the SEM analyses [93,94].

Evaluation of the Outer Model
The research model's constructs were subjected to internal consistency reliability, convergent validity, and discriminant validity tests through SmartPLS [89]. At the construct level, composite reliability (CR) and the classic Cronbach's alpha (CA) were assessed [90]; both should typically be above 0.70, as is the case in this research (see Table 3). Additionally, construct-to-item loadings were assessed. None of the included items had to be removed, as all loadings were above 0.70 [95] (one measurement item had a loading of 0.68, which is still within the acceptable range). Next, convergent and discriminant validity were assessed [90,93]. Convergent validity was established by examining whether the average variance extracted (AVE) is above the generally accepted lower limit of 0.50 [96]; all obtained AVE values exceed this threshold. In a subsequent step, discriminant validity was assessed through three related tests. First, the data were checked for high loadings on the hypothesized constructs and low cross-loadings (i.e., correlations) on other constructs [97]; all items load more strongly on their intended latent constructs than they correlate with other constructs. Second, the Fornell-Larcker criterion was assessed: PLS was used to verify that the square root of the AVE of each construct is larger than its cross-correlations (see the diagonal entries in bold in Table 3). All square-root values are higher than the construct's shared variances with the other constructs in the model [93]. Finally, additional evidence for discriminant validity was found using the relatively new heterotrait-monotrait (HTMT) metric [98]; all values fall well below the conservative 0.90 upper bound. As shown in Table 3, the first-order reflective measures are valid and reliable. All first-order constructs demonstrate a significant relationship with their respective higher-order construct (i.e., dynamic enterprise architecture capabilities and organizational benefits). Additionally, the assessed variance inflation factors (VIFs) are well below a conservative critical value of 3.5. Together with the significant relations between the first-order capabilities and the second-order constructs, these outcomes indicate that no multicollinearity exists within the model [99].
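For readers who want to recompute the outer-model quantities outside SmartPLS, the sketch below implements Cronbach's alpha, composite reliability, and AVE from their textbook definitions; the three loadings at the bottom are made-up placeholders rather than values from Table 3.

```python
# Hedged sketch of the reported reliability/validity metrics.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix for one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(lam: np.ndarray) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = lam.sum()
    return s**2 / (s**2 + (1 - lam**2).sum())

def ave(lam: np.ndarray) -> float:
    """Average variance extracted: mean of the squared loadings."""
    return (lam**2).mean()

lam = np.array([0.82, 0.86, 0.79])  # illustrative loadings only
print(composite_reliability(lam))   # should clear the 0.70 threshold
print(ave(lam))                     # should clear the 0.50 threshold
# Fornell-Larcker: sqrt(AVE) must exceed the construct's correlations
# with every other construct in the model.
print(np.sqrt(ave(lam)))
```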
Evaluation of the Inner Model and Hypotheses Testing
The literature proposes the Standardized Root Mean Square Residual (SRMR) as a model fit index; it quantifies the difference between the observed correlation matrix and the model's implied correlation matrix [90,100]. Hence, this study checks the model fit before assessing the structural model and the associated hypotheses, although current model fit indices should be interpreted with caution, as these metrics are not fully established PLS-SEM evaluation criteria. The obtained value of 0.060 is below the conservative 0.08 mark proposed by [100]. As a final step, the model's predictive relevance is calculated using the Q2 of the endogenous constructs (i.e., Stone-Geisser's test). All Q2 values are above the threshold value of zero, indicating the overall model's predictive relevance. The structural model and the hypothesized relationships among the model's constructs can now be assessed. The structural model explains 29% of the variance in organizational benefits (R2 = 0.29) after removing all non-significant relationships from the model, which is considered a moderate effect [91]. Additionally, dynamic enterprise architecture capabilities explain 22% of the variance in business-IT alignment (R2 = 0.22) and approximately 13% of the variance in process innovation (R2 = 0.13). Finally, EA deployment practices explain 45% of the variance in dynamic enterprise architecture capabilities (R2 = 0.45). Overall, these coefficients of determination support the research model's explanatory power, alongside the model fit indices and the obtained significant path coefficients (p < 0.0001). Table 4 summarizes the structural model assessment findings and additionally shows the estimated effect sizes (f2), with which the specific contribution of a particular exogenous construct to an endogenous latent construct's R2 can be determined, and the confidence intervals (lower bound, 0.5%; upper bound, 99.5%) of the structural model analyses.

Mediation Analyses
This study followed guidelines by [90,101,102] for multiple mediation analysis to address the hypothesized mediation effects within the research model. First, the direct effect of dynamic enterprise architecture capabilities on organizational benefits is positive and significant (β = 0.31, t = 5.398, p ≤ 0.0001), fulfilling the first mediation condition suggested by Kenny [101]. Next, the significance of the indirect effects (i.e., mediating paths) was established integrally (i.e., with simultaneous consideration of all mediating constructs) through a bootstrapping approach using a non-parametric resampling procedure [90,102]. With the mediators included, the direct path from dynamic enterprise architecture capabilities (DEAC) to organizational benefits (OB) became non-significant (β = 0.06, t = 1.112, p = 0.26). The specific indirect effects (DEAC → BA → OB and DEAC → PI → OB; see Table 4) should be interpreted as the indirect effect of DEAC on OB through a given mediating construct (i.e., BA or PI) while controlling for the other mediating construct. Additionally, the direct effect of EA deployment practices on BA and PI was tested; the bootstrapping results were negative. This outcome implies that dynamic enterprise architecture capabilities are indeed the key enabler of business-IT alignment and process innovation, and that they fully mediate the effect of deployment practices.
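As an illustration of the percentile-bootstrap logic behind such indirect-effect tests, the sketch below resamples respondents and recomputes the two products of path coefficients. It is a simplified stand-in, not the SmartPLS procedure: construct scores are assumed to be available as plain arrays, and ordinary least squares replaces the PLS path estimation.

```python
# Hedged bootstrap sketch for DEAC -> BA -> OB and DEAC -> PI -> OB.
import numpy as np

rng = np.random.default_rng(42)

def indirect_effects(deac, ba, pi, ob):
    """a-paths (DEAC -> mediators) times b-paths (mediators -> OB)."""
    a_ba = np.polyfit(deac, ba, 1)[0]
    a_pi = np.polyfit(deac, pi, 1)[0]
    X = np.column_stack([np.ones_like(ob), ba, pi, deac])
    b = np.linalg.lstsq(X, ob, rcond=None)[0]  # [const, b_ba, b_pi, direct]
    return a_ba * b[1], a_pi * b[2]

def bootstrap_ci(deac, ba, pi, ob, reps=5000, alpha=0.01):
    """Percentile CIs; an interval excluding zero marks a significant path."""
    n, estimates = len(ob), []
    for _ in range(reps):
        i = rng.integers(0, n, n)  # resample respondents with replacement
        estimates.append(indirect_effects(deac[i], ba[i], pi[i], ob[i]))
    lo, hi = np.percentile(
        estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi
```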
This study concludes that full mediation characterizes the current structural model. Therefore, support is found for all five hypotheses, while all included control variables showed non-significant (n.s.) effects (see Table 4).

Theoretical Contributions
Motivated by the call to provide empirical evidence on how EA-based capabilities drive business transformation and deliver benefits, this study shows how dynamic enterprise architecture capabilities benefit the firm, using data from 299 Dutch-speaking firms. In doing so, it makes several substantial contributions to the IS literature. First, the outcomes extend current knowledge of which organizational benefits can be achieved with EA resources, practices, and capabilities. Using this work and the gleaned insights, scholars can conduct more foundational analyses of EA's use and deployment in organizations; more specifically, they can now systematically link a firm's EA deployment efforts to dynamic capabilities, use them to exploit organizational resources efficiently, and explain how the firm's innovativeness, business-IT alignment, and organizational benefits can be achieved [103,104]. Second, these outcomes extend previous empirical studies that focus on project contributions from effective EA deployment (see, for instance, [4]). Third, this study constructed and validated a comprehensive EA-based capability and treated it as a dynamic capability; with the 16 measurement items across three dimensions (i.e., EA sensing, mobilizing, and transforming capability), it helps researchers conduct more systematic analyses of an organization's EA-based capabilities. Fourth, this study empirically showed that dynamic enterprise architecture capabilities, enabled by EA deployment practices, are crucial to achieving high levels of business-IT alignment and process innovation, and that the latter two fully mediate the effect of dynamic enterprise architecture capabilities on organizational benefits. In doing so, this study expands upon previous EA-based capability and IT-enabled capability studies [2,4,57]. Specifically, the identified mechanisms, and thus the mediating forces, through which benefits are achieved have theoretical relevance, since a substantial body of scholarship works under the assumption that the mere development of EAs with associated artifacts is a sufficient condition to enable business transformation and attain organizational benefits [6,10]. This study shows that organizational benefits resulting from EA-based capabilities are achieved through intermediate capabilities and IT-business benefits. These findings might explain why firms still encounter organizational and externally imposed obstacles in realizing EA's intended business outcomes [14,105]. The unfolded indirect effect of dynamic enterprise architecture capabilities on organizational benefits is also consistent with previous work on dynamic capabilities and their indirect effect on firm performance (see, for example, [84,106]). This work could guide new areas of IS and EA research focusing on dynamic enterprise architecture capabilities and their contribution to organizational value and firm innovativeness.

Practical Contributions
This study provides business and IT managers with a potent source of value.
The literature has paid considerable attention to the key role of EA artifacts and frameworks as sufficient conditions to enable business transformation and attain organizational benefits. This work, however, building upon empirical evidence, emphasizes a broader dynamic capabilities perspective on EA practice deployment in firms. Firms should focus on dynamic enterprise architecture capabilities as an effective mechanism for promoting business-IT alignment, thereby providing a better understanding of business processes and IS/IT, their interdependencies, and possible synergies. Dynamic enterprise architecture capabilities help cultivate the EA to successfully reconfigure the business and the IS/IT landscape, recombine resources, and adjust for and respond to unexpected changes, and can thus be considered an essential driver of business-IT alignment. Another important managerial implication of this work is that firms can enable process innovation by deploying unique combinations of complementary resources and the cooperation of cohesive units governed by dynamic enterprise architecture capabilities. Hence, dynamic enterprise architecture capabilities need to be positioned within the firm to enable both alignment and process innovation, thereby using EA-based capabilities to their full potential. This study developed a comprehensive survey that can be used at the item level as a theory-grounded diagnostic and (self-)assessment tool, so that managers can open up the value-creating black box of EA and justify the investments made in the EA practice and 'EA as a strategy' [3]. We argue that decision-makers should accept that EA investments do not yield direct performance benefits instantly; instead, this research unfolds the different, related value paths through which operational benefits and competitive firm performance gains can be achieved. Using dynamic enterprise architecture capabilities, business and IT managers can leverage previously unavailable EA assets, resources, and sets of decision options, and use them to enhance their firms' process innovativeness, orchestrate business processes with technologies, and strengthen their benefits and competitive edge.

Limitations and Future Work
Several study limitations guide future work, despite the developed model's attractiveness and the assessments using reliable cross-sectional data. First, this study used self-reported data to test the hypothesized relationships in the research model. In doing so, it uses a similar approach to the studies of [2,4,14,45], as objective measures are difficult to obtain. Although considerable time and effort were spent accounting for possible measurement errors and bias, CMV may still be a concern, as both the dependent and the focal explanatory variables are perceptual measures derived from the same respondent (i.e., a single informant). Including multiple respondents from a single organization could further strengthen inter-rater validity and improve internal validity. This study also did not triangulate the self-reported measures with, for example, potentially available archival data from public sources; including such additional data (e.g., financial measures) could further validate the empirical outcomes, as perceptual data are strongly correlated with objective measures [4,63]. This research suggests several avenues for further work.
First, it would be valuable to examine the possible conditioning role of environmental turbulence, as previous studies have demonstrated its impact on organizational benefits [55,57,74,107]. Second, this study concurs with Shanks et al. [4] that longitudinal research could lead to an enhanced understanding of dynamic enterprise architecture capabilities and the process of obtaining organizational benefits.
A Nonlinear Fuzzy Controller Design Using Lyapunov Functions for Intelligent Greenhouse Management in Agriculture

The importance of agronomists in the large-scale production of food crops under considerate environmental weather conditions cannot be overemphasized. However, emerging global warming is a threat to food security due to its effect on soil depletion and ecosystem degradation. In this work, the proposed intelligent design is intended to observe, model, and simulate the activity of a greenhouse control system for managing farm crop growth under the salient environmental parameters that affect it. Characteristically, temperature and humidity are the major factors that determine crop yield in a greenhouse, but a dry-air environment or high air humidity at temperatures beyond 30°C-35°C will affect crop growth and productivity. A Mamdani-type fuzzy logic controller with non-linear consequents is used for the intelligent greenhouse design in the LabVIEW virtual environment. This approach mimics the human thought process in system control by setting logical rules that guide the greenhouse functions. To achieve system stabilization, the direct method of Lyapunov functions is proposed. The simulation results show that an average temperature of 18.5°C and humidity of 65% is achieved for a decent environment for crop growth and development during winter, while the average temperature and humidity achieved during summer are 27.5°C and 70%, respectively. In any season, conditions beyond 30.5°C temperature and 75% humidity will require automated roof opening and water sprinkling.

Introduction
Agriculture is an important aspect of any nation's development, which usually requires appropriate seasonal irrigation and fertilization to produce a quantity of food products [1]. The seasonal control application of fertigation (fertilizer and irrigation) techniques has proven efficient for plant growth, development, and large crop yields [2]. Computers and electronics play a significant role in the development and mechanization of agricultural production through recent applications of the ubiquitous technology of the Internet of Things (IoT). This advancement, together with dynamic applications of control theory, helps improve agricultural equipment (mechanization) and processes. The recent integration of artificial intelligence (AI) and computational intelligence (CI) into agro-mechanical machines and mechatronic systems (embedded sensors and robotics) has shaped agricultural technology and its commercialization. Studies have indicated a strong link between agriculture and economic growth, agriculture being the backbone of national income and commercial development [3]. The increasing demand for food nationwide, resulting from the daily population explosion, has necessitated precision agriculture monitoring [4] to ease the farming process and handle abnormalities in the farm environment. Although farming is an essential means of increasing food production, cultivation has recently been decreasing, becoming inversely proportional to the rising population; this is partly due to the phenomenon of global warming [5]. As a result of changing climate conditions and the threat they pose to the greenhouse, the need arises for an agricultural control system to manage these conditions for high crop yields [6-8].
Changes in climate conditions increase tropical storm intensity and frequency due to rising temperatures and mutually changing climate patterns, while warming ocean temperatures and rising sea levels escalate the growth of disastrous storms, with excess heat trapped in the atmosphere. It is observed that the dissolution of heat energy and excess carbon dioxide gas causes significant damage to the ocean: oceanic acidification affects the reproduction and formation of animal shells, oceanic heat waves affect coral reefs and frustrate fish migration, and oceanic dead zones are created as a result of deoxygenation [9,10]. Consequently, there is a need to prevent and manage common emissions released into the atmosphere, which affect agricultural crop growth and degrade the environment. These emissions contribute immensely to the effect of climate change, which affects the successful cultivation of farm crops [11]. The United Nations' World Meteorological Organization (WMO) confirmed that the planet is about 1.1°C warmer and forecasts an increase of 4-5°C toward the end of the century. Other factors of greenhouse sustenance depend on environmental weather conditions, including temperature, humidity, wind, light intensity, and solar radiation. A statistical overview of the primary sources of greenhouse gas emissions is given in Figure 1; they include industry, transportation, buildings, agriculture, forestry, and electricity and heat production [12], while the gases released are methane (CH4), nitrous oxide (N2O), and carbon dioxide (CO2) from industrial processes, fossil fuels, bush burning, forestry, sewage disposal, and other land use [13]. A greenhouse is a controlled place where plants are grown under controlled conditions of ambient temperature, humidity, water vapor, light intensity, and carbon(IV) oxide [14]. The environmental conditions of a greenhouse can be varied according to the plants' needs to get the most out of the plants and achieve high efficiency. Since the environmental conditions of the greenhouse need to be adjusted for optimal growth, the amount and cost of labor increase proportionally with the size of the greenhouse and the number of plants [15,16]. A greenhouse is a structure designed with glass walls or transparent material and a glass or translucent roof, used to grow food crops and cultivate plants (such as tomatoes and tropical flowers) under controlled environmental conditions [17,18]. The efficient management and monitoring of greenhouse plant conditions require the integration of an artificial intelligence (AI) system or an automated control system (ACS) based on context-aware software design (CASD). A Greenhouse Development Rights (GDR) framework was therefore proposed in [19] to safeguard the right to development as a possible global solution to the climate change challenge. The GDR approach provides an international context (for China and the USA) and funding for the development mechanism of greenhouses as an approach to addressing global climate change; the GDR is a foundation for future evolution in industrialized and developed countries. In another approach to controlling and keeping the hothouse cool, a smart controller for grid stabilization using an optoelectronic system was developed [20].
This book chapter aims to present an intelligent greenhouse control system based on a non-linear consequent fuzzy logic controller built in LabVIEW for agricultural technology. The system monitors greenhouse parameters and acts on specified fuzzy rules to control the environmental conditions with little or no human intervention. Thus, an intelligent greenhouse control system is needed in agricultural technology to reduce labor costs, increase productivity, and reduce human intervention.

Related Works
The global warming crisis has necessitated the development of real-time monitoring and control systems for managing changes in environmental temperature conditions, since temperature change plays an important role in the soil content available to farm crops. Computer technology approaches (such as embedded systems and AI) have therefore recently been adopted for the design of automated control and monitoring systems for ambient temperature in greenhouse management. A greenhouse control system was developed using LabVIEW simulation software for data collection and analysis in [21]; the work mainly focuses on adjusting the temperature environment using a thermostat and a sensor to detect and control the hotness of the greenhouse, and the process was simulated and implemented on the LabVIEW software platform. An optimized sprinkler irrigation system for predicting the use of budding land based on soil features using a fuzzy logic decision approach is presented in [22]; the significance of adopting fuzzy logic in land evaluation is that it suits the continuous nature of soil properties and provides an accurate distribution index for predicting land use. An optimized method of cultivation in an automated greenhouse with smart environments, using an embedded system development approach, is presented in [23]. This industrial automated greenhouse model was developed for plant experimentation at the University of Alicante to control air-conditioning, soil condition, and irrigation. The optimization services integrated into this system help in the detection and prediction of agricultural production in smart environments, but the optimized smart-environment greenhouse does not consider controlling the system conditions during rainfall, summer, and winter. Other authors who contribute to the development of automated and intelligence-based greenhouse control and monitoring systems are analyzed in Table 1.

Table 1. Summary of related works (title; strength; limitation).
- Design of an Intelligent Management System for Agricultural Greenhouses based on the Internet of Things [24]. Strength: successfully developed a remote monitoring system for greenhouses using ZigBee protocols; users can remotely control and manage greenhouse parameters such as temperature and humidity. Limitation: absence of an intelligent technique; although the control method is remote, it is also manual.
- Smart greenhouse monitoring using Internet of Things [25]. Strength: a system capable of remotely monitoring greenhouse parameters via a web application. Limitation: no intelligent technique present; lack of a control mechanism.
- Research on the control system of the intelligent greenhouse of IoT based on ZigBee [26]. Strength: successfully developed a ZigBee-based system capable of remotely monitoring and controlling greenhouse parameters. Limitation: absence of an intelligent technique; control is manual.
- Internet of Things based smart greenhouse: remote monitoring and automatic control [27]. Strength: implemented a smart greenhouse using GSM/GPRS for remotely monitoring and controlling greenhouse parameters; the system automatically controls the parameters if they are out of the specified range. Limitation: absence of an intelligent technique for the control of parameters.
- Intelligent greenhouse design based on Internet of Things (IoT) [28]. Strength: developed an intelligent greenhouse using a cloud service for remotely monitoring greenhouse variables; the system automatically controls the parameters if they fall below or above specified values. Limitation: absence of an intelligent control technique.
- Smart greenhouse using IoT and cloud computing [16]. Strength: successfully developed a monitoring interface for greenhouse parameters using IoT and cloud computing. Limitation: absence of intelligence and a control technique.
- Design and implementation of a smart greenhouse [18]. Strength: successfully developed a smart greenhouse control system to monitor and control the parameters in a tomato farm; the system automatically controlled actuators to regulate greenhouse variables. Limitation: absence of an intelligent technique.
- Intelligent Monitoring Device for Agricultural Greenhouse Using IoT [29]. Strength: proposes a monitoring system for greenhouses using wireless sensor networks and IoT; the system incorporates a microcontroller that transmits information that can be monitored with an Android application. Limitation: absence of an intelligent technique; no control technique specified.

From this literature, it is observed that the main limitation lies in the degree of intelligence incorporated into the systems, including improvements to linearized fuzzy models. Seasonal management of the crop cultivation area in the greenhouse with automatic control techniques has also not been studied. Hence, this book chapter aims to fill those gaps by implementing a non-linear consequent fuzzy logic controller for the decision-making process and automatic control of the greenhouse system, following a context-aware software design ontology approach. This book chapter is organized into five sections. Section 1 discusses the general background of the study, and Section 2 presents the related works. Section 3 presents the research methodology: sub-Section 3.1 covers the mathematical modeling of the greenhouse control system, sub-Section 3.2 presents the linearized and non-linear consequent fuzzy controller design for greenhouse control, sub-Section 3.3 contains the Lyapunov function for the stabilization of the non-linear consequent fuzzy controller, and sub-Section 3.4 presents the simulation and implementation of the nonlinear consequent fuzzy controller-based greenhouse design in LabVIEW. The results and discussion are presented in Section 4: sub-Section 4.1 contains the intelligent greenhouse management nonlinear control simulation results, and sub-Section 4.2 presents the simulation results of the Lyapunov stability of the nonlinear control system. Section 5 gives the conclusion and recommendations for future work.

Methodology
Context-aware systems are software systems designed with the ability to sense (via sensors) and adapt to environmental conditions to provide the solution required by the design problem [30,31], here through a fuzzy controller. The design involves determining what the system needs to sense, which adaptations to make, and how to respond to sensor information. It requires sensing temperature and humidity and then adapting to the environmental conditions for greenhouse control and management, using a nonlinear fuzzy controller with the direct method of Lyapunov functions to achieve stabilization.
The system modeling and design need a focus value or parameter to influence the designed value, such that the system can sense the relevant elements and manipulate them in case of irregularities, keeping the elements relevant to the purpose of the design and the designer's focus. An overview of the design approach for an intelligent greenhouse control system includes practical problem identification, insight into the context, the components required for sensing and adaptation, and logical reasoning rules for information, as illustrated in Figure 2. The fuzzy logic controller architecture [32] consists of crisp inputs, fuzzification (knowledge-based or linguistic rules), a fuzzy inference engine (logic rules), and defuzzification (crisp output values). The input parameters of the fuzzy control system can be adjusted to improve the system's (fuzzy mechanism) performance using Eqs. (1) and (2), where θ(n) is defined as the set of input parameters to adjust at time t, and T_n and P_n are the parameters collected at time T_n. The non-homogeneous consequent fuzzy logic controller technique is adopted in the design to sense the greenhouse environment and adapt it toward a unique solution of the design problem. A visual graphical programming system-design platform and software development environment called Laboratory Virtual Instrument Engineering Workbench (LabVIEW) was used to realize the context design. It is efficient and commonly used in engineering for context-aware system design, data acquisition, instrument control, and industrial automation. It supports multi-threading and multiprocessing hardware that is automatically engaged by the built-in scheduler during execution of the flow structure (nodes) of a graphical block diagram; the connection wires propagate the variables, and a node executes as soon as all its input data are available. This system is used to control the temperature and humidity of the greenhouse using a non-homogeneous control scheme. The temperature and humidity input parameters are set, and the system keeps both values constant regardless of the outside temperature of the controlled system. This is achieved by combining a linearized system with a non-linear fuzzy system and adopting the Lyapunov function to achieve system stability in the model. The model helps control the opening of the greenhouse roof for rainfall and sunshine and/or the turning on of the sprinkler to reduce the temperature, as presented in the decision-making algorithm of the system, which is realized through a combined linear and non-linear consequent fuzzy logic controller technique. The sub-system irrigation and ventilation classification helps the agronomist manage the setpoints of the control input variables. This irrigation-ventilation model is an intelligent unit that senses and responds with immediate action by introducing prediction and optimization facilities supervised by the agronomist, as presented in Figure 3.
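As a concrete illustration of the Mamdani pipeline just described (fuzzification, rule inference, centroid defuzzification), the following minimal Python sketch evaluates a toy two-rule base for the roof actuator; the membership breakpoints and the rules are illustrative assumptions, not the chapter's LabVIEW rule base.

```python
# Minimal Mamdani-style sketch with triangular membership functions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def roof_opening(temp_c, humidity_pct):
    """Degree of roof opening in [0, 1] from temperature and humidity."""
    y = np.linspace(0.0, 1.0, 101)                 # output universe
    hot = tri(temp_c, 27.5, 35.0, 42.5)            # 'temperature is hot'
    humid = tri(humidity_pct, 65.0, 85.0, 105.0)   # 'air is humid'
    w1 = min(hot, humid)                           # AND = min
    # Rule 1: IF hot AND humid THEN roof wide open.
    r1 = np.minimum(w1, tri(y, 0.5, 1.0, 1.5))
    # Rule 2: otherwise keep the roof nearly closed.
    r2 = np.minimum(1.0 - w1, tri(y, -0.5, 0.0, 0.5))
    agg = np.maximum(r1, r2)                       # aggregate rule outputs
    return float((y * agg).sum() / agg.sum())      # centroid defuzzification

print(roof_opening(33.0, 80.0))  # a hot, humid reading -> roof mostly open
```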
Eq. (3) takes the form ∅ = M c Δλ, where ∅ is the heat (kcal), M is the mass of air (kg), c is the specific heat of air (0.24 kcal/kg K), and Δλ is the temperature difference (K).

Mathematical modeling of the greenhouse control system

The behavior of the greenhouse microclimate is dynamic and results from a combination of physical processes involving mass balance and energy transfer. These physical processes are used in estimating the greenhouse climate. The energy balance of the greenhouse can be calculated as expressed in Eqs. (4) and (5):

E_total = E_gain − E_loss (4)
E_loss = E_k + E_v + E_inf + E_r + E_cond (5)

where E_total is the total energy balance (W), E_gain is the amount of energy entering the greenhouse (W), E_loss is the amount of energy leaving the greenhouse (W), E_k is the conductive heat loss (W), E_v is the heat transfer due to ventilation (W), E_inf is the heat transfer due to infiltration (W), E_r is the heat transfer due to long-wave radiation (W), and E_cond is the heat loss due to condensation (W). The conductive loss encompasses all heat transfer through the greenhouse cover from the internal to the external air; the conductive heat transfer through the covering material can be expressed as in Eq. (6):

E_k = h A (λ_i − λ_o) (6)

The thermal long-wave radiation exchange from the greenhouse interior to the outside can be calculated from the non-linear Boltzmann relation in Eqs. (7) and (8):

Q_r = ε σ A (λ_i⁴ − λ_o⁴) (7)-(8)

The ventilation heat loss of the greenhouse is proportional to the rate of air exchange and to the difference between the inside and outside air temperatures [34], and can be determined as in Eq. (9):

E_v = ρ C G (λ_i − λ_o) (9)

where λ_o is the outside air temperature (K), λ_i is the inside air temperature (K), h is the conductive heat transfer coefficient (W/m² K), A is the area of the greenhouse cover (m²), Q_r is the radiation loss, ε is the combined emissivity of the cover and sky, σ is the Boltzmann constant, ρ is the air density (kg/m³), C is the specific heat of air (J/kg K), G is the airflow due to ventilation (m³/s), w is the wind speed (m/s), r_v is the percentage of ventilator opening, k_v is the slope of the curve of ventilation flux against wind speed variation, and A is the area of the ventilator (m²). Heat energy is also transferred out of the intelligent greenhouse system through infiltration, i.e., the exchange of air through cracks in the greenhouse envelope, which must therefore be considered. Since the infiltration rate is based on the volume of water vapor exchanged per unit cover area (roof and walls), and this volume is directly proportional to the wind velocity and the inside-outside temperature difference, the infiltration loss can be determined as in Eq. (10). The sources of heat gain in the greenhouse model include solar radiation, the main determinant of heat gain by the intelligent greenhouse system during crop growing, together with system heating from the environment [35]. The solar energy gain of the greenhouse can be calculated as in Eq. (11), and the heat transfer from the heating tubes to the greenhouse environment is expressed as in Eq. (12); the proportionality constant for the internal temperature increase lies in the range 0.3-0.7, and 0.3 was chosen:

H_inf = ρ C N V (λ_i − λ_o) / 3600 (10)
E_r = γ τ I A (11)
Q_hs = m C_p (λ_ωi − λ_ω0) (12)

where H_inf is the infiltration heat loss (W), λ_i is the temperature inside the greenhouse (K), λ_o is the outside temperature (K) of the greenhouse, V is the greenhouse volume (m³), and N is the number of air changes per hour (h⁻¹); E_r is the solar energy radiated into the greenhouse environment (W), I is the total external solar energy falling on a horizontal surface of the greenhouse (W/m²), A is the area of the greenhouse floor (m²), τ is the radiation light transmission of the greenhouse cover, and γ is the constant of proportionality of solar radiation radiated into the greenhouse; Q_hs is the heat gain from the heating system (W), m is the heating water flow rate (kg/s), λ_ωi is the heating water inlet temperature (°C), λ_ω0 is the heating water outlet temperature (°C), and C_p is the specific heat capacity of water (J/kg K).
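As a rough numerical illustration of this balance, the following Python sketch evaluates the loss terms of Eqs. (5)-(9) under the definitions above; all numeric inputs are illustrative placeholders, not measured greenhouse values.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W/m^2 K^4)

def energy_balance(lam_i, lam_o, h, A_cover, eps, rho, C, G,
                   E_inf, E_cond, E_gain):
    """Evaluate the loss terms of Eqs. (5)-(9) and the balance of Eq. (4).
    Temperatures lam_i, lam_o are in kelvin; all energies in watts."""
    E_k = h * A_cover * (lam_i - lam_o)                  # conduction, Eq. (6)
    E_r = eps * SIGMA * A_cover * (lam_i**4 - lam_o**4)  # long-wave radiation, Eqs. (7)-(8)
    E_v = rho * C * G * (lam_i - lam_o)                  # ventilation, Eq. (9)
    E_loss = E_k + E_v + E_inf + E_r + E_cond            # Eq. (5)
    return E_gain - E_loss                               # Eq. (4)

# Illustrative values only: 25 degC inside, 15 degC outside.
print(energy_balance(lam_i=298.0, lam_o=288.0, h=6.0, A_cover=120.0,
                     eps=0.9, rho=1.2, C=1006.0, G=0.5,
                     E_inf=200.0, E_cond=50.0, E_gain=9000.0))
```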
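To make the closed-loop form of Eq. (17) below concrete, here is a minimal sketch using the third-party python-control package; the plant and controller coefficients are illustrative placeholders, not the identified greenhouse dynamics.

```python
import control  # python-control package (assumed available)

# Illustrative first-order plant beta(s) and PI-like controller alpha(s);
# gamma(s) = 1 models an ideal sensor.
beta = control.tf([1.0], [300.0, 1.0])       # plant: 1 / (300 s + 1)
alpha = control.tf([20.0, 0.1], [1.0, 0.0])  # controller: (20 s + 0.1) / s
gamma = control.tf([1.0], [1.0])             # unit-gain sensor

open_loop = control.series(alpha, beta)      # forward gain alpha * beta
# Eq. (17): closed loop = alpha*beta / (1 + alpha*beta*gamma)
closed = control.feedback(open_loop, gamma)
print(closed)
print(control.step_info(closed))             # settling time, overshoot, ...
```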
A linearized and non-linear consequent fuzzy controller design for greenhouse management

A closed-loop (feedback) controller transfer function is adopted, since the output of the intelligent control system φ(t) is fed back into the system through a sensory measurement device (sensor) γ. The output is compared with the reference value τ(t), and the controller α takes the error ε (the difference between the reference or set value and the output) to adjust the input μ fed back to the system under control β. For a linear, time-invariant implementation of the controller, the elements of the transfer functions α(s), β(s), and γ(s) do not depend on time, where α is the controller, β is the system under control (plant), and γ denotes the sensor measurement [36-38]. We can analyze the system by applying the Laplace transform to the variables, as expressed in Eqs. (13)-(16). Solving for φ(s) in terms of τ(s) yields the closed-loop (feedback) transfer function of the greenhouse control system, ℵ(s), in Eq. (17):

ℵ(s) = φ(s)/τ(s) = α(s) β(s) / (1 + α(s) β(s) γ(s)) (17)

where the numerator is the open-loop (forward) gain from the input τ to the output φ, and the term α(s) β(s) γ(s) in the denominator is the gain of the feedback loop that goes around the system, called the loop gain. If |β(s) α(s)| ≫ 1 for each value of s, and if |γ(s)| ≈ 1, then φ(s) is approximately equal to τ(s) and the system output closely tracks the reference input.

The flowchart technique for the linearized and non-linear fuzzy model for the optimization function of the greenhouse control and management model is illustrated in Figure 4. This mechanism operates as a reference model for the non-linear system and is connected in parallel, in such a way that the linear system passes across the non-linear one for better stability. The state-space model for the non-linear fuzzy controller is given in Eq. (18); the number of fuzzy rules grows exponentially with the number of non-linearities:

ẋ(t) = Σ_{k=1}^{s} δ_k(ω) [α_k x(t) + β_k υ(t)] (18)

where x(t) ∈ R^{n_x} is the state vector, υ(t) ∈ R^{n_υ} is the input vector, s is the number of rules, ω(t) is the available premise vector, δ_k(ω) is the membership function, α_k and β_k are the linear models, and the membership functions form a convex sum, Σ_i δ_i(ω) = 1. The time-delayed version of the state-space fuzzy model is given in Eq. (19), where τ(t) is the time-dependent delay and δ_m(ω(t − τ(t))) denotes the delayed states that depend on the fuzzy membership functions. The notation τ(t) ≔ τ can be expressed as in Eq. (20), and the closed-loop fuzzy model for the non-linear time-dependent case is given in Eqs. (21) and (22). This formulation reduces the number of fuzzy rules and serves the purpose of handling both measured states and unmeasured non-linear states [34,39,40]. Here x(t − τ) is the time-delayed state vector, υ(t − τ) is the time-delayed input vector, G is the system matrix, x(t) is a linear combination of each input to the model, and ψ(ξ x(t)) is a vector function. The boundary condition for the existence of the vector function in the model can be expressed as in Eq. (23).

Lyapunov function for stabilization of the non-linear consequent fuzzy controller-based greenhouse

For the stabilization of the dynamical system, a Lyapunov non-linear function (LNF) is adopted so that the system model can be operated as a linear one within a limited range of the function in every region. This LNF approach lets the model provide auxiliary non-linear feedback that can be treated as linear for control design purposes. Lyapunov's direct method gives a stability criterion for a linear system as follows: suppose u = 0 and there exist two matrices P > 0 and Q > 0 satisfying the Lyapunov equation in Eq. (24):

A^T P + P A = −Q (24)

Then the linear system is asymptotically stable at the origin, since for any given symmetric Q there exists a unique solution P that can be used for the stability analysis. The choice of Q can be made arbitrarily and it is usually set to Q = I, the identity matrix; P is then checked by requiring that all successive principal minors of P be positive, according to Sylvester's theorem, as expressed in Eq. (25):

P_11 > 0, …, Δ[P] > 0 (25)

The parameters for the fuzzy model-dependent case are given in Eq. (26). The normalized fuzzy weighting functions associated with the ith subsystem are calculated through the degrees of the fuzzy membership functions θ_1(σ) and the premise variables on the closed interval [0, 1], and must satisfy the properties in Eq. (27). The state-space matrix functions can be replaced in the derivation by newly introduced operators, as expressed in Eqs. (28) and (29), where α and P can be replaced with ω, the subscript μ refers to all signals (x, d, ε), and η_μ is the dimension of signal μ. The notation in Eq. (29) can be rewritten as in Eq. (30) using the properties of the fuzzy weighting membership functions; the Lyapunov fuzzy model function for system stability then follows. From the expressions given in Eqs. (27)-(29), the symbolization in Eq. (34) can be obtained, where the fundamental matrix is represented as X, Y is the output factor, and Z ≔ [I_{nσ} A^T(θ)]^T. The Lyapunov stability condition in Eq. (34) is then comparable with (YZ)^T X (YZ) < 0, and the matrix YZ can be reformed as given in Eq. (35). The variable X, which depends on the derivative of the fuzzy weighting functions, can be solved using the conservatism of LMI-based stabilization conditions, as in Eq. (36). The stability of the Lyapunov fuzzy weighting function is guaranteed by the constraint expressed in Eq. (37). For the fuzzy system controller to be asymptotically stabilized, the Lyapunov fuzzy system function is given as V = ϰ^T Q^{−1}(θ) ϰ and the system controller can be expressed as u = U(θ) Q^{−1}(θ) x; the condition for stabilization with d = 0 can finally be described as in Eq. (38), following a derivation similar to the LMI-based stabilization conditions in [41,42].
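A minimal numerical check of this criterion can be written with SciPy: solve the Lyapunov equation of Eq. (24) for an illustrative system matrix and apply Sylvester's criterion of Eq. (25). The matrix below is a placeholder, not one of the identified fuzzy sub-models.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative linear (sub)model x' = A x.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.eye(2)  # arbitrary Q > 0, set to the identity as in the text

# Solve A^T P + P A = -Q (Eq. (24)); SciPy solves M X + X M^H = rhs.
P = solve_continuous_lyapunov(A.T, -Q)

# Sylvester's criterion (Eq. (25)): all leading principal minors of P positive.
minors = [np.linalg.det(P[:k, :k]) for k in range(1, P.shape[0] + 1)]
print("leading minors:", minors, "-> asymptotically stable:",
      all(m > 0 for m in minors))

# Cross-check: every eigenvalue of A lies in the open left-half plane.
print("eigenvalues:", np.linalg.eigvals(A))
```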
Implementation of the non-linear consequent fuzzy controller-based greenhouse design in LabVIEW

The Fuzzy Inference System (FIS) consists of two inputs (temperature and humidity) and two outputs (electric roof and water spills). A Mamdani fuzzy logic technique was implemented in this study because of its wide acceptance and suitability for this application. Triangular membership functions were implemented for all inputs and outputs. The input 'Temperature' had the membership function values 'cold', 'normal', and 'warm', while the input 'Humidity' had the membership function values 'dry', 'normal', and 'wet'. As for the outputs, the membership function for 'Electric Roof' signified the level of opening of the roof, with the values 'closed', 'semi-open', and 'open'. The output 'Water Spills' represented the amount of water to be spilled by the sprinkler, with the membership function values 'low', 'moderate', and 'more'. In addition, the greenhouse control system was designed to consider each of the four major seasons (spring, summer, fall, and winter). Because of this weather variation, each season has different membership function values for the weather conditions. The block diagram is illustrated in Figure 5.

Simulation results of intelligent greenhouse management based on nonlinear control

The intelligent greenhouse control system was designed and simulated in LabVIEW using a non-linear consequent for the controller. Two main interface environments were used to achieve the design of the system: the front panel and the block diagram. The LabVIEW environment also provides a tool for fuzzy logic design, and the fuzzy logic designer has three interfaces, namely Variables, Rules, and Test System. These interfaces respectively let the user specify the inputs and outputs of the system, provide the IF-THEN rules, and test the system to analyze its performance. In LabVIEW, an algorithm was implemented for the intelligent control of the greenhouse, using a block diagram for the simulation of the nonlinear-based intelligent greenhouse control system. The interface has a knob that can be used to select a particular season. The temperature and humidity can also be altered to view various results. Selecting different values for temperature and humidity results in different outputs for the roof opening and the water spills through the system actuators. These outputs are determined by the fuzzy logic controller. Depending on the season selected, the outputs of the FIS will differ even for the same inputs, mainly because each season uses a different membership function in its decision-making. Considering these scenarios, experiments were conducted for each of the four seasons with the same input values, in order to analyze the varying results across seasons and to examine the effectiveness of the control system. During the summer season, the dynamic sensors deployed in the environment are temperature and moisture sensors, monitoring the temperature and humidity of the greenhouse at a constant temperature input of 25°C and a relative humidity of 85%. The membership function for temperature has three stages: cold, normal, and warm. It is observed that the temperature normalizes between 22.5 and 32.5 degrees Celsius to reach the constant temperature input. As a result, the roof opens to 50%, and water is spilled at a relative humidity of 40.1%.
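Before turning to the season-by-season results, the following Python sketch reproduces the spirit of the Mamdani scheme just described: triangular membership functions, min-max inference, and centroid defuzzification. The membership breakpoints are illustrative; the actual design used LabVIEW's fuzzy logic toolkit.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function over x with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

# Output universe: roof opening in percent (illustrative summer ranges).
roof = np.linspace(0, 100, 201)
roof_mf = {'closed': trimf(roof, 0, 0, 50),
           'semi-open': trimf(roof, 0, 50, 100),
           'open': trimf(roof, 50, 100, 100)}

def temp_memberships(t):
    """Degrees of 'cold', 'normal', 'warm' for a crisp temperature (degC)."""
    return {'cold': trimf(np.array([t]), 0, 0, 22.5)[0],
            'normal': trimf(np.array([t]), 22.5, 27.5, 32.5)[0],
            'warm': trimf(np.array([t]), 27.5, 45, 45)[0]}

def roof_opening(t):
    """Mamdani inference with three illustrative rules, centroid defuzzification."""
    mu = temp_memberships(t)
    # IF cold THEN closed; IF normal THEN semi-open; IF warm THEN open.
    agg = np.maximum.reduce([np.minimum(mu['cold'], roof_mf['closed']),
                             np.minimum(mu['normal'], roof_mf['semi-open']),
                             np.minimum(mu['warm'], roof_mf['open'])])
    return (roof * agg).sum() / (agg.sum() + 1e-9)  # centroid

print(roof_opening(25.0))  # roughly a mid-range opening for a normal temperature
```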
The simulation of the fuzzy controller-based intelligent greenhouse during the summer season is presented in Figure 6, together with its fuzzy membership functions. The surface view of the dynamic system testing is presented in Figure 7. In this work, a knob is designed to mimic the outside environment based on the four possible weather conditions in a year (summer, spring, rainfall, winter). The constant temperature parameter is set to 25°C and the humidity to 65%. The membership function for temperature has three stages: cold, normal, and warm. During the summer, the temperature normalizes between 22.5 and 32.5 degrees Celsius to reach a constant temperature value. From basic physics, an increase in temperature reduces humidity, which in turn causes the sprinkler to turn ON and the roof to open. All these calculations are handled logically by the fuzzy logic controller in the software context, based on the input and possible output variables. Tables 3-6 present the results obtained for the summer, spring, winter, and rainfall seasons respectively (each table lists the series number, temperature (°C), humidity (%), water sprinkler flow (%), and electric roof opening (%)), and the graphical representation of the results is given in Figures 8-11.

The results for each season depend on the different environmental and seasonal behavior, which is processed by the non-linear consequent fuzzy logic controller. This is achieved by using different membership functions for each season, since each season has its own weather conditions and temperature requirements. This season-specific implementation ensures effective performance of the controller during the different seasons. Furthermore, it can be observed from the results that, irrespective of the season, higher temperatures lead to wide roof openings and high water-spill levels. This is done to reduce the temperature to the level specified for the farming environment. Likewise, low temperatures result in no roof opening or water spillage, since there is no need to lower the temperature further. During the summer season, the required average temperature and humidity are 27.5°C and 65%, respectively. In any season, temperature and humidity beyond 30.5°C and 75% trigger automatic roof opening and water spilling.

Simulation results of the Lyapunov stability of the nonlinear control system

The nonlinear fuzzy controller system for managing the intelligent greenhouse was simulated in the MATLAB environment to verify the stabilization of the linearized system, i.e., that it is asymptotically stable according to the Lyapunov function. From the state-space fuzzy model given in Eqs. (18), (25) and (26), the characteristic equation is derived from λI − A, as described in [42].
If f(ℓ, ℷ) = ℓI − A, the system is universally stable when the eigenvalues are positioned in the left-half plane. The eigenvalues ℷ also follow a trend when f(ℓ, ℷ) is plotted in multiple dimensions, as illustrated in Figure 12. These eigenvalues help achieve steady behavior with better dynamic performance, good compensation quality, and fast system responses as they move closer to the trend of the red dotted lines. The system controller undergoes the stabilization process when the eigenvalue is ℷ = −1 × 10² over a period of 0-0.50 seconds, using Fast Fourier Transform (FFT) analysis. Therefore, if the polynomial coefficients are all positive, the equilibrium point is stable when ℊ > 1; otherwise it is unstable, when at least one eigenvalue satisfies ℊ < 1. The simulation results for the control pathways of the nonlinear system using the Lyapunov function with the stability conditions ℊ = 2, ℊ = 3, ℊ = 4, and ℊ = 5 are shown in Figure 13.

Conclusions

The greenhouse control system was implemented using a fuzzy logic controller design with a non-linear consequent as the intelligence in the decision-making process of the system. The membership functions comprise two inputs (temperature and humidity) and two outputs (roof opening and water spills). The intelligent greenhouse system was designed to cater for each of the four major seasons (summer, spring, winter, and rainfall), which was achieved by implementing different membership functions for each season. The development of the intelligent greenhouse control system was simulated and implemented in LabVIEW. These technologies, FLC and virtual instrumentation in LabVIEW, are widely adopted to let computing and communication migrate out of the gray box into ordinary objects (standalone systems). Building intelligent systems that model human activities and interactions is, moreover, significant for the development of agricultural technology. The results obtained show varying performance for each season, catering for the different weather conditions. Future research will consider incorporating a heating mechanism to raise the temperature under varying conditions, as well as hybrid intelligent techniques using optimization for better system performance.
Using SWE Standards for Ubiquitous Environmental Sensing: A Performance Analysis

Although smartphone applications represent the most typical data consumer tool from the citizen perspective in environmental applications, they can also be used for in-situ data collection and production in varied scenarios, such as geological sciences and biodiversity. The use of standard protocols, such as SWE, to exchange information between smartphones and sensor infrastructures brings benefits such as interoperability and scalability, but their reliance on XML is a potential problem when large volumes of data are transferred, due to limited bandwidth and processing capabilities on mobile phones. In this article we present a performance analysis of the use of SWE standards in smartphone applications to consume and produce environmental sensor data, analysing to what extent the performance problems related to XML can be alleviated by using alternative uncompressed and compressed formats.

Introduction

Changes in environmental conditions affect the environment itself and put pressure on human society [1]. Tools for modelling, monitoring, and assessment have increasingly become crucial instruments for continually observing the environment and the Earth. In this context, the GEOSS initiative (Global Earth Observation System of Systems) seeks to connect producers of environmental and sensor data, and decision-support tools, with end users, with the aim of exploiting the potential of Earth observations and sensor data to tackle global issues [2]. GEOSS is regarded as a global "system of systems" that includes sensor networks, data communication protocols, spatial-environmental data infrastructures and other essential components and technologies for monitoring and observing the Earth. The combination of sensor networks (data providers) and smartphone applications (data consumers), along with data communication protocols, forms the basic building blocks of most ubiquitous environmental sensing applications. Nowadays, smartphone applications represent a typical data consumer tool from the citizen perspective, largely motivated by the rapid increase in the hardware capabilities of these devices, which has permitted an exponential growth in the number of applications targeted at them. As a result, smartphones not only play a traditional role as consumers of sensor-related information but may also act as producers of such information. As they are increasingly equipped with a variety of data-capturing sensors (ambient light sensors, accelerometer, digital compass, gyroscope, GPS, proximity sensor, microphone and cameras), smartphone applications are becoming attractive in many scenarios [3] such as health [4,5], traffic [6,7], and environmental monitoring [8,9]. In addition to the sensors included on the phones, external devices can be attached to them to track dynamic information about different phenomena [10]. In the consumer role, environmental sensor data can be retrieved from remote sensors through such applications. In the producer role, people can make environmental observations with these devices and share them with other users. For example, smartphones are being used for in-situ data acquisition in varied scenarios such as geological sciences [11,12], epidemiology [13], biodiversity [14], and noise pollution monitoring [8,9]. In these examples smartphones play either a consumer or a producer role as typical clients in a client-server architecture. Nevertheless, they may also act as intermediaries or client aggregators.
For instance, in low-connectivity situations, a mobile application may consume and process data from nearby in-situ sensors and upload aggregated datasets to the corresponding servers when network links are restored [15-17]. In this particular case, smartphones may potentially collect large quantities of data to be further uploaded to remote servers, which may be a serious impediment in terms of performance. Providers and consumers exchange sensor data through communication protocols. The Internet and Wireless Sensor Networks (WSN) are examples of active communication channels that connect sensor networks and consumer applications. Regardless of the particular channel chosen, communication is based on internationally adopted standard protocols [18]. The use of standard protocols to exchange information between smartphones and sensor infrastructures (servers, services, etc.) brings several benefits to both developers and users, such as interoperability and scalability. In this context, the Open Geospatial Consortium (OGC) (http://www.opengeospatial.org) has developed a set of standards to deal with sensor-related data. These standards are part of the Sensor Web Enablement (SWE) initiative, which aims to provide data communication protocols via XML-based encodings and service interfaces for discovering, accessing and exchanging any kind of sensor data [19]. The use of XML increases network traffic as a consequence of its well-known verbosity [20-25], which may seriously affect performance on mobile devices.

Background

The Sensor Web Enablement (SWE) initiative is a framework that specifies interfaces and metadata encodings to enable real-time integration of heterogeneous sensor networks. It provides services and encodings to enable the creation of web-accessible sensor assets [26]. SWE is an attempt to define the foundations for the Sensor Web vision, a worldwide system where sensor networks of any kind can be connected [27-29]. It includes specifications for service interfaces such as the Sensor Observation Service (SOS) [30] and the Sensor Planning Service (SPS) [31], as well as encodings such as Observations and Measurements (O&M) [32] and the Sensor Model Language (SensorML) [33]. In this article we focus particularly on SOS, SensorML and O&M, as they are the main specifications involved in the exchange of most sensor data between clients and servers. We consider in our experiments versions 1.0.0 of SOS and O&M and version 1.0.1 of SensorML, because although newer versions of SOS and O&M have recently been approved (as of April 2012), the older ones are still widely used. SOS-based services provide access to observations from a range of sensor systems, including remote, in-situ, fixed and mobile sensors, in a standard way. The information exchanged between clients and servers will, as a general rule, follow the O&M specification for observations and the SensorML specification for descriptions of sensors or systems of sensors (both referred to by the term procedure). Observations in SOS are grouped into observation offerings. An observation offering is a set of observations related by some criteria (e.g., the procedure's location). SOS services expose a set of public operations, some of which are mandatory (core profile) while others are optional (e.g., the transactional profile). The core profile is composed of three operations: GetCapabilities, DescribeSensor, and GetObservation. GetCapabilities allows clients to access metadata about the capabilities provided by the server.
DescribeSensor allows clients to retrieve descriptions of procedures. GetObservation is used to retrieve observational data from the server. This data can be filtered using several parameters, such as procedures, observed phenomena, location, and time intervals and instants. The transactional profile offers support for data producers to upload observations into SOS servers. Using the RegisterSensor and InsertObservation operations, data producers can register their sensor systems and insert observations into the server, respectively. The service interfaces and data models in SWE fit nicely into the creation of information systems according to service-oriented architectures (SOA). The main SOA design principles, such as loose coupling between service implementations and interfaces, independence, reusability and composability, encourage the use of SWE specifications and data models in such information systems [14,34]. Therefore, specifications such as SOS services and O&M data models are becoming common artifacts in the design and creation of SOA-based applications addressing the integration and management of observations and sensor networks. However, in our opinion, their application to the mobile realm is limited because of the large amount of exchanged information, which often exceeds the processing capabilities of mobile phones. The need to reduce data communication is thus a crucial aspect, which inevitably relates to the data formats used in communication protocols. XML (eXtensible Markup Language) is likely one of the most widely used formats in data communication on the Web. It has been adopted as the most common form of encoding information exchanged by Web services [35,36]. Kay attributes this success to two reasons [35]. The first is that the XML specification is accessible to everyone and is reasonably simple to read and understand. The second is that several tools for processing XML are readily available. Another reason is that, because of its agnosticism, the language can be used in almost any domain, and being text-based can be very helpful for debugging purposes. Despite its popularity, XML is considered a highly verbose language, which unnecessarily increases the amount of network traffic and the storage space occupied by exchanged data represented in this format [20-25]. In this regard, several attempts have been made to overcome this problem, such as the use of alternative encoding formats [21-23,25] or the use of compression techniques [37-39]. In this context, compactness and processing efficiency are competing requirements: the smaller the messages transmitted, the fewer resources are spent on data transmission, but this may require more processing power if data must be compressed and decompressed. The choice of whether to use compression must be carefully considered, because it has been proven that wireless communication can be much more expensive than computation in terms of energy consumption [40]. An alternative that is gaining a lot of attention is the Efficient XML Interchange (EXI) format [41]. EXI is a W3C recommendation that is intended to provide "... a very compact representation for the Extensible Markup Language (XML) Information Set that is intended to simultaneously optimize performance and the utilization of computational resources". EXI encodes XML data using a binary format to reduce its verbosity.
As stated by [25], a binary format typically has more favourable size and memory properties, hence it is the preferred option for in-memory representation, storage, and transmission. EXI does not offer direct interoperability with XML, but it can be examined, stored, or transmitted in XML format when necessary. It has a schema-informed mode that allows users to make use of available schemas to improve compactness and performance. The format also includes the option to apply additional compression through the DEFLATE algorithm (RFC 1951) [42]. EXI has been reported to have very good compression rates and performance both in the research literature [37,38,43] and by the W3C itself [44]. Another option is JSON (JavaScript Object Notation) (http://www.json.org), which is one of the most popular alternatives to XML because of the ease with which it can be read and serialized to JavaScript variables, and because it overcomes some security limitations present in some browsers [45,46]. Like XML, JSON is a text-based format, which can be very useful for debugging purposes. The attention gained by this language as an alternative to XML has been reflected in the research literature [21-23]. The arguments in favour of JSON are that it is simpler than XML, it has less overhead (namespace information, tags, etc.), and the basic information items of any XML document (element and attribute names, character information items, etc.) can be easily mapped to a JSON document.

Related Work

With the basic concepts and definitions introduced in Section 2, in this section we overview related work from different fields that is relevant to our experiments. First, we present strategies to reduce data communication in ubiquitous sensing, followed by a discussion of previous work dealing with XML performance and the use of alternative formats.

Strategies to Reduce Data Communication in Ubiquitous Sensing

In the context of ubiquitous sensing, the transmission of huge volumes of sensor data over the Internet consumes network resources excessively. Efficiency in data communication represents a big issue, which has been addressed from different perspectives. The use of caching techniques is a widely adopted strategy to mitigate the amount of exchanged data, as proven by the Web itself. For example, a model to establish the conditions under which caching at the client and proxy side reduces the average response time of a requested document in a Web-based traffic application was presented by [47]. Agent-based systems have also been explored by several authors as a strategy to reduce data communication in ubiquitous sensing [48,49]. These authors proposed similar approaches based on agent-based systems to eliminate needless data communication by implementing data processing capabilities nearby or within sensor nodes. Such agents were installed in a sensor node to handle sensor data, expand data processing capabilities over the sensor nodes, and reduce communication traffic. In the case of [49], agent systems were implemented as web services accessed via request-response operations encoded in XML format. The above works [47-49] place emphasis on three crucial aspects: (i) the need to reduce communication data; (ii) sensor nodes and devices are mainly exposed as web services; and (iii) exchanged data is encoded in XML-related formats.
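To make the recurring verbosity argument concrete, the following Python sketch maps a simplified O&M-like XML fragment to JSON and applies DEFLATE (RFC 1951, the algorithm behind EXI's compression mode) to both. The fragment and element names are illustrative, not taken from the actual datasets.

```python
import json
import zlib

# Illustrative O&M-like fragment (simplified for this example).
xml_doc = ('<om:Observation xmlns:om="http://www.opengis.net/om/1.0">'
           '<om:observedProperty>temperature</om:observedProperty>'
           '<om:result uom="Cel">21.4</om:result>'
           '</om:Observation>')

# A JSON mapping of the same information items, with namespace overhead removed.
json_doc = json.dumps({"Observation": {
    "observedProperty": "temperature",
    "result": {"uom": "Cel", "value": 21.4}}})

for label, doc in (("XML", xml_doc), ("JSON", json_doc)):
    raw = doc.encode("utf-8")
    deflated = zlib.compress(raw)  # DEFLATE, as used by EXI's compression mode
    print(f"{label}: {len(raw)} bytes, deflated: {len(deflated)} bytes")
```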
Furthermore, experiments such as ours, exploring efficiency and performance in data communication through suitable encoding formats, are relevant for sensor network systems, services and applications regardless of their application domain.

Performance Analysis for XML Processing and Alternative Formats

Numerous articles have addressed the topic of XML processing performance. For example, the impact of XML processing in the context of web servers and databases was analysed in [50,51]. These works stated that XML processing is a performance bottleneck in several kinds of applications. An extensive review of XML processing in the context of SOAP-based web services was conducted by [20]. The review analysed different techniques to improve performance in XML serialization (e.g., [52,53] cited by [20]), parsing (e.g., [54-56] cited by [20]), and deserialization (e.g., [57,58] cited by [20]). The use of compression in the context of SOAP was also included in that review, highlighting the study presented in [59], which concluded that traditional compression methods for XML-based documents might not always be appropriate for SOAP messages, because these are of relatively small size (a few kilobytes) in comparison with XML-based documents. In the last few years, several performance studies have been conducted to evaluate the use of alternatives to XML for data representation on mobile or embedded devices. These studies compared XML with JSON [22,23] and with binary alternatives (e.g., Fast Infoset (http://fi.java.net) [21], BXML (Binary Extensible Markup Language) [38], EXI [43]). Some of them also included compression tools, such as gzip (http://www.gzip.org) [39] and bzip2 (http://www.bzip.org) [37]. All the studies including XML and JSON agreed that the latter offers an important reduction in message size and better performance [21-23]. Regarding binary alternatives, EXI showed better compression rates than Fast Infoset and bzip2 [37], gzip [37-39], and BXML [38]. Unfortunately, the studies including binary formats and compression tools did not measure processing performance. The W3C performed an extensive performance evaluation of an EXI implementation, AgileDelta's Efficient XML 4.0 (http://www.agiledelta.com/product efx.html) [44]. The experiments compared, on the one hand, EXI with gzipped XML (XML + gzip) and ASN.1 PER (Abstract Syntax Notation One, Packed Encoding Rules [60]) regarding compactness. On the other hand, a second set of experiments compared EXI without compression to XML, and EXI with compression to gzipped XML, in terms of encoding and decoding speed. The results showed that EXI files were more compact than gzipped XML regardless of document size, document structure or the availability of schema information. Similarly, EXI produced much more compact data than ASN.1 PER. EXI without compression was on average 14.5 times faster than XML in decoding speed. When compression was used, EXI was 9.2 times faster than gzipped XML. For average encoding speed, EXI was 6 times faster than XML; with compression, it was 5.4 times faster than gzipped XML. Studies on performance for SWE standards are still scarce. In particular, for SOS services, in a previous work [61] we conducted a performance analysis that processed sensor data on an Android mobile phone and in a desktop PC environment. The results only included XML data processed locally and showed that processing times on mobile devices were around 30 to 90 times slower than those for desktop environments.
This article explores an area that has not been covered by previous work: performance analysis on real mobile phones of large sensor datasets encoded using SWE standards and considering alternative encoding formats. Most of the work described above used small XML messages ranging from just a few bytes to about 200 KB, except for the W3C's EXI evaluation, which was performed in desktop settings. Moreover, just a few of them explored EXI beyond the reduction in size that can be achieved, and only two were targeted at real mobile phones [21,22].

Data and Methodology

In this section we describe the experiments that measure the performance cost of exchanging sensor-related information. Compared with the related work in Section 3, we go one step further and build our experiments on a wide range of datasets in terms of size and encoding formats. We explore the use of alternative formats such as JSON and EXI to experimentally analyse the reduction in size of exchanged data and how the use of these alternative formats affects performance.

Selected Datasets

In our analysis we build a set of datasets with data captured from SOS servers available on the Internet [62]. The data files contain varied information captured by sensors and related to meteorological variables such as temperature, pressure, seawater salinity, and rainfall. These files have been selected to cover a large range of message sizes, since we are especially interested in measuring the performance penalty of processing large datasets on mobile phones. The datasets are described in Table 1 and are grouped according to the type of data they contain (see the description column). The first three datasets (CAPS, SD and OBS) are consumed by mobile applications, and they illustrate the situation where smartphone applications download data from a server. The last two (RS and IO) reflect the opposite case, where sensor-related data is uploaded to a server. These two groups cover the most frequent situations in ubiquitous sensing: data consumption and provision for mobile applications.

Table 1. Description of selected datasets.
• Capabilities (CAPS) — role: consumption. These files contain metadata of the server such as name, keywords, information about the provider, and the list of available observation offerings.
• Sensor Descriptions (SD) — role: consumption. SD files contain descriptions of procedures as defined by the SensorML specification, with information such as location, measured phenomena, etc.
• Observations (OBS) — role: consumption. These files, in O&M format, contain measurements of a specific phenomenon captured by a procedure.
• Register Sensor (RS) — role: provision. RS files contain a sensor description that must be added to an SOS server.
• Insert Observations (IO) — role: provision. These files contain observation values that must be inserted into an SOS server.

For each XML file in each dataset, we create three other files as follows:
• A JSON file: each XML file is converted to JSON using JSONLib (http://json-lib.sourceforge.net). Namespace-related information is removed manually from the converted files.
• An EXI file: XML files are converted to EXI with default parameters (bit-packed alignment, no compression, no schema information) using EXIficient (http://exificient.sourceforge.net).
• A compressed EXI file (EXI-C): using EXIficient again, XML files are converted to EXI but keeping its compression capabilities on.
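A sketch of how the size reduction of each variant relative to the original XML can then be computed is shown below; the file names are hypothetical.

```python
import os

def reduction(xml_path, alt_path):
    """Percentage size reduction of an alternative encoding relative to XML."""
    xml_size = os.path.getsize(xml_path)
    alt_size = os.path.getsize(alt_path)
    return 100.0 * (1.0 - alt_size / xml_size)

# Hypothetical file names, one per encoding variant of the same capabilities file.
for alt in ("caps_01.json", "caps_01.exi", "caps_01.exi_c"):
    print(alt, f"{reduction('caps_01.xml', alt):.1f}% smaller than XML")
```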
As a result, each dataset is encoded using four different formats, which allows us to measure data size variations and to observe how performance is affected by each format. We do not consider EXI's schema-informed mode because the library used for Android does not support it.

Methods

Apart from defining different datasets and encoding formats, our experiments are extended with two more variables to be as realistic as possible. We perform our experiments with two different mobile terminals and over two types of communication links (Wi-Fi, 3G). We describe next the metrics and the experimental environment used for the performance study.

Metrics

The main metric to measure performance is execution speed, which includes local parsing time on the mobile devices and time spent on data communication with the server. It does not take into account processing tasks at the server side, such as handling requests and generating responses, as we are more interested in the performance from the smartphone perspective. We measure execution speed over three scenarios: parsing time from memory (Local scenario), parsing time from an HTTP source using a Wi-Fi connection (Wi-Fi scenario), and parsing time from an HTTP source using a 3G connection (3G scenario). In the first scenario, we measure parsing time without considering disk transfer or network delays. To accomplish this, information is first loaded into memory before being processed. These values are compared later with measurements including communication delays (Wi-Fi and 3G scenarios), which allows us to identify the influence of parsing and communication on the overall execution times. According to [63], despite advances in network performance, the time required to access shared resources on a local network remains about a thousand times greater than that required to access resources resident in local memory. The time will, of course, be even larger if resources are accessed over the Internet and using wireless technologies. Calculating the execution speed of Java programs on a modern system, such as Android-based smartphones, is a difficult task, mainly because of the existence of multiple factors that may alter the final result, such as cache memories, multi-threading, background processes, Just-In-Time compilation (JIT), and garbage collection. These factors mean that different executions of the same program may lead to very different results. For this reason, we used the methodology presented by [64], as it provides a statistically rigorous approach to mitigating the effect of all these factors on execution time. The methodology allows the calculation of steady-state performance, which is the normal behaviour of the application once all its classes are loaded in memory and, where available, the JIT compiler has also done its job, so that the application is supposed to run without major interference from other factors [64]. It attempts to cope with non-deterministic errors that may affect measurements by executing iterations of the instrumented code until a number of consecutive measurements (k) showing minimal variation is found. Variability is calculated using the coefficient of variation (COV) (standard deviation divided by the mean), which is a normalised measure of dispersion of a probability distribution. The mean and a confidence interval for the mean are calculated over the k values. We apply this method to the Local and Wi-Fi scenarios with k = 10 and COV = 0.05.
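For illustration, the following Python transcription shows the essence of this measurement loop; the actual instrumentation ran in Java on Android, and parse_caps_file is a hypothetical stand-in for the code under test.

```python
import statistics
import time
from scipy.stats import t as student_t

def steady_state(measure, k=10, cov_threshold=0.05, max_iter=200):
    """Run `measure` repeatedly until the last k timings have COV < threshold,
    then return their mean and a 95% confidence interval (Student's t)."""
    times = []
    for _ in range(max_iter):
        start = time.perf_counter()
        measure()
        times.append(time.perf_counter() - start)
        window = times[-k:]
        if len(window) == k:
            mean = statistics.mean(window)
            cov = statistics.stdev(window) / mean  # coefficient of variation
            if cov < cov_threshold:
                half = (student_t.ppf(0.975, k - 1)
                        * statistics.stdev(window) / k ** 0.5)
                return mean, (mean - half, mean + half)
    raise RuntimeError("steady state not reached")

# Usage sketch: steady_state(lambda: parse_caps_file(buffer))
# where parse_caps_file is the (hypothetical) parser under test.
```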
The value of COV was determined experimentally, as we were unable to obtain measured values with lower variability. A 95% confidence interval is calculated for the mean using the Student's t distribution. In the 3G scenario we cannot apply this methodology because of the limitations of the data plan used by the smartphones (200 MB/month at full speed). In this case, we execute a warm-up iteration and calculate mean values and confidence intervals for the next 3 iterations. Additional measures are taken to reduce possible sources of interference with the instrumented code, such as disabling background data synchronization on the smartphones and manually stopping several applications commonly included in Android or supplied by the phone manufacturer that are continuously executed in the background, such as messaging services, email clients, location services, etc.

Experimental Environment

The CAPS, SD, and OBS datasets are accessible from an Apache HTTP server (http://httpd.apache.org) that emulates a real SOS server. The HTTP server runs on a 2.26 GHz Dual Nehalem Quad Core CPU with 24 GB DDR3 RAM w/ECC, offering download and upload speeds of up to 100 Mbps. The server is located in a data center in Dallas, Texas. The (emulated) SOS server handles GetCapabilities, DescribeSensor, and GetObservation requests by returning pre-existing response files. For the case of data provision (RS, IO), the server emulates RegisterSensor and InsertObservation requests by copying the messages' content to a server folder. For each of the encoding formats we use streaming parsers to support the data parsing tasks. We used the XMLPull (http://www.xmlpull.org) parser available in Android for XML, the streaming parser included in GSON (http://code.google.com/p/google-gson/) for JSON, and, for EXI, we ported to Android the StAX API (Streaming API for XML [65]) provided by EXIficient. The choice of streaming parsers was made to avoid unfair comparisons between APIs that incur additional tasks, such as the creation of application data structures or performing data type conversions beyond the task of parsing itself. In addition, the use of streaming APIs is preferred for networking applications, as information can be processed as it is received and memory is used much more efficiently [66]. All the parsers considered in our experiments return a very similar stream of events that allows reading the content of each data entity in a similar manner (e.g., XML element, JSON object). As mobile terminals we use two Android-based smartphones: the HTC Desire and the Samsung Galaxy SII (SGS2). The HTC Desire smartphone, released in March 2010, has a 1 GHz CPU and 576 MB of RAM, and runs Android OS v2.2. The SGS2 mobile phone was released about a year later with a dual-core 1.2 GHz Cortex-A9 CPU and 1 GB of RAM, running Android OS v2.3.4. We selected two devices with different processing power to attempt to measure, although very roughly, how much improvement could be expected from using more powerful devices. In the Wi-Fi scenario, the smartphones connect to an 802.11g access point with a theoretical speed of up to 54 Mbps. During our experiments we monitored the signal strength, which ranged from −57 to −72 dBm. A connection with these values can be considered as having good signal strength. Changes in signal strength levels can have a great impact on TCP throughput, and low signal levels are more susceptible to random disturbances [67].
We also calculated approximate values for the round-trip time, measured as the time elapsed from the moment a request is sent to the server to the moment the response starts to arrive at the client application. These values were above 150 ms and include, in addition to the actual latency of the underlying network, the software overhead of the implementation of HTTP and network protocols in Android. In the 3G scenario, a UMTS/HSDPA connection is used, with a download speed advertised by the carrier of up to 7.2 Mbps. Upload speed values are not provided by the carrier. Similarly, the signal strength was monitored during the experiments, with values ranging between −60 and −65 dBm, which can also be considered good signal levels. Round-trip times were above 500 ms and showed great variability. These values can be considered normal, as previous experiments with several network carriers reported median round-trip times between 300 ms and 500 ms at the TCP level [68]. The authors of those experiments also reported that measurements taken at different times and locations vary widely, even for a single carrier, resulting in transfer rates lower than the advertised rates. The experiments in the networked scenarios are executed at a fixed location (Castellón de la Plana, Spain), hence variations in the conditions of the experiments because of device mobility do not need to be considered.

Results

In this section we present the results of the performance experiments. Because of the extent of the results, we present in detail those for the CAPS, OBS and IO datasets. The results obtained for the SD dataset were very similar to those of small files in CAPS. Similarly, the results for RS had a lot in common with small files in IO.

CAPS Dataset

Table 2 summarises the information related to the size and content density of the files in this dataset. Content density (CD) is calculated as the percentage of an XML document that is "actual data" (attribute and element values), as opposed to the portion that is "structure" (namespace information, tags, etc.). According to size and content density, files are classified as follows [69]:
• High CD (Cat. I): content density > 33%, data predominates.
• Low CD: content density < 33%; these are documents with highly structured data, further separated by document size into small (Cat. II) and large (Cat. III) documents.

In Table 2 we can see that six of the CAPS files have a high CD. The rest of the files belong to Cat. III. The percentage reduction achieved by the alternative formats, along with the content density of each file, is represented in Figure 1. In this figure we can observe that by using JSON we obtain a 40%-60% size reduction. By using EXI, in the best cases we obtain a 90% reduction without compression and a 98% reduction with compression. The compression ratio increases as files become larger. In the case of JSON, it is clear that the percentage reduction cannot exceed 100% minus the content density, as all the "actual data" in the XML file must also be included in the JSON file.

Local Scenario

As mentioned before, local parsing time is the time taken to process a file without considering disk transfer or network delays. Figures 2 and 3 show parsing times for CAPS on both mobile phones. In the figures, the x-axes represent the size of the original XML files, while the y-axes represent parsing times in milliseconds.
For the HTC phone (Figure 2) we observe that JSON presents the best processing times, with EXI and XML showing very similar results, and EXI-C, as expected because of the need to decompress the information, being the slowest option. For the SGS2 phone (Figure 3) the relative difference between the formats is almost the same, but the time values are smaller than for the HTC phone. The execution times on the SGS2 phone are 20% to 60% faster than on the HTC phone. In all cases the execution times appear to increase linearly with file size. In this scenario, using JSON is about 3 to 8 times faster than using XML or EXI, and 10 to 20 times faster than using EXI-C. A study of the impact of network latency on users of interactive applications was presented in [70] (cited by [71]). This study measured users' experience and satisfaction as a function of application response times. The authors categorized the subjective impression of response times as follows: crisp (<150 ms), noticeable to annoying (>150 ms and <1 s), annoying (>1 s and <5 s), and unbearable (>5 s). According to this classification, the parsing of small CAPS files can be integrated into an interactive mobile application without producing annoying or even noticeable delays. For larger files, however, this is only possible if JSON is used. Fortunately, streaming parsers allow information to be shown to users as it is processed, improving the responsiveness of the application. As Android offers a multi-threaded environment, one thread can parse a capabilities file while another displays the part of the information already processed, e.g., the list of observation offerings.

Wi-Fi Scenario

The Wi-Fi scenario includes the time taken to download sensor-related data over a Wi-Fi connection. In this scenario, round-trip times are higher than parsing times for small files (<100 KB) in all formats except EXI-C. Hence, time spent on communication represents most of the overall execution time. When larger files are transferred, the ratio between communication and parsing time varies greatly between the formats. For example, communication takes between 6 and 15 times longer than parsing for JSON files, between 2 and 5 times for XML files, between 0.8 and 1.3 times for EXI files, and between 0.15 and 0.3 times for EXI-C. Figure 4 shows the overall processing times for the SGS2 smartphone. The numbers for the HTC follow a similar trend but are 40%-80% larger. Under the network conditions of the Wi-Fi scenario, XML is the format with the worst execution times, as it spends a lot of time transmitting much larger files. EXI-C shows slightly better results, as its larger parsing times are compensated by much smaller communication times. JSON and EXI outperform XML and EXI-C, being almost twice as fast for files above 100 KB. Even when communication delays are considered, parsing and data transfer do not seem to be an obstacle in terms of execution speed when small files are processed, as for all formats these values are consistently below 1 s. Once more, additional measures have to be taken so as not to harm application responsiveness when large files are processed.

3G Scenario

The last scenario uses a 3G connection as the communication link. In this case, we exclude the largest file in CAPS because, if included, the overall data to be transmitted would exceed the limitations of the data plan used by the 3G connection. In this scenario, round-trip times, around 500 ms, are even higher than the parsing times of small EXI-C files.
Because of this, and the much smaller size of EXI-C files, the overall times for EXI-C are the best when small files are processed. Although counterintuitive at first, this result makes perfect sense: the burden brought by decompression is tolerable if files are small, because most of the overall time is spent on communication. XML again presents the worst results, and on average EXI shows the best. Figure 5 shows the overall processing times for the SGS2 smartphone. The measurements for the HTC smartphone are not shown, to simplify the exposition. In general, the time spent on communication when using 3G was between 1.1 and 7 times longer than in the Wi-Fi scenario. The lack of control we have over the networking conditions in this scenario, such as signal strength or bandwidth, along with the low number of iterations executed, does not allow us to generalize the results presented above. Nevertheless, we consider that they reflect what might be expected under less reliable networking conditions: formats with smaller sizes will be favoured (EXI and EXI-C) even when more CPU time is needed on the device to process them.

OBS Dataset

The OBS dataset contains files encoded in O&M. Table 3 shows the size of these files for each format, as well as their content density. The content density values, almost 100% for larger files, are the most distinctive attribute of these files. This happens because observation values are typically encoded in an om:Observation element that packages a group of values sharing some metadata in a single block (Figure 6). This allows the size of XML messages to be reduced in comparison with the use of more specialised observation types (e.g., om:Measurement) [62]. The block of observations represents almost the total size of the larger files. As a consequence, no reduction in size is achieved by using JSON or EXI. On the other hand, EXI-C still presents excellent compression rates (80%-96%). For this dataset we calculate the parsing times for JSON and EXI, but we decided to exclude them from the Wi-Fi and 3G scenarios. We made this decision because the only advantage of using JSON, its faster parsing times, is measured in the first part of the experiments. On the other hand, EXI does not offer any important advantage, as it has shown parsing times similar to XML and the size of EXI files is not much smaller for this dataset. In addition, we only present results for the SGS2 smartphone, as the performance differences with the HTC phone are similar to what we have shown in the previous section.

Local Scenario

Parsing times for OBS files show a behaviour similar to that of CAPS files (Figure 7). Again, JSON is the format with the lowest processing times, being consistently between 1.4 and 5 times faster than XML and EXI. The main difference for OBS files is that the time spent on decompressing EXI-C files is considerably larger for this dataset, up to 25 times slower than JSON. The JSON, XML and EXI parsers spend most of the time reading the string value containing the observations, and as a consequence they have little extra overhead (parsing tags or object names, processing namespace information, etc.). The SGS2 smartphone is 20%-60% faster than the HTC phone for all formats. Encoding observations as a single string value presents an unwanted side-effect: it is impossible to start processing these values until the whole block of observations has been parsed.
Encoding observations as a single string value has an unwanted side effect: it is impossible to start processing these values until the whole block of observations has been parsed. This prevents information from being processed incrementally, which was mentioned in previous sections as a solution for improving application responsiveness. As a consequence, when large files are processed, the delays will be annoying for users regardless of the format used, and they can even be unacceptable in the case of EXI-C. The option of using more specialised observation types, such as om:Measurement, where individual values are separated and accompanied by their own metadata sections, does not seem feasible for handling large volumes of data. In a small experiment, we converted the three largest files in OBS to use om:Measurement, which resulted in a 10 to 20 times increase in file size. Exchanging 30 to 90 MB files over wireless connections does not sound like a good alternative in this case.

Figure 8 shows the overall execution times for both scenarios. Unlike for the files in CAPS, the time used to decompress EXI-C data was not compensated by the larger communication times of the other formats in most of the cases. Communication times for larger EXI-C files were only a fraction of the time spent on decompression (Figure 9). In Figure 8 we can also observe that larger XML files transmitted over 3G show even better overall execution times than EXI-C files transmitted over Wi-Fi. On the other hand, for small files the use of compression improved the performance for both types of communication links.

IO Dataset

The experiments with the IO dataset differ from those presented before in that data is generated/captured by the phone itself or by other sensor(s) attached to or communicating with it, and then sent to the server. In this case, we measure the ability of mobile phones to generate the required messages in the given formats. The messages generated in the experiment contain synthetic values of a measured phenomenon (e.g., temperature). We built messages with different numbers of measurements, as reflected in Table 4, trying to cover different possible scenarios, such as data transmitted continuously (e.g., in a near real-time application), or data stored temporarily on the device because of a lack of connectivity or the unavailability of efficient connection links (e.g., it is more efficient to upload larger amounts of data only when Wi-Fi connections are available). Table 4 also shows the size of the files in IO for each format, as well as their content density. Similarly to the observation files, the content density is almost 100% for large files and the reduction in size achieved by JSON and EXI is minimal. On the other hand, with EXI-C we achieve compression rates above 98% for files larger than 100 KB.

Local Scenario

Instead of measuring parsing time, in this case we measure serialization time, i.e., the time taken to convert application data into a message in the required format. In this scenario the resulting messages are serialized to the device's main memory. All the experiments for the IO dataset assume that application data is available as string values. The reason for this assumption is that both XML and EXI documents are created using the StAX API, which only allows strings to be specified as attribute or element values. As data type conversion may require a large portion of parsing or serialization time [25], we considered it a better option to perform this conversion incrementally when the data is stored locally on the device, and not when it is sent to the server. We also build JSON messages differently: rather than using GSON, we take advantage of the fact that these files are text-based and build messages simply by concatenating string values.
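A minimal sketch of this concatenation approach is shown below (Python here; the experiments used Java). The message layout is hypothetical and stands in for the actual O&M-style JSON encoding; because the values are already available as strings, the message can be assembled with plain string joins, avoiding a general-purpose serializer.

```python
def build_observation_message(offering, values):
    """Build a JSON observations message by string concatenation.

    `values` is a list of (timestamp, measurement) pairs, both already
    formatted as strings, mirroring the assumption that application data
    is available as string values. Assumes the strings contain no
    characters that would need JSON escaping.
    """
    # One "timestamp,value" record per measurement, packed into one block.
    block = ";".join(f"{t},{v}" for t, v in values)
    parts = [
        '{"offering":"', offering,
        '","count":', str(len(values)),
        ',"values":"', block, '"}',
    ]
    return "".join(parts)

msg = build_observation_message(
    "temperature", [("2012-08-30T10:00:00Z", "21.4"),
                    ("2012-08-30T10:01:00Z", "21.6")])
print(msg)
```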
Serialization times are shown in Figure 10. These values are very similar to those of OBS, with the difference that the time is spent on building the messages and not on parsing them. Once again, JSON is processed faster than the rest of the formats, although in this case similar execution times could have been obtained for XML if we had built the messages in the same way, by concatenating string values instead of using the XML serializer. EXI-C is 15-30 times slower than JSON, 5-10 times slower than XML, and 3-6 times slower than EXI.

Wi-Fi and 3G Scenarios

As for the OBS dataset, for these scenarios we only include the results for XML and EXI-C. We have shown previously that, because of the way observations are encoded, JSON and EXI offer few advantages over XML regarding size reduction and time spent on communication. When a communication link is considered, we measure the time taken to build the message, send it to the server, and receive an acknowledgment of receipt. The results for the Wi-Fi and 3G links are shown in Figure 11. The figure shows that in the case of Wi-Fi, XML performs better than EXI-C, as the larger processing times of the latter are not compensated by the larger communication times of the former. When a slower connection link is used, compression seems to be the better option. Upload links are expected to be much slower than download links [68], so the potential gain in execution time from sending smaller messages is higher. The figure shows that when uploading messages above 100 KB, EXI-C exhibits much better execution times than XML, and this advantage seems to grow quickly as the message size increases.

Summary and Discussion

The experiments presented in the previous sections show that alternative formats to XML can be used to improve the performance of mobile SWE applications. Regarding the reduction of data size, JSON allowed reductions of between 40% and 60% of the original XML files, while EXI without compression produced reductions ranging from around 80% to 90%. The problem with these formats is that they failed to produce a significant reduction for files with very high CD (above 90%), such as the OBS and IO files. On the other hand, EXI-C showed excellent results for all files, with reductions ranging between 80% and 98% regardless of content density. Regarding parsing times, JSON presented the best results, being 1.4-8 times faster than XML or EXI, and 10-25 times faster than EXI-C. Serialization times showed a similar trend. Parsing and serialization times for XML and EXI were very similar for all datasets, with some advantage for XML in serialization. According to previous experiments (see Section 3.2), faster processing times should be expected for EXI; nevertheless, that was not the case in our experiments. A possible reason could be that the code of the EXI parser we used was ported to Android without any significant modification, and thus is not optimized for execution on a resource-constrained device. The option of using compression imposed an important performance penalty on local processing, being up to 30 times slower than JSON for serialization. Execution times on the SGS2 smartphone were regularly 20%-60% faster than those on the HTC phone.
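The parsing and serialization figures above come from repeated timed runs on the two handsets. A minimal sketch of such a measurement loop is shown below (Python for brevity; the actual experiments ran on Android). The iteration count, the use of the median, and the tiny inline documents are illustrative choices of ours.

```python
import json
import time
import xml.etree.ElementTree as ET

def median_time(fn, data, iterations=20):
    """Median wall-clock time of fn(data) over repeated runs."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn(data)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

# Tiny stand-ins for real CAPS documents; real inputs would be files
# ranging from a few KB to several MB.
json_text = '{"offerings": [{"name": "temperature"}, {"name": "humidity"}]}'
xml_text = "<caps><offering><name>temperature</name></offering></caps>"

print("JSON parse:", median_time(json.loads, json_text))
print("XML parse: ", median_time(ET.fromstring, xml_text))
```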
Processing times for small files (original XML file < 100 KB) in any of the datasets showed that this task can be integrated into an interactive mobile application without producing annoying or even noticeable delays; for larger files this is only possible if JSON is used. When large files are processed, the responsiveness of these applications can be improved by showing the information as it is processed. However, this method cannot be applied to the OBS and IO files because they encode observations as blocks of values, preventing them from being processed incrementally.

When communication links were considered, communication times had a large impact on the overall processing times, taking longer than local computation for some of the formats, especially when no compression was involved. Moreover, as the quality of the communication link decreased (load, maximum bandwidth, etc.), compression started to perform better than XML and even the rest of the formats, which makes compression an alternative worth considering despite the performance penalty imposed on the client side. Another point in favour of EXI-C compared with XML is that, in the cases where XML performed faster (Figure 8), the gap between them was substantially reduced when more powerful hardware was used, which is a good sign considering how fast phone hardware is evolving. Last, trading more local processing for less transmission time may have a positive impact on battery life because, as stated in [72], the energy per instruction executed by phone CPUs is dropping fast and will continue to drop, while the energy used by communication hardware will likely drop at a far slower pace.

Conclusions

In this article we have presented a performance analysis of using SWE standards as data communication protocols in smartphone applications that consume and produce environmental sensor data. Our experiments were aimed at analysing to what extent the performance problems related to transmitting and processing potentially large messages encoded in XML can be alleviated by using alternative uncompressed and compressed formats such as JSON and EXI. Our results suggest that using EXI with compression (EXI-C) greatly reduces the size of exchanged messages, but adds a high processing overhead on the mobile phone. Nevertheless, it can be an appealing alternative if information is exchanged over very slow or unreliable communication links. This option also seems to be favoured by the increase in the processing capabilities of mobile phones and the drop in the amount of energy consumed per instruction executed. Under certain conditions, EXI showed a very good trade-off between size reduction and processing times, even though it does not use compression, which implies less energy consumption. The disadvantage of this alternative is that it does not reduce the size of observation blocks, which is a major drawback because the exchanged information is expected to consist mostly of observation values. This problem could be solved by encoding these values in a different way, in which strings, timestamps and measurement values are not mixed. A pragmatic trade-off could be to use EXI for CAPS, SD, and small observation files, and to use EXI-C only when a large set of observation values must be exchanged or when information is transmitted over very slow or unreliable communication links.
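This pragmatic trade-off can be phrased as a small selection rule. The sketch below encodes it in Python under assumptions of ours: the 100 KB threshold mirrors the file-size boundary used in the experiments, and the notion of a "slow or unreliable" link is reduced to a boolean flag supplied by the caller.

```python
def choose_format(dataset, payload_bytes, link_is_slow_or_unreliable):
    """Pick an encoding following the pragmatic trade-off discussed above.

    dataset: one of "CAPS", "SD", "OBS", "IO".
    """
    LARGE = 100 * 1024  # the experiments used ~100 KB as the small/large boundary

    if link_is_slow_or_unreliable:
        return "EXI-C"  # smallest messages win on bad links
    if dataset in ("OBS", "IO") and payload_bytes >= LARGE:
        return "EXI-C"  # large observation blocks: compression pays off
    return "EXI"        # otherwise plain EXI balances size and CPU time

assert choose_format("CAPS", 40 * 1024, False) == "EXI"
assert choose_format("OBS", 2 * 1024 * 1024, False) == "EXI-C"
assert choose_format("SD", 10 * 1024, True) == "EXI-C"
```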
In terms of compactness and data reduction, JSON provides a size reduction of 50% on average for files with low to medium content density. For files with very high content density, the reduction is far smaller, as the content density imposes a lower bound on the encoded size. Nevertheless, because JSON offers faster parsing times and can be seamlessly integrated into Web-based applications using JavaScript, its use could bring benefits to SWE applications that do not handle large volumes of observations or for which network bandwidth is not an issue.

In summary, the encoding format used for data communication in ubiquitous sensing scenarios with smartphone applications is clearly a determining factor, among others, for improving performance. The experiments presented in this article may help application developers understand the interplay between encoding formats and data communication performance, and thereby enhance application responsiveness and user satisfaction in ubiquitous sensing applications.
2D materials for conducting holes from grain boundaries in perovskite solar cells

Grain boundaries in organic-inorganic halide perovskite solar cells (PSCs) have been found to be detrimental to the photovoltaic performance of devices. Here, we develop a unique approach to overcome this problem by modifying the edges of perovskite grain boundaries with flakes of high-mobility two-dimensional (2D) materials via a convenient solution process. A synergistic effect between the 2D flakes and perovskite grain boundaries is observed for the first time, which can significantly enhance the performance of PSCs. We find that the 2D flakes can conduct holes from the grain boundaries to the hole transport layers in PSCs, thereby forming hole channels at the grain boundaries of the devices. Hence, 2D flakes with high carrier mobilities and short distances to grain boundaries can induce a more pronounced performance enhancement of the devices. This work presents a cost-effective strategy for improving the performance of PSCs by using high-mobility 2D materials.

Introduction

Perovskite solar cells (PSCs) based on organic-inorganic halide perovskites have been studied increasingly in recent years. Since the first report in 2009 of a perovskite material used in solar cells 1, the power conversion efficiencies (PCEs) of PSCs have reached certified values higher than those of solar cells based on multi-crystalline Si, cadmium telluride or copper indium gallium diselenide, according to the efficiency chart provided by the National Renewable Energy Laboratory (NREL) 2. Organic-inorganic halide perovskites have shown many advantages over conventional semiconductors for photovoltaics, including long carrier lifetimes, high light absorption, easy processing and low fabrication cost 3-11. Therefore, PSCs are promising for practical applications in the future.

Grain boundaries (GBs) in PSCs have been found to be detrimental to the photovoltaic performance of devices 12. Numerous papers have reported that defects at perovskite GBs should be passivated by suitable materials, such as quaternary ammonium halides 13, fullerene derivatives 14-16 and CH3NH3I (MAI) 17, to alleviate carrier recombination and consequently improve device performance. Here, we report a novel method to overcome the drawback of perovskite GBs without passivating defects. Several 2D materials, including black phosphorus (BP), MoS2 and graphene oxide (GO), are selectively deposited on the edges of perovskite GBs by a solution process. These 2D materials have high carrier mobilities, ultrathin thicknesses and smooth surfaces without dangling bonds 18-22. The PCEs of the devices are substantially enhanced by the 2D flakes, with BP flakes inducing the highest relative enhancement of ~15%. Notably, we find that under certain conditions, GBs modified with 2D materials are favourable for device performance. Therefore, a synergistic effect between 2D flakes and perovskite GBs is observed for the first time. Although nanotechnology using 2D materials in PSCs has been reported in some papers 20,23-26, the synergistic effect between 2D flakes and perovskite GBs has not been reported until now. To better understand the underlying mechanism of the above effect, device simulation is conducted by using commercial software 27.
The hole conduction processes from GBs to 2D flakes in PSCs are clearly demonstrated, showing that the GBs and 2D flakes all act as hole channels in the devices. The simulation results confirm that the performance enhancement induced by BP is higher than that induced by the other 2D materials because BP has the highest hole mobility 28-34. In addition, the modification of 2D flakes on the perovskite grains away from GBs has little effect on device performance, indicating that the synergistic effect of 2D flakes and perovskite GBs is essential to the performance enhancement in our devices.

Modification of BP flakes on perovskite films

First, we modified the surfaces of CH3NH3PbI3 (MAPbI3) perovskite active layers in PSCs with BP flakes by a solution process. BP has recently emerged as a promising 2D semiconducting material for various applications owing to its tunable direct bandgap and high carrier mobility 28. A BP dispersion solution was prepared from BP crystals using an ultrasonication method (Fig. S1a) 35. Anhydrous isopropanol (IPA) was chosen as an orthogonal solvent so that the BP dispersion could be coated on perovskite films. BP flakes were first coated on flat Si substrates and characterized with atomic force microscopy (AFM) (Fig. S1b). The average size and thickness of the BP flakes are estimated to be 39 ± 19 nm and 4.3 ± 2.0 nm, respectively (Fig. S1d, e). The high-resolution transmission electron microscopy (HRTEM) image of a BP flake confirms the orthorhombic crystal structure of BP (Fig. S2). Ultraviolet photoelectron emission spectra (UPS) of the BP thin films on silicon substrates were obtained (Fig. S3a). The valence band maximum (VBM) of BP is found to be ~−5.32 eV. Based on the PL spectrum of BP films (Fig. S4), the bandgap of the BP flakes is estimated to be approximately 1.53 eV. Therefore, the conduction band minimum (CBM) of the BP flakes is calculated to be CBM = VBM + Eg = −5.32 eV + 1.53 eV = −3.79 eV.

PSCs were fabricated with a device configuration of glass/fluorine-doped tin oxide (FTO)/compact TiO2 (c-TiO2)/mesoporous TiO2 (mp-TiO2)/MAPbI3/BP/2,2',7,7'-tetrakis-(N,N-di-p-methoxyphenylamine)-9,9'-spirobifluorene (spiro-OMeTAD)/Au, as shown in Fig. 1a. MAPbI3 perovskite films were prepared by an anti-solvent-assisted crystallization method 36. BP flakes were deposited on the perovskite layers by spin coating the BP dispersion solution onto their surfaces. Figure 1b shows the band structure of the PSCs, where some of the energy levels are derived from the UPS spectra (Fig. S3) and others are taken from the literature 37,38. It is notable that the VBM of BP (−5.32 eV) matches very well with those of the MAPbI3 perovskite (−5.50 eV) and spiro-OMeTAD (−5.15 eV). The cascade band structure of the multilayer device is favourable for the separation of electrons and holes and can reduce the carrier recombination rate at the interface between the perovskite and the hole transport materials (HTMs).

Figure 1c shows representative current density-voltage (J-V) curves of PSCs without (control device) and with BP flakes modified on the perovskite surfaces. The amount of BP flakes on the surfaces was controlled by coating the solution different numbers of times. The control devices exhibit only a moderate PCE (17.97%), while a clearly improved efficiency (19.85%) is obtained when the BP solution is coated once. The PCE further improves to 20.32% when BP is spin-coated twice. However, the device performance cannot be improved further when BP is coated three times.
The corresponding photovoltaic parameters for the champion devices and the average values for 30 devices are listed in Table S1. Figure 1d shows the external quantum efficiency (EQE) spectra of the above PSCs. In comparison with the control devices, a remarkable increase in EQE can be observed for the BP-modified devices over the whole wavelength range (from 300 to 800 nm), which explains the enhanced Jsc in the J-V characteristics. The integrated current density based on the highest EQE spectrum (two coats of BP) is 22.5 mA cm⁻² (Fig. S5a), which is very close to the short-circuit current density presented in the J-V curve of the device. Figure 1e shows histogram statistics of the PCEs for 30 devices fabricated with the BP modification (two coats) and 30 control devices. The average PCE of the PSCs is improved by the BP flakes from 16.94% to 19.54%, a relative enhancement of 15.3%. By applying a voltage bias at the maximum power point (0.935 V) to the champion device, a stabilized photocurrent of 21.30 mA cm⁻² and an efficiency of ~19.92% are obtained (Fig. S5b), which is equal to the average of the PCEs obtained from the reverse (20.32%) and forward (19.53%) scans of the J-V curves. We also notice that the hysteresis of the J-V curves is greatly reduced by the incorporation of BP flakes (Fig. S5c and Table S2). Control experiments indicate that the solvent IPA has no obvious effect on the performance of PSCs (Fig. S5d).

Effect of BP flakes on perovskite grain boundaries

To shed light on the effect of the BP flakes, the morphology of the MAPbI3 perovskite films was characterized with scanning electron microscopy (SEM), as shown in Fig. 2. The SEM images demonstrate that the perovskite films have an average grain size of ~300 nm without pinholes. It is notable that the coverages of BP flakes on the film surfaces are only 4%, 8% and 10% for 1, 2 and 3 coats of the BP solution, respectively. Although the coverages are rather low, almost all the BP flakes are located on the GBs of the perovskite films. Notably, space charge limited current (SCLC) characterizations of hole-only and electron-only devices (Fig. S6) confirm that the introduction of BP flakes at perovskite GBs has little effect on the electron and hole mobilities of the perovskite layer. Therefore, we believe the effect of BP flakes on PSCs is mainly due to the modification of GBs. The existence of BP flakes on the perovskite surface is also confirmed by energy-dispersive X-ray spectroscopy (EDX) mapping of phosphorus (Fig. S7). Notably, BP flakes on the GBs cannot passivate defects because they are physically adsorbed on the edges of GBs and no chemical bond is formed with the perovskite layer, which is confirmed by Fourier transform infrared (FTIR) spectroscopy measurements (see Fig. S8).

To determine why the BP flakes coat the GBs only, we measured the contact angle of IPA on perovskite films with different crystallinities (Fig. S9). The average contact angles are 8°, 16° and 27° for the amorphous, polycrystalline and single-crystal perovskite surfaces, respectively, indicating that the contact angle of IPA decreases with decreasing crystallinity of the perovskite films. The surface energy of IPA on a perovskite film is related to the contact angle θ through Young's equation 39:

γLV cos θ = γSV − γSL

where γSV, γSL and γLV are the interfacial tensions of the solid-vapor, solid-liquid, and liquid-vapor interfaces, respectively, and γSV − γSL is the adhesion tension.
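A quick arithmetic check (our own, assuming the same IPA liquid-vapor tension γLV on all three surfaces) makes the trend concrete:

\[
\gamma_{SV}-\gamma_{SL}=\gamma_{LV}\cos\theta,\qquad
\cos 8^{\circ}\approx 0.990,\quad
\cos 16^{\circ}\approx 0.961,\quad
\cos 27^{\circ}\approx 0.891 .
\]

The adhesion tension is thus largest on the amorphous surface and smallest on the single crystal, so the IPA-based dispersion preferentially wets the least crystalline regions.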
Therefore, the adhesion tension increases with decreasing crystallinity. In addition, GB grooves can form during the crystallization process due to the diffusion of ions away from the boundaries 40. Consequently, during the spin-coating process the BP solution tends to move into the GB regions, as shown in Fig. 2e, so that the BP flakes are deposited mainly on the GBs after evaporation of the IPA solvent.

Based on the above results, we present a schematic diagram in Fig. 2f to illustrate the mechanism of the performance enhancement induced by BP flakes in a PSC. Under light illumination, photo-generated holes tend to move to the GB and generate hole currents across the BP flakes. Because BP has a much higher hole mobility (~1000 cm² V⁻¹ s⁻¹) than spiro-OMeTAD (~10⁻⁴ cm² V⁻¹ s⁻¹), BP flakes can conduct hole currents from GBs more efficiently. This model is also based on the assumption that the hole current density along a GB is higher than that in the adjacent grains due to band bending near the GB (Fig. S10). To confirm this effect, we characterized our perovskite films on TiO2 using c-AFM and found that the photo-induced hole current density at a GB is normally higher than that on the adjacent grain surfaces, which is consistent with the results in the literature 41. Moreover, the band bending that occurs close to a GB can enhance electron-hole separation due to the built-in electric field in this region. Therefore, GBs act as hole channels in perovskite active layers, and BP flakes can enhance hole transfer from GBs in the devices to improve device performance.

In addition, we were keen to know whether BP flakes can conduct electrons in PSCs to improve device performance. Hence, PSCs with an inverted structure of glass/ITO/poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] (PTAA)/MAPbI3/BP/phenyl-C61-butyric acid methyl ester (PCBM)/bathocuproine (BCP)/Ag were prepared 42. BP flakes were deposited on the perovskite surfaces under the same conditions as in the above experiments before coating the PCBM electron transport layers (ETLs). However, the performance of the inverted PSCs is not enhanced by the addition of BP flakes (Fig. S11), which can be attributed to the following two reasons. First, GBs in perovskite films can only conduct holes; thus, the modification of BP flakes on GBs beneath the ETL cannot facilitate the transfer of electrons in these devices. Second, the band structure of BP flakes is not favourable for electron transfer in these devices since the CBM level of BP is ~0.15 eV higher than that of the perovskite layer.

To better understand the effect of BP flakes on MAPbI3 perovskite films, we characterized the steady-state and time-resolved photoluminescence (PL) behaviour of the following three samples: perovskite, perovskite/spiro-OMeTAD and perovskite/BP/spiro-OMeTAD films. As shown in Fig. 3a, the perovskite film without any modification exhibits the highest PL peak. The introduction of a spiro-OMeTAD layer on top of the perovskite greatly reduces the PL intensity, which is further quenched by the presence of a BP layer, indicating more efficient hole transfer in the perovskite/BP/spiro-OMeTAD film. The time-resolved PL decay curves presented in Fig. 3b show a similar effect. The decay curves are fitted with a biexponential model, y(t) = y0 + A1·e^(−t/τ1) + A2·e^(−t/τ2) (Fig. S12).
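A biexponential fit of this kind can be reproduced with standard tools. The sketch below (Python, scipy) fits the stated model to a decay trace; the synthetic data stands in for the measured PL transients (seeded with the bare-perovskite lifetimes quoted below), and the initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, y0, a1, tau1, a2, tau2):
    """y(t) = y0 + A1*exp(-t/tau1) + A2*exp(-t/tau2)."""
    return y0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic stand-in for a measured PL decay (time in ns).
t = np.linspace(0, 100, 500)
y = biexp(t, 0.02, 0.6, 3.4, 0.4, 25.3) + np.random.normal(0, 0.005, t.size)

p0 = [0.0, 0.5, 2.0, 0.5, 20.0]  # rough initial guesses
params, _cov = curve_fit(biexp, t, y, p0=p0)
print("tau1 = %.1f ns, tau2 = %.1f ns" % (params[2], params[4]))
```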
The derived PL lifetime constants (τ1 and τ2) of the bare perovskite are 3.4 ns and 25.3 ns, which decrease to 1.4 ns and 9.2 ns with spiro-OMeTAD and shorten further to 0.8 ns and 2.9 ns after the BP modification. The improved PL quenching yield of the BP-modified devices indicates an enhanced hole transfer rate from the perovskite to spiro-OMeTAD via the BP flakes on the GBs. These results are consistent with the improved efficiency of BP-modified PSCs. To further investigate the charge recombination behaviour of the PSCs, we performed electrochemical impedance spectroscopy (EIS) measurements on the devices under light illumination of 100 mW cm⁻². Figure 3c shows the EIS spectra of the devices at a bias voltage of 0.80 V. Two distinct regions can be observed in the Nyquist plots: a high-frequency region correlated with the charge recombination process (large half circle) and a low-frequency region corresponding to slow dielectric relaxation 43. Figure 3d shows the recombination resistances (Rrec) of the two PSCs at different bias voltages, derived from the Nyquist plots (Fig. S13). Both devices show a decreasing Rrec with increasing bias voltage due to increased carrier densities 43,44. A higher Rrec value and longer carrier lifetime can be observed in the BP-modified solar cell at any bias voltage, indicating a slower recombination rate induced by the BP flakes; thus, the improved hole extraction rate from the perovskite to spiro-OMeTAD can be attributed to the BP flakes.

The stability of BP-modified PSCs was investigated in air. The statistical data of the PCEs of the devices are presented in Fig. S14. The BP-modified PSCs and the control devices maintain ~92% and ~76% of their initial average PCE after 1000 h, respectively, indicating that the devices with BP flakes are more stable than the control devices. This improved stability can be attributed to the BP flakes partially blocking the diffusion of outside ions, oxygen and moisture into the GBs 45-47. On the other hand, Ahn et al. reported that the GBs in perovskite films are unstable in the simultaneous presence of charge and moisture 48. BP flakes can efficiently conduct holes and decrease the density of holes accumulated at GBs, which can be another reason for the improved stability of BP-modified devices.

Grain-size effect of perovskite films with the BP modification

To demonstrate the effect of GBs in PSCs, we prepared MAPbI3 perovskite films with different grain sizes by controlling the thermal annealing periods. As shown in Fig. 4, perovskite films with average grain sizes of 200, 250, 300 and 390 nm were prepared after thermal annealing at 100°C for 10, 30, 60 and 120 min, respectively. For the devices without the BP flake modification, the device efficiency increases with increasing grain size (Table S3 and Table S4), which is consistent with the results in the literature 42,49. After the BP flake modification under the same conditions, the PCEs of the devices improve dramatically. When the average grain size is ~300 nm, the devices show the highest PCE and the maximum relative enhancement. This result indicates that a suitable number of GBs is favourable for the performance of PSCs, because the photocurrents in the devices can be conducted out along the GBs (Fig. S10). Notably, devices with smaller grain sizes (200 and 250 nm) show lower PCE enhancements when the GBs are modified with BP flakes. The reason can be found in the cross-sectional SEM images of the perovskite films shown in Fig. 4.
When the grain size is smaller than 300 nm, the perovskite grains do not span the films from bottom to top. In this case, the GBs in the middle of the film will inhibit electron transport in the devices, although they continue to act as hole conductors (Fig. S10f) 50.

Modification of other 2D material flakes

Solutions of other 2D materials, including MoS2, GO and BP quantum dots (BP-QDs), were prepared and deposited on the perovskite layers of PSCs following the same procedure 51-53. As expected, almost all of the MoS2 and GO flakes are located on the perovskite GBs (Fig. S15). The average sizes of the MoS2 and GO flakes are estimated to be ~40 nm. However, BP-QDs are too small to be observed with SEM. The average size of the BP-QDs observed with TEM is less than 5 nm (Fig. S16). It is reasonable to expect that the BP-QDs are located on the GBs of the perovskite films as well. The J-V curves and photovoltaic parameters of the PSCs modified with MoS2, GO and BP-QDs under optimum coating conditions were characterized (Fig. S17 and Table S5). The photovoltaic performances of the devices are improved by all the 2D materials, and the PCE enhancements are presented in Fig. 1f. Notably, BP flakes induce better photovoltaic performance than the other 2D materials, which can be attributed to BP having the highest carrier mobility. Moreover, BP-QDs can only conduct holes over a distance of several nanometres, and thus their effect is inferior to that of the BP flakes.

[Figure 4 caption (panels e-g): e, f J-V curves of the PSCs without (control) and with BP (2 coats) deposited on perovskite films; g average PCEs and relative PCE enhancement (after the BP modification) of PSCs with different perovskite grain sizes. The black curve (control) and red curve (BP) correspond to the PCEs of devices without and with the BP modification, respectively, while the green columns represent the relative PCE enhancement after the BP modification.]

Device simulation for the effect of 2D flakes

To better understand the performance enhancement of the devices, we performed a device simulation using the commercial software Silvaco. A device with a perovskite grain size of 300 nm was designed for simulation (Fig. S18, Table S6 and Table S7). The GB grooves were assumed to be 30 nm deep, which is consistent with our AFM measurements. To simulate the effect of BP in the devices, a BP layer was added on the surfaces of these grooves. As shown in Fig. 5, the J-V curves of the devices were simulated before and after coating the BP layer under standard illumination conditions (AM 1.5 G, power density: 100 mW cm⁻²). It is interesting to find that the simulated performance is very similar to our experimental results (Table S8). After the BP modification, the open-circuit voltage and the fill factor of the device are substantially improved. We can clearly see high hole currents conducted out of the GBs by the BP flakes in Fig. 5b. Here, the hole mobility of BP is 1000 cm² V⁻¹ s⁻¹, which is 7 orders of magnitude higher than that of spiro-OMeTAD 29. When we replaced BP with MoS2, which has a hole mobility of ~70 cm² V⁻¹ s⁻¹ 54-56, the PCE enhancement was lower than that provided by BP. We also simulated the effect of GO flakes (hole mobility assumed to be ~0.015 cm² V⁻¹ s⁻¹) on GBs, and a performance enhancement is observed due to the hole conduction of GO 57.
Our simulation results demonstrate that the performance enhancement induced by 2D flakes increases with the hole mobility of the flakes, which is consistent with our experimental results presented above (Table S9). When the hole mobility is too low, the hole currents from GBs cannot be efficiently conducted out by the HTMs; thus, the advantage of GBs, which can form vertical hole channels in PSCs, cannot be observed.

Mixed perovskite solar cells modified with 2D flakes

We then tested the effect of 2D flakes in high-efficiency (CsPbI3)0.05(FAPbI3)0.95(MAPbBr3)0.05 mixed PSCs 58. The mixed PSCs have the normal device structure shown in Fig. 1a. Similarly, the different 2D materials dispersed in IPA were spin coated on the perovskite surfaces. Notably, most of the 2D flakes are located on the GBs of the perovskite films (Fig. S19a, b). The 2D material interlayer can also form a cascade band structure with the (CsPbI3)0.05(FAPbI3)0.95(MAPbBr3)0.05 mixed perovskite and spiro-OMeTAD films (Fig. S3h), which is favourable for charge separation and transport. Similar to the MAPbI3 devices, we find that 2 coats of 2D flakes on the mixed perovskite lead to the best effect (see Fig. S20). As shown in Fig. 6a, the pristine mixed PSCs have a best PCE of 20.25%, Voc of 1.09 V, Jsc of 24.56 mA cm⁻² and FF of 75.6%, while the BP-modified champion device shows clearly improved photovoltaic parameters (Fig. S19d). Similar to the PSCs based on MAPbI3, the stability of the mixed PSCs is also improved by the BP flakes (Fig. S21). In the light stability test, the BP-modified device maintains ~91% of its initial PCE after light soaking for over 400 h, while the control device retains only ~65% of its original efficiency (Fig. S21c). It is reasonable to find that the performance enhancements induced by MoS2 and GO flakes are also obvious; the photovoltaic parameters are shown in Table S10. As shown in Fig. 6b, mixed PSCs with different 2D flakes were simulated with Silvaco, and the photovoltaic parameters enhanced by the 2D flakes fit very well with the experimental results. Considering that the carrier mobility of a 2D flake varies with its layer number and size, we assumed carrier mobilities of the 2D flakes ranging from 0.01 to 10⁴ cm² V⁻¹ s⁻¹ in the simulation and obtained the relationship between device efficiency and hole mobility (Fig. S22a and Table S11). As shown in Fig. 6c, regions I, II and III correspond to the carrier mobilities of the three different 2D flakes, namely, GO, MoS2 and BP, respectively. The device efficiency increases with increasing carrier mobility, while the enhancement saturates when the mobility exceeds 10³ cm² V⁻¹ s⁻¹. Therefore, BP should be the optimum 2D material for conducting carriers from the GBs in PSCs. Notably, the performance enhancement induced by BP flakes cannot be simply attributed to the cascade band structure of the perovskite/BP/spiro-OMeTAD interface, because the coverage of 2D flakes on the surface of a perovskite film is less than 10%. To better understand the underlying mechanism, we further simulated the effect of the location of the BP flakes on the device performance. Schematic diagrams of the device structures (without BP and with BP flakes at different positions) and the corresponding hole current density distributions are shown in Fig. S23. High hole current densities can be found at positions near BP flakes in the simulated mixed PSCs due to enhanced hole conduction at the perovskite/BP/spiro-OMeTAD interface.
Compared with the control devices (without BP), the highest relative PCE enhancement (~12.5%) is obtained when the BP flakes are located exactly on the GBs (Fig. 6d, Fig. S22b and Table S12), which is consistent with our experimental results shown above. In contrast, only a ~2% relative PCE enhancement is achieved when the BP flakes are located at the centre of the grain surfaces. Therefore, the synergistic effect of BP flakes on GBs in a PSC plays an essential role in the performance enhancement.

Discussion

We developed a convenient and cost-effective approach to modify the GBs of perovskite films by using high-mobility 2D flakes. Although the coverage of the 2D flakes on the perovskite films was only a few percent, most of the flakes were located on the perovskite GBs. Due to the high carrier mobilities of 2D materials, especially BP, hole transfer from the GBs was dramatically enhanced in the PSCs, resulting in substantial improvements in the efficiency and stability of the devices. Our results also indicate that the GBs in PSCs are not detrimental to device performance if the accumulated holes in the GBs can be efficiently conducted out. Under certain conditions, GBs can even be favourable for the photovoltaic performance of PSCs due to the built-in electric fields around them, which can facilitate photocarrier separation and transfer in the devices. Therefore, the perovskite GBs are electrically benign, which is consistent with previously reported theoretical calculations 59-61. Notably, we observed the synergistic effect of 2D flakes on GBs in PSCs for the first time. Both the carrier mobility and the location of the 2D flakes on the perovskite surface were essential to the performance enhancement. This work provides a guideline for modifying perovskite layers with novel high-mobility 2D materials to improve the photovoltaic performance and stability of PSCs.

Preparation of 2D materials

Commercial bulk BP crystals were used for the preparation of BP thin flakes. Ten milligrams of the BP crystals were first ground with a mortar in a glovebox, dispersed in 5 ml anhydrous IPA, and then ultrasonicated for ~20 h with an ultrasonic disrupter (BioSafer 650-92, output power ~400 W) at low temperature (maintained by an ice-water bath). The solution was then centrifuged at 3000 rpm for 40 min, and the upper dispersion was collected and re-centrifuged at 6000 rpm for 40 min. The BP thin flakes in the upper dispersion solution were carefully collected for tests and solar cell applications. MoS2, GO flakes and BP-QDs were prepared using similar procedures 51-53. The concentration of 2D flakes was approximately 0.1 mg/ml (measured by filtration and weighing).

Device fabrication

The PSCs with a normal structure in this work have a configuration of glass/FTO/TiO2/perovskite/spiro-OMeTAD/Au. The FTO substrates (14 Ω/sq) were cleaned with soap water, distilled water, acetone and IPA and treated with O2 plasma for 5 min. A thin c-TiO2 film (~30 nm) was then deposited on the FTO glass by spin coating at 4000 rpm from a solution of 0.15 M titanium isopropoxide in ethanol (with the addition of 1.5 mM HCl from 37 wt% hydrochloric acid) and was sintered at 500°C for 30 min in an oven. After cooling to room temperature, a mp-TiO2 layer (~150 nm) was spin coated on the c-TiO2 at 4000 rpm by using TiO2 paste (Dyesol 30 NR-D, 30-nm nanoparticles) diluted in tert-butanol (weight ratio 1:7) and sintered again at 500°C for 30 min.
After cooling down again, the substrates were immersed in a 40 mM clear aqueous solution of TiCl4 for 30 min at 70°C, washed with distilled water and IPA, blow-dried with an air flow and sintered again at 500°C for 30 min. Upon cooling to 150°C, the substrates were immediately transferred into a glovebox for perovskite film deposition. The perovskite precursor was prepared by dissolving 0.5763 g of PbI2 and 0.1986 g of MAI in 1 ml of a mixed solvent of anhydrous DMF and DMSO (volume ratio 8:1). After filtering, 60 μl of a CsPbI3 solution (1 M, dissolved in DMF and DMSO, 4:1) was added to the solution. The Cs-doped MAPbI3 precursor was spin coated on top of the TiO2 substrates at 4000 rpm for 30 s in a glovebox. After 10 s of the spin-coating process, approximately 150 μl of chlorobenzene was poured onto the spinning substrate. The substrates were then annealed on a hotplate at 65°C for 2 min followed by 100°C for 60 min in a glovebox. In the grain-size experiments, the annealing time at 100°C was tuned from 10 min to 120 min. To prepare the (CsPbI3)0.05(FAPbI3)0.95(MAPbBr3)0.05 mixed perovskite films, 1.013 g of FAPbI3 yellow powder 62, 41 mg of MAPbBr3 single crystals 63 and 38 mg of MACl were dissolved in a mixed solvent of 0.89 ml of DMF and 0.11 ml of DMSO. After filtering, 80 μl of a CsPbI3 solution (1 M, dissolved in DMF and DMSO, 4:1) was added to obtain the final mixed perovskite solution. The mixed solution was then spin coated on the mp-TiO2 films at 1000 rpm for 5 s and 5000 rpm for 25 s. After 20 s of the spin-coating process, 0.5 ml of ethyl ether was poured onto the spinning substrates. The perovskite precursor films were then annealed at 150°C for 10 min. For the BP-modified devices, the freshly prepared BP dispersion was spin coated on the perovskite films at 4000 rpm, followed by annealing at 70°C for 5 min. This process was repeated to deposit different amounts of BP flakes. After the BP flake modification, a spiro-OMeTAD solution (75 mg ml⁻¹ in chlorobenzene) was spin coated on top at 4000 rpm. The spiro-OMeTAD solution (1 ml) was doped with 28.5 μl of tBP and 17.5 μl of Li-TFSI (520 mg ml⁻¹, dissolved in acetonitrile). Finally, a gold electrode (~100 nm) was deposited on top through a shadow mask by thermal evaporation. The active area (the overlap between the FTO and Au contacts) of an individual solar cell was 8 mm². Each substrate was then encapsulated with epoxy and glass covers. The fabrication process of PSCs with the other 2D materials was similar to that of the BP-modified devices: MoS2, GO or BP-QD dispersions in IPA were spin coated on the perovskite films before the deposition of the spiro-OMeTAD and Au electrodes. In addition, hole-only devices (ITO/PTAA/MAPbI3/Au; ITO/PTAA/MAPbI3/BP/Au) and electron-only devices (ITO/SnO2/MAPbI3/PCBM/Ag; ITO/SnO2/MAPbI3/BP/PCBM/Ag) were prepared and tested for space charge limited current (SCLC) measurements.

Material and film characterizations

AFM measurements were carried out with a Bruker Nanoscope 8 system (tapping mode). TEM observations were carried out using a JEOL JSM 2100 F scanning transmission electron microscope operating at 200 kV. SEM images were obtained using a field-emission scanning electron microscope (Hitachi S-4300). FTIR measurements were performed using a Bruker Fourier transform infrared spectroscopy system.
UPS measurements were carried out with a VG ESCALAB 220i-XL ultrahigh vacuum surface analysis system equipped with a He-discharge lamp providing He-I photons at 21.22 eV. The base vacuum of the system was ~10⁻¹⁰ Torr, and a −5.0 V bias was applied during the measurements. The PL measurements were carried out using an Edinburgh FLSP920 fluorescence spectrophotometer with a 636-nm laser as the excitation light source. The c-AFM measurements were performed using an MFP-3D series instrument (Asylum Research) with light illumination from the side of the perovskite film. The AFM tips used were coated with Cr/Pt, and no bias voltage was applied during the test.

Device characterization

The J-V curves of the PSCs were measured in ambient air with a Keithley 2400 source meter under light illumination of 100 mW cm⁻² (Newport 91160 solar simulator, 300 W, equipped with an AM 1.5 G filter). The light source was calibrated frequently during the J-V tests by using a standard (calibrated) silicon reference cell to ensure accurate light intensity. The J-V characteristics were determined by applying an external voltage while measuring the current response. For a standard sweeping cycle, the external applied bias was changed from 1.2 to 0 V (reverse scan) and then returned to 1.2 V (forward scan) at a voltage scan rate of 30 mV s⁻¹. No preconditioning (such as forward bias or light soaking) was applied before the measurements. EQE spectra were measured with a standard EQE test system from Newport, consisting of a xenon lamp (Oriel 66902, 300 W), a monochromator (Newport 66902), a silicon detector (Oriel 76175_71580) and a dual-channel power meter (Newport 2931_C). EQE measurements were performed in DC mode (300-800 nm) at room temperature. Stable power output measurements were carried out at the maximum power point of the device under standard light illumination of 100 mW cm⁻², and the current response was recorded as a function of time with a Keithley 2400 source meter. Impedance measurements of the PSCs were carried out under light illumination of 100 mW cm⁻² (white light) using a Zahner Zennium 40630 electrochemical workstation. The oscillating voltage was 50 mV, and the applied DC bias voltages varied from 0 to 1.0 V. The frequency of the sinusoidal signal was changed from 2 MHz to 1 Hz. For the solar cell stability tests, all devices were encapsulated with epoxy/glass.

Device simulation

Silvaco TCAD 2014 software was used to model and simulate PSCs with a structure of FTO/TiO2 (30 nm)/perovskite (380 nm)/spiro-OMeTAD (120 nm)/Au electrode. The device structure and two-dimensional mesh were constructed with the DevEdit module. The output structure file was imported by the ATLAS module, followed by setting the parameters for the user-defined materials and declaring the physical models to be calculated. The Luminous module integrated in the ATLAS framework was used to simulate light propagation and absorption. The J-V characteristics of the devices were simulated under solar irradiation (AM 1.5 G, 100 mW cm⁻²) with light illuminated from the FTO/TiO2 side. The grain boundaries were simulated by creating a narrow region with a high trap density between two adjacent perovskite grains. We assumed interfacial p-type doping in the GB regions (thickness: 1 nm) with a doping level of 10¹⁵ cm⁻³ at an energy level of 0.1 eV above the VBM. The thickness of each layer and the dimensions of the grain boundary region are presented in Table S7 and Fig. S23.
The recombination of carriers in the bulk and at the interfaces was accounted for by calculating Shockley-Read-Hall (SRH) and Auger recombination. Finally, TonyPlot was used to visualize the results written to the log file output by ATLAS. All diagrams except the J-V curves were extracted at a bias voltage of zero. The material parameters used are listed in Table S6 and Table S7.
Copy-number dosage regulates telomere maintenance and disease-associated pathways in neuroblastoma

Telomere maintenance in neuroblastoma is linked to poor outcome and is caused either by TERT activation or through alternative lengthening of telomeres (ALT). In contrast to TERT activation, commonly caused by genomic rearrangements or MYCN amplification, ALT is less well understood. Alterations at the ATRX locus are key drivers of ALT but are present in only ∼50% of ALT tumors. To identify potential new pathways to telomere maintenance, we investigate allele-specific gene dosage effects from whole genomes and transcriptomes in 115 primary neuroblastomas. We show that copy-number dosage deregulates telomere maintenance, genomic stability, and neuronal pathways and identify upregulation of variants of histones H3 and H2A as a potential alternative pathway to ALT. We investigate the interplay between TERT activation, overexpression and copy-number dosage and reveal loss of imprinting at the RTL1 gene associated with poor clinical outcome. These results highlight the importance of gene dosage in key oncogenic mechanisms in neuroblastoma.

Introduction

Neuroblastoma is the most common extracranial solid tumor in children, accounting for 6-10% of malignancies 1 and 9% of pediatric cancer deaths 2. The disease shows heterogeneous clinical manifestations, ranging from high-risk cases with poor survival rates despite multimodal treatment to tumors that spontaneously regress without intervention 3. Incidence is highest in the first year of life, and only 5% of diagnoses are made in patients older than ten years 1. Diagnosis at an advanced age is generally associated with worse outcomes 2.

Genetically, neuroblastoma is characterized by a low single-nucleotide variant (SNV) burden and only few recurrently mutated genes 4, but frequent somatic copy-number alterations (SCNAs) 5-7. Amplification of the oncogenic transcription factor MYCN, often through extrachromosomal circular DNAs 8,9, is found in 20% of tumors and is a key clinical indicator of high-risk disease and poor prognosis 3,10. In addition, recurrent segmental gains and losses, including 17q gains and losses of 1p and 11q 6,11,12, are associated with unfavorable outcomes 13. These SCNAs affect cellular phenotypes by modulating gene expression. Amplifications of MYCN and ALK upregulate these oncogenes and their downstream targets 14,15, and larger segmental gains and losses were also found to correlate well with local RNA levels 16,17, which in turn predict patient survival 14,15,17.

Telomere maintenance leading to replicative immortality 18 is a common mechanism in high-risk neuroblastoma 19-21. Canonical telomere maintenance involves activation of the telomerase reverse transcriptase (TERT) gene, either indirectly as a downstream effect of MYCN amplification or directly through genomic rearrangements at the TERT locus 19,21. Alternative lengthening of telomeres (ALT) in tumors that lack TERT activation 22 involves DNA recombination induced by breaks at telomeric sequences 23 and is characterized by single-stranded telomeric (CCCTAA)n sequences 24. Generally, ALT is associated with loss-of-function mutations in the ATRX and DAXX genes 25 and has been found in 50% of all cancer types of the Pan-Cancer Analysis of Whole Genomes (PCAWG) cohort 26. Affected tumors show excess telomere length compared to normal tissue and other tumors, including those with activated TERT 26.
In neuroblastoma, ALT is associated with ATRX alterations 27-29, significantly enriched in relapse cases and associated with poor outcome independent of other risk markers 20,28. While previous studies have highlighted the molecular characteristics of telomere maintenance in neuroblastoma 19,27,28,30,31, ATRX mutations were found in only 25% of high-risk and 50-60% of ALT-positive neuroblastomas 27,28,32, suggesting additional, yet unrecognized mechanisms of ALT activation. Telomere maintenance is therefore a key phenotypic property of neuroblastoma cells and a prime example of phenotypic convergence in cancer evolution 33, where multiple somatic aberrations act individually or in concert to activate telomere maintenance pathways by modulating gene expression. To reveal such mechanisms, we here investigate the effect of genomic instability on total and allele-specific gene expression and telomere maintenance in 115 primary neuroblastomas. We analyze whole genome sequencing (WGS) and RNA-seq from tumors and WGS of matched normals, characterize local genetic effects on gene expression variability, and examine the role of copy-number dosage in telomere maintenance and survival.

Cohort overview

We assembled a cohort of matched tumor WGS and RNA-seq and normal WGS from blood for 115 primary neuroblastoma samples, including 52 samples from the University Hospital of Cologne, previously reported in 19, and 63 new specimens from the GPOH-NB2004 clinical trial. All samples were jointly processed using unified pipelines to limit cohort-specific biases (Methods), stratified according to the GPOH-NB2004 clinical trial protocol 34 into 66 high-risk, 6 medium-risk, and 43 low-risk tumours (S. Fig. 1), and equipped with clinical annotations including age, sex and survival times (S. Table 1). Normal samples from blood were genotyped and phased at common germline variant sites (S. Methods). Total and allele-specific gene expression (ASE) was quantified using phased variants, and variant effects on gene expression in cis were quantified by genome-wide expression quantitative trait locus (eQTL) mapping (S. Methods) 35. To explore the mutational landscape, we determined somatic single-nucleotide variants (SNVs), structural variants (SVs) and allele-specific somatic copy-number alterations (SCNAs) from WGS (Methods).

Telomere maintenance status of 115 primary neuroblastomas

We first set out to determine the primary telomere maintenance mechanism (Fig. 1A) and the genetic alterations across all 115 tumors by examining somatic SNV, SV, SCNA and expression data as well as WGS-based estimates of telomere length (Methods). We found MYCN amplifications in 23 tumors (20%), rearrangements affecting the TERT locus in 19 tumors (17%), and ATRX mutations in 12 tumors (10%), comprising 7 focal deletions, 4 missense or nonsense mutations and one tumor affected by a structural rearrangement (NBL54) (Fig. 1B, S. Fig. 2). In addition, ALK mutations were found in 8 tumors (7%), of which 6 carried a missense mutation and 2 were affected by genomic amplifications. We queried TERT gene expression in all tumors and found that both MYCN-amplified and TERT-rearranged samples have significantly higher TERT expression than those lacking both molecular features (Fig. 1C), in line with previous observations 19,36. We additionally found 4 tumors without MYCN amplification or TERT rearrangements that show TERT overexpression (S. Fig. 5, Methods).
To determine the ALT status of tumors, we estimated telomere lengths relative to the matched normal tissue from the abundance of telomeric repeat sequences in WGS (S. Fig. 3A, Methods) 37. We found 21 tumors with increased telomere lengths, of which we assigned 20 to the ALT group, as one (NBL54) also harbored a TERT rearrangement and upregulation of TERT (Fig. 1B, S. Fig. 3A, B). We validated our ALT classification by comparison against the experimentally determined status of ALT-associated PML nuclear bodies (APBs) 20 in 52 donors (S. Fig. 3B) and found a strong correspondence (P = 5.47 × 10⁻⁹, one-sided Fisher's exact test; sensitivity: 0.86; specificity: 0.97). While ATRX-altered samples had significantly longer telomeres (P = 1.72 × 10⁻⁶, one-sided Wilcoxon rank sum test) (S. Fig. 4), in 11 out of 20 ALT samples (55%) no ATRX mutations were detected, pointing towards alternative activation of the ALT pathway independent of ATRX mutations. Except for three tumors, MYCN amplifications, TERT rearrangements and long telomeres were mutually exclusive (Fig. 1B), in support of convergence towards a common high-risk phenotype characterized by telomere maintenance 19-21. MYCN amplifications were also mutually exclusive with ATRX alterations, corroborating findings on the incompatibility of these two molecular traits 38. Comparison of TERT expression with telomere length estimates confirmed the existence of two distinct groups of high-risk tumors: those with high TERT expression but short telomeres, and those with low TERT expression but increased telomere length, indicative of ALT (Fig. 1D). In contrast, 40 of 43 low-risk tumors (93%) showed neither increased telomere length (log ratio > 0.5) nor elevated TERT expression (z-score > -0.10, S. Fig. 5). Interestingly, active telomere maintenance was predicted in three low-risk tumors (NBL09, NBL23, CB2035), all of which showed disease progression. Notably, we did not find any MYCN amplifications in ALT samples and only a single sample with both a TERT rearrangement and long telomeres (NBL54).

Quantifying genomic instability

We next investigated overall genomic instability in our cohort. We determined allele-specific SCNAs and overall ploidy from WGS (Methods) and classified the resulting copy-number segments into the states loss, shallow loss, neutral, weak gain, medium gain, strong gain and focal amplification (Fig. 2A, B), and into the allelic imbalance states balance, weak imbalance, strong imbalance, amplification and LOH (S. Fig. 7, S. Methods). We additionally determined the presence of whole-genome doubling (WGD) events by phylogenetic analysis from copy-number profiles, as recently described 39. As expected, the tumors showed pervasive patterns of genomic instability: on average, 50% of genomic regions harbored SCNAs, 31% of genomic regions showed gains and losses relative to ploidy, and 44 tumors (38%) showed WGD (Fig. 1B, S. Fig. 8). We identified gains in 17%, losses in 15% and amplifications in <0.1% of genomic regions, with distinct hotspots visible across the cohort (Fig. 2A). We found a significant enrichment of WGD in tumors without telomere maintenance (26 of 51, expected 20, P = 0.03, Fisher's exact test), as opposed to ALT, where fewer WGD events than expected were observed (2 of 20, expected 8, P = 0.01); tumors with canonical telomere maintenance, in contrast, did not show enrichment in either direction (16 of 43, expected 16, P = 1.0).
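An enrichment test of this kind reduces to a 2×2 contingency table. A minimal sketch with scipy is shown below, using the counts quoted above for the group without telomere maintenance; the choice of a one-sided ("greater") alternative here is our illustrative reading of the enrichment direction.

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = no-telomere-maintenance group vs. rest of cohort,
#            cols = WGD vs. no WGD (counts as reported in the text).
wgd_no_tmm, n_no_tmm = 26, 51
wgd_total, n_total = 44, 115

table = [
    [wgd_no_tmm, n_no_tmm - wgd_no_tmm],
    [wgd_total - wgd_no_tmm, (n_total - n_no_tmm) - (wgd_total - wgd_no_tmm)],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"OR = {odds_ratio:.2f}, P = {p_value:.3f}")
```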
Next, we determined ASE in all 115 tumors. Briefly, read counts from RNA-seq were tallied at heterozygous germline variants (Fig. 2B) and aggregated to haplotype counts per gene using statistical phasing (S.Methods). In line with prior observations that MYCN-amplified tumors are overall genomically more stable than their non-MYCN-amplified counterparts 40 , we found both the number of copy-number-imbalanced genes (P = 3.7 × 10⁻⁵) and the number of genes with ASE (P = 0.0023) to be significantly lower in MYCN-amplified tumors (one-sided Wilcoxon rank-sum test) (Fig. 2C). Interestingly, we also found 4 out of 23 (17%) MYCN-amplified tumors to harbor a substantially higher number of copy-number imbalances than the median of non-MYCN-amplified samples (37% of genes). All 4 tumors showed signs of WGD and overall high chromosomal instability (>80%) (Fig. 2D, S. Table 1).

To investigate the effect of SCNAs on patient survival systematically, we associated allelic copy-number imbalances at the level of chromosome arms and in non-overlapping 5 Mb bins with mortality (S.Methods) and found expected associations at 1p and the MYCN locus, as well as a yet undescribed association of 17p imbalance (S. Fig. 11A-C). Five tumors of deceased patients harbored extreme copy-number imbalances (> 0.9) due to loss of 17p (S. Fig. 12A), pointing towards elevated risk conferred through chromosomal loss. However, 10 out of 26 donors (38%) with tumors harboring imbalanced gains also died from the disease. We compared survival probabilities using the Kaplan-Meier method and found that survival was significantly reduced for tumors with 17p imbalance (P = 5.2 × 10⁻⁴) (S. Fig. 12B). Similarly, Cox proportional hazards regression showed that 17p imbalance is significantly associated with mortality (P = 1.44 × 10⁻⁵), independent of MYCN amplification (P = 4.32 × 10⁻⁶) (S. Fig. 13). Notably, 17p LOH is frequent in neuroblastoma cell lines 43 , but its occurrence in primary neuroblastoma is less well described. Interestingly, we did not find TP53 missense mutations or SVs, suggesting that 17p loss might act through down-regulation of neuronal genes (S. Fig. 12C-D, S. Table 2).

Even though SCNAs exhibit a strong allelic dosage effect on gene expression, transcription levels of genes are subject to transcriptional adaptations and buffering 46,47 . To investigate dosage sensitivity in our cohort systematically, we examined copy-number components in our linear models and found statistically significant copy-number effects that explain between 2.4% and 71.0% of the observed variance in gene expression (S. Fig. 14). We ranked all protein-coding genes by expression variance explained and tested for pathway enrichment using gene set enrichment analysis (GSEA, S.Methods). We found 69 Reactome pathways enriched (FDR < 0.05) for copy-number dosage effects (S. Table 3), of which 25 remained after accounting for overlapping gene sets (S. Fig. 15). Notably, dosage-sensitive genes were enriched in pathways involved in cell cycle and DNA repair, and in regulation of the tumor suppressor genes TP53, PTEN and RUNX3. In contrast, conducting the same GSEA analysis on genes ranked by total copy-number alone did not yield any significant pathway enrichment. Our findings show that SCNAs adjust the regulatory landscape of neuroblastoma towards dysregulation of key cancer pathways and that copy-number gains effectively upregulate TERT in tumors with canonical telomere maintenance (CTM) (Fig. 3B), with the highest telomerase expression found in tumors with both TERT activation and copy-number gains.
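The 17p survival analyses above (Kaplan-Meier comparison and Cox proportional hazards regression) can be outlined with the lifelines package; this is a sketch with hypothetical file and column names (time, dead, imb17p, mycn_amp), not the authors' code:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical clinical table: time, dead (0/1), imb17p (0/1), mycn_amp (0/1)
df = pd.read_csv("clinical_with_17p_imbalance.csv")

# Kaplan-Meier curves for 17p-imbalanced vs balanced tumors
km = KaplanMeierFitter()
for grp, sub in df.groupby("imb17p"):
    km.fit(sub["time"], sub["dead"], label=f"17p imbalance = {grp}")
    km.plot_survival_function()
a, b = df[df.imb17p == 1], df[df.imb17p == 0]
print("log-rank P =", logrank_test(a["time"], b["time"], a["dead"], b["dead"]).p_value)

# Cox model: 17p imbalance adjusted for MYCN amplification
cph = CoxPHFitter()
cph.fit(df[["time", "dead", "imb17p", "mycn_amp"]], duration_col="time", event_col="dead")
cph.print_summary()
```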
11q loss and 17q polysomy link alternative lengthening of telomeres to upregulation of histone variants

To investigate whether SCNAs are linked to increased telomere length in ALT tumors, we tested each chromosome arm for association between tumor DNA content and the ALT phenotype using logistic regression, controlling for ATRX mutations (Methods). We found 11q losses (P = 4.83 × 10⁻⁷, ANOVA chi-squared test) and 17q gains (P = 2.88 × 10⁻⁵, ANOVA chi-squared test) to be significantly associated with ALT (Fig. 3C), confirming previous observations of frequent 11q loss in ALT 28 and revealing a yet undescribed association of 17q gain with ALT. We noticed that 11q loss co-occurs with strong 17q gains in 14 tumors and observed an overall negative correlation between the DNA content of both chromosome arms across the cohort (r = -0.45, P = 2.01 × 10⁻⁷, Pearson's correlation) (Fig. 3D), suggesting a genomic rearrangement involving both chromosomes. Indeed, somatic SV analysis revealed 17q-to-11q translocations in 19 tumors (Fig. 3E), confirming that additional copies of chromosome arm 17q translocate to 11q in the aberrant tumor karyotype 42 . Notably, 17q gains were identified in 105 of 115 tumors (91%) independent of ALT. However, ALT tumors were significantly enriched in the strongest 17q copy-number gains (S. Fig. 16).

To pinpoint candidate genes contributing to ALT, we tested for differential gene expression between ALT and non-ALT tumors, while controlling for MYCN amplification status, the presence of ATRX mutations and the sex of the patient (Methods). We found 293 such genes (FDR < 0.05), of which 143 and 150 were up- and down-regulated, respectively (S. Fig. 17, S. Table 4). We hypothesized that a subset of these genes might be driven by the ALT-associated SCNAs on 11q and 17q. Correlation between gene expression and DNA dosage of these chromosome arms revealed the up-regulated histone variant genes H3F3B (17q), H2AFJ (12p) and H3F3C (12p) among the genes strongly affected by 17q and 11q dosage (Fig. 3F). H3F3B (and its paralog H3F3A) encode the histone variant H3.3 48 , which is altered by activating mutations in several pediatric tumor entities, including tumors of the central nervous system 49,50 and up to 95% of chondroblastomas 51 . Interestingly, activating H3.3 mutations triggered ALT in pediatric high-grade glioma regardless of ATRX mutation status 52 , indicating that, similarly, H3.3 upregulation may have functional implications in ALT neuroblastomas. H3F3C, which encodes the histone variant H3.5, is frequently mutated across different pediatric brain tumors, where alterations were found to be mutually exclusive with those in TP53 and associated with reduced genome stability 53 . The H2AFJ gene encodes the histone variant H2A.J and is deregulated in melanoma 54 , breast cancer 55 and colorectal cancer, where its upregulation is associated with poor survival 56 . Taken together, these results suggest that copy-number alterations may deregulate histone variants, contributing to epigenetic dysregulation and compromised genome integrity in ALT neuroblastomas. The genetic effects model (Methods) predicted that 41% and 60% of the expression and ASE variance of H3F3B, respectively, is explained by local copy-number effects, indicating that expression of H3F3B is directly associated with 17q dosage (S. Fig. 18). However, only 3% of H2AFJ and 2% of H3F3C expression variance is explained by local copy-number effects on 12p, indicating that here ALT-associated upregulation may result from regulatory effects in trans.
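The arm-level association test above (logistic regression with an ANOVA-style chi-squared test, controlling for ATRX status) can be sketched with statsmodels via a likelihood-ratio comparison of nested models; the column names below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def arm_alt_association(df, arm):
    """Likelihood-ratio (ANOVA chi-squared) test of one chromosome arm's
    DNA content vs ALT status, controlling for ATRX mutation status."""
    full = smf.logit(f"alt ~ {arm} + atrx_mut", data=df).fit(disp=0)
    reduced = smf.logit("alt ~ atrx_mut", data=df).fit(disp=0)
    return chi2.sf(2 * (full.llf - reduced.llf), df=1)

# Hypothetical columns: alt (0/1), atrx_mut (0/1), one dosage column per arm
df = pd.read_csv("arm_dna_content.csv")
for arm in ["dna_11q", "dna_17q"]:
    print(arm, "P =", arm_alt_association(df, arm))
```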
To obtain a quantitative understanding of how expression of the identified histone variant genes relates to ALT, we predicted the presence of ALT from the expression of H3F3B, H3F3C and H2AFJ using logistic regression. We found expression of H3F3B and H2AFJ, but not H3F3C, to be significantly associated with ALT in the presence of the two other genes (H3F3B: P = 0.001; H2AFJ: P = 0.008; H3F3C: P = 0.543; ANOVA), suggesting that expression of H3F3B and H2AFJ is independently associated with ALT. For an independent validation, we compared the expression levels of H3F3B and H2AFJ between 130 telomeric c-circle-positive and -negative neuroblastomas from Hartlieb et al. 28 , and found significantly higher expression of H3F3B (P = 3.01 × 10⁻⁴, ANOVA) and H2AFJ (P = 0.02, ANOVA) in c-circle-positive tumors, confirming their upregulation in ALT (S. Fig. 19).

Despite ATRX alterations being significantly associated with longer telomeres, we did not find ATRX to be differentially expressed between ALT and non-ALT tumors (S. Table 4). We speculated that interaction partners of ATRX could be subject to deregulation in ALT tumors. To identify potential interactions of ATRX and the identified histone variants with the products of genes differentially expressed in ALT, we obtained direct (first-order) predicted protein interactions between ATRX, H3F3B, H2AFJ, H3F3C and other differentially expressed gene products affected by 11q or 17q dosage (S.Methods). The resulting network predicted high-confidence direct interactions between ATRX and the differentially expressed histone H3.3 variant gene H3F3B, as well as RAD51C and SRSF1 (Fig. 3G). A network module containing H3F3B, H2AFJ and H3F3C also included the deregulated histone methylation factors EED and KMT2A. EED is part of the polycomb repressive complex 2 (PRC2), which modulates transcriptional repression by methylation of H3 histones 57,58 , and we found EED to be down-regulated in ALT tumors by 11q-dosage effects (Fig. 3H, S. Fig. 20, S. Table 4). The PRC2 complex is frequently inactivated by EED loss in malignant peripheral nerve sheath tumors 59 and adenosquamous lung tumors 60 . Upregulation of H3.3 and H3.5 histones and concomitant downregulation of EED in ALT point towards a relative depletion of H3K27me3 as a consequence of higher H3 variant histone availability and impaired PRC2 activity (Fig. 3H), similar to PRC2 inhibition by activating H3.3K27M mutations in pediatric gliomas [61][62][63] or expression of the PRC2 inhibitor EZHIP in ependymomas 64 . Investigating this in our cohort, we found neither H3.3K27M mutations nor ALT-associated upregulation of EZHIP (S. Fig. 21) in any of the tumors. Our findings implicate 11q loss and strong 17q gain in ALT neuroblastomas and show that these alterations deregulate ATRX interaction partners. They highlight histone variants as key components of ALT-deregulated ATRX protein interactions and indicate that activity of the PRC2 complex could be reduced due to attenuated EED expression resulting from 11q loss, providing additional evidence for histone-dependent chromatin deregulation by copy-number dosage in ALT neuroblastomas.

Imprinted RTL1 is upregulated by bi-allelic activation in unfavorable tumors

Finally, we characterized genes by ASE frequency and average ASE ratio across tumors. Since ASE can be caused by either up- or downregulation of gene expression on one parental haplotype, we systematically explored effect directionality by testing for association between ASE and total expression.
10,862 genes that were informative for ASE in at least 20 samples were considered, out of which 455 showed a significant (FDR < 0.05) effect of ASE on total gene expression (S.Methods, S. Table 5). To narrow the search, we intersected these 455 genes with those differentially expressed between deceased and other patients, resulting in a final set of 107 candidate genes (S. Table 5). Among these, the genes MYCN, NBAS and DDX1, contained on the MYCN amplicon, showed a positive ASE-expression effect due to strong upregulation by mono-allelic amplifications. In contrast, genes on chromosome arms 1p (56%) and 17p (12%) were most frequent among the 76 genes with a negative ASE-expression effect, indicating that loss of 1p and 17p underlies downregulation of these genes in tumors of deceased patients. Interestingly, a substantial negative ASE-expression association was found for the Retrotransposon Gag Like 1 (RTL1) gene, which was upregulated in tumors of deceased patients (Fig. 4C,D). RTL1 is a maternally imprinted gene involved in placental/neonatal development 67 and widely expressed in the nervous system 68 . Upregulation of RTL1 confers a selective growth advantage in hepatocarcinoma 69 and promotes cell proliferation by regulating Wnt/β-Catenin signaling in melanoma 70 . RTL1 was one of 16 genes informative for survival time in a previous study of high-risk neuroblastomas, with stronger RTL1 expression associated with shorter survival 71 . Our linear model revealed only a minor contribution of SCNAs and germline variants to ASE in RTL1 (S. Fig. 22), suggesting that differences in allelic expression levels may result from methylation differences. Analyzing a subset of tumors using bisulfite sequencing (BS-seq) (S.Methods), we found that decreased methylation levels at CpGs upstream of RTL1 are associated with higher RTL1 expression (Fig. 4E,F, S. Fig. 23). Taken together, these findings suggest that upregulation of RTL1 in neuroblastoma is induced by bi-allelic activation in unfavorable tumors, likely due to loss of imprinting on the maternal allele (Fig. 4G).
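The directionality scan above amounts to a per-gene regression of total expression on ASE ratio with FDR control; a minimal sketch in which the data structures and names are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.stats import linregress
from statsmodels.stats.multitest import multipletests

def ase_expression_scan(ase_ratio, expression, min_informative=20):
    """Per-gene association of ASE ratio with total expression.

    ase_ratio, expression: dicts gene -> array over samples, with NaN where
    a sample is not informative for ASE. Returns slope and BH-adjusted P.
    """
    genes, slopes, pvals = [], [], []
    for gene, ratios in ase_ratio.items():
        ok = ~np.isnan(ratios)
        if ok.sum() < min_informative:
            continue
        fit = linregress(ratios[ok], expression[gene][ok])
        genes.append(gene); slopes.append(fit.slope); pvals.append(fit.pvalue)
    fdr = multipletests(pvals, method="fdr_bh")[1]
    # slope < 0: mono-allelic loss lowers expression (e.g. 1p/17p genes);
    # slope > 0: mono-allelic gain raises it (e.g. genes on the MYCN amplicon)
    return {g: (s, q) for g, s, q in zip(genes, slopes, fdr)}
```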
Discussion

We here systematically characterized the effects of copy-number dosage on neuroblastoma gene expression and demonstrated how copy-number gains interact with upregulated TERT to increase the efficacy of canonical telomere maintenance. We found 11q loss and strong 17q gain as markers of ALT in addition to ATRX alterations, and revealed upregulation of the histone variant genes H3F3B, H3F3C and H2AFJ. Histone variants replace replication-dependent canonical histones in nucleosomes during the cell cycle, affecting chromatin organization at telomeric 72 and actively transcribed regions by replication-independent chromatin incorporation 73-75 and interaction with chaperones and chromatin factors 76 . H3F3B resides on 17q, and our findings strongly suggest that H3F3B is directly upregulated by 17q gains, which have already been reported to exert oncogenic effects through increased gene dosage 43 . In contrast, H3F3C and H2AFJ expression are associated with 11q loss and 17q gain, but neither gene resides on these chromosome arms, suggesting that regulatory effects in trans underlie this association. Copy-number alterations may in this way mediate histone replacement and chromatin re-organisation in ALT, leading to decondensation and increased transcription 73,74 .

Dosage-dependent down-regulation of the repressive PRC2/EED-EZH2 complex, which methylates the lysine 27 residue of H3 histones, may contribute to this reprogramming, and we found EED, which is predicted to interact with all three histone variants, to be differentially down-regulated in ALT tumors by 11q loss. Similarly, PRC2 activity in pediatric high-grade glioma is impaired by H3.3K27M mutations altering EZH2 binding 63 and resulting in depletion of H3K27 di- and tri-methylation 62 . ATRX stabilizes telomeres through deposition of H3.3 histones, thereby preventing replication-induced breaks conducive to ALT 72,77 . In contrast, ATRX is not required to deposit H3.3 histones in actively transcribed regions 72 . Consequently, H3.3 upregulation through H3F3B dosage in ALT tumors with defective ATRX may increase the prevalence of H3.3 in nucleosomes of active chromatin without its stabilizing effect at telomeres. Importantly, we found 11q loss and 17q gain to be associated with ALT independent of ATRX mutations. Because not all ALT tumors harbor ATRX alterations, deregulated histone variants may contribute to the ALT phenotype more directly. In high-grade gliomas, ALT frequently occurs in H3.3G34R-mutant tumors independent of ATRX alterations 52 , indicating a functional link between impaired H3.3 function and ALT. Additionally, loss of ATRX alone may not be sufficient to induce ALT 77 , and ATRX mutations are likely not the only molecular feature responsible for this phenotype. However, in ATRX-wildtype ALT-positive neuroblastomas, ATRX protein levels were found to be significantly decreased 28 , suggesting that impaired ATRX activity could still underlie ALT in these cases. Furthermore, not all ALT-positive tumors showed 11q loss and strong 17q gain, and these alterations were also present in a few ALT-negative tumors. Additional research with larger cohorts will be needed to characterize this relationship further.

We also found that 17p imbalance is associated with poor outcome in neuroblastoma. In tumors with a 17p LOH event, loss of function of TP53 due to a second hit could be responsible for this, but no second hit was found in our cohort and we did not observe a copy-number dosage effect on TP53 expression. Alternatively, dosage-dependent down-regulation of genes other than TP53 on 17p could underlie this association. Survival-associated 17p copy-number dosage effects were enriched for neuronal genes, which suggests that impairment of neuronal processes could result in a more aggressive phenotype. The exact mechanism underlying the higher mortality of donors with 17p imbalance remains to be investigated, for example with respect to the neuronal differentiation state of tumors with 17p loss or the mutational status of TP53 in relapsed tumors that initially showed a heterozygous deletion at diagnosis. Lastly, we identified RTL1 as a candidate marker for unfavorable tumors due to loss of imprinting of the maternal allele, similar to earlier reports on loss of imprinting of the IGF2 gene in Wilms' tumors 78 .

ASCAT jointly segments the coverage log ratios and BAFs to obtain start and end points for segments. We found noisy coverage log ratios to introduce over-segmentation in some samples and therefore replaced the segmentation procedure with a custom implementation that only considers BAFs to determine start and end points of segments, but still estimates the segment's coverage using the log coverage ratios. ASCAT's output comprises copy-number segments with integer copy-numbers of major and minor alleles as well as estimates for tumor purity and ploidy.
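The custom BAF-only segmentation described above could be prototyped with an off-the-shelf changepoint detector; the sketch below uses the ruptures package as a stand-in and is only an illustration of the two-signal idea (breakpoints from BAF, levels from log ratios), not the authors' implementation:

```python
import numpy as np
import ruptures as rpt

def segment_by_baf(baf, log_ratio, penalty=10.0):
    """Breakpoints from B-allele frequencies alone; segment levels taken
    from the coverage log ratios."""
    folded = np.abs(np.asarray(baf, dtype=float) - 0.5)  # mirror BAF around 0.5
    breakpoints = rpt.Pelt(model="rbf").fit(folded.reshape(-1, 1)).predict(pen=penalty)
    segments, start = [], 0
    for end in breakpoints:
        segments.append((start, end, float(np.nanmedian(log_ratio[start:end]))))
        start = end
    return segments  # (start index, end index, median log coverage ratio)
```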
All copy-number segments were inspected manually for quality. For samples with estimated tumor purity less than 60%, copy-number calling was rerun with adjusted purity and ploidy values that were manually selected after inspection of the goodness-of-fit plots and in agreement with pathology estimates of tumor purity.

Gene expression analysis

Aligned tumor RNA-seq reads were counted using HTSeq/htseq-count 0.9.1 on exons of protein-coding genes according to Ensembl release 75 human gene annotations for the GRCh37 reference, summarizing counts at the gene level. We normalized gene expression for the purpose of between-sample comparisons within a given gene. To mitigate the effects of sequencing depth and the batch effect introduced by different RNA library preparation and sequencing methods between the two cohorts, we normalized the htseq counts using the following strategy: we first calculated library-size-normalized DESeq2 variance-stabilized counts from the htseq counts. Then, we modeled the variance-stabilized counts by cohort membership using a linear model for each gene and determined the residual for each gene and sample. If not indicated otherwise, this residual was used as the measure of gene expression in our analyses. In addition, we measured allele-specific expression (ASE) for genes with at least one expressed heterozygous SNP and sufficient coverage in a given sample (S.Methods).

We analyzed gene expression differences between ALT and non-ALT tumors by linear regression, similar to the analysis that identified copy-number differences between these groups described above. We expect that this approach facilitates detection of expression differences mediated by the identified ALT-associated SCNAs. Expression values were modeled by a linear combination of ALT status, MYCN amplification, ATRX alteration, age, sex, cohort, tumor purity and tumor ploidy. The p-value was derived from an ANOVA chi-squared test for significance of the ALT status covariate and adjusted for multiple testing using the Benjamini-Hochberg method. Genes with FDR < 0.05 were considered significantly differentially expressed in ALT tumors.

Analysis of genetic effects on gene expression and ASE

We modeled both total expression and ASE by local genetic effects based on detected germline and somatic variation at the respective gene locus and additional covariates, using linear models.

Data Availability

The data analyzed in this study is available from the European Genome-phenome Archive.
Full-Length Isolation and Phylogenetic Analysis of a C-type Lectin Gene from the Bacterial-challenged Cotton Leafworm, Spodoptera littoralis

Experiments were designed to investigate the molecular immune response of Spodoptera littoralis larvae against bacterial infection. In addition, sequence and phylogenetic analyses of the involved gene were studied. Using the differential display technique, a partial insect lectin gene (SpliLec) was isolated from bacterial-challenged S. littoralis haemolymph. Five differentially displayed bands were sequenced. Sequence results revealed that a fragment of 640 bp was amplified within the open reading frame (orf) of a lectin gene. This fragment contained the complete 3′ end with a poly(A) tail, but lacked the start codon (AUG) at its 5′ end. Using a RACE PCR reaction, the 5′ end was extended and a final reaction was performed to obtain the full length of the SpliLec. Sequence analyses revealed that SpliLec consists of a single orf encoding a deduced polypeptide comprising an 18-residue signal peptide and a 291-residue mature peptide. The SpliLec sequence contained two CRDs: a short-form CRD1 and a long-form CRD2, stabilized by two and three highly conserved disulfide bonds, respectively. SpliLec shares homology with some dipteran lectins, suggesting a possible common ancestor. These results suggest an important role of the SpliLec gene in cell adhesion and non-self recognition. It may cooperate with other AMPs in the clearance of invaders of Spodoptera littoralis.

INTRODUCTION

After pathogens penetrate an insect's structural barriers, the insect relies solely on an efficient innate immune system, which shares many characteristics with the innate immune system of vertebrates. The insect innate immune system comprises both humoral and cellular responses (Pinheiro and Ellar, 2006; Lemaitre and Hoffmann, 2007). Insect humoral defenses include the production of a potent arsenal of antimicrobial peptides (AMPs) (Pinheiro and Ellar, 2006; Lemaitre and Hoffmann, 2007), coagulation, and melanization led by protease cascades (Kanost et al., 2004). Insect cellular defense refers to haemocyte-mediated immune responses, such as phagocytosis, nodulation, and encapsulation (Lavine and Strand, 2002). The encapsulation process involves cell adhesion and melanization (Eslin and Prevost, 2000).
Lectins are an important class of carbohydrate-binding proteins that have several distinct biological activities. They mediate cell adhesion (i.e. binding to microbial surface components), non-self recognition and immuno-protection processes in immune responses (Vasta et al., 1999). They exist in a wide variety of plants, animals, fungi, bacteria and viruses (Sharon, 1977) and play a significant role in the clearance of invaders, either as cell surface receptors for microbial carbohydrates or as soluble proteins existing in tissue fluids (Yu and Kanost, 2003). Such proteins are known as pattern recognition receptors (PRPs), because they bind to the pathogen-associated molecular patterns (PAMPs) present in the array of carbohydrate components on the surface of microorganisms and, consequently, trigger a series of protective immune responses (Medzhitov and Janeway, 2002). Various proteins that display carbohydrate-binding activity in a calcium-dependent manner are classified into the C-type lectin family (Drickamer and Taylor, 1993). They contain C-type carbohydrate-recognition domains (CRDs) or C-type lectin domains (CTLDs) composed of 110-130 amino acid residues in common. These CRDs or CTLDs contain a characteristic double-loop (loop in a loop) stabilized by two or three highly conserved disulfide bonds. The vertebrate C-type lectins are usually multi-domain lectins and they fall into seven groups (I-VII) (Day, 1994). Seven new groups (VIII-XIV) were added in the revised classification in 2002 (Drickamer and Fadden, 2002) and three new groups (XV-XVII) were updated recently (Zelensky and Gready, 2004). In contrast, the invertebrate C-type lectins are mostly single-domain proteins, but C-type lectins that contain two CRDs have been characterized too. Although all C-type lectin CRDs have sequence similarity, they can be divided into two types: a "short form" approximately 115 residues long and a "long form" approximately 130 residues long, which includes two additional disulfide-bonded cysteine residues at the amino terminus (Drickamer and Taylor, 1993; Day, 1994). In recent years, more and more C-type lectins with two tandem CRDs have been identified and characterized from invertebrates, especially from insects (Yu and Kanost, 2000; Yu et al., 2005; Tian et al., 2009). Examples of the C-type lectins with two tandem CRDs include the M. sexta immunolectins (IML-1, IML-2, IML-3 and IML-4), which serve as humoral PRPs (Kanost et al., 2004), and LPS-binding lectins from the silkworm, Bombyx mori (Koizumi et al., 1999), and the fall webworm, Hyphantria cunea (Shin et al., 2000).

In this paper, the full-length cDNA of a C-type lectin with two tandem CRDs from S. littoralis was isolated using differential display and RACE PCR techniques. Sequence characterization and phylogenetic analyses are reported, too.

Insects and bacterial strains

The laboratory colony of the cotton leafworm, S. littoralis, used for our experiments was originally collected from a private okra field at Giza, Egypt in 1995 and maintained in the insectary of the Department of Entomology, Faculty of Science, Cairo University according to the technique described by Levinson and Navon (1969) and kept at 25 °C, 65-70% RH and a 14L:10D photoperiod cycle.
Two Gram-positive bacteria, Staphylococcus aureus and Streptococcus sanguinis, and three Gram-negative bacteria, Escherichia coli (D31), Proteus vulgaris and Klebsiella pneumoniae, were obtained from the Unit for Genetic Engineering and Agricultural Biotechnology, Faculty of Agriculture, Ain Shams University and used for insect immunization. Bacteria were grown in a peptone medium (1%), supplemented with 1% meat extract and 0.5% NaCl, at 37 °C in a rotary shaker.

Insect immunization and haemolymph collection

Bacterial challenge was performed as described by Seufi et al. (2011). Haemolymph was collected at 24, 48 and 72 h post-infection (p.i.) at 4 °C (500 µl each), containing a few crystals of phenylthiourea to prevent melanization. Aliquots of 100 μl each were stored at -80 °C until investigated. The control group was injected with bacteria-free saline solution.

RNA extraction and reverse transcription

Total RNA of the insect haemolymph (300-500 µl) was extracted using the RNeasy kit according to the manufacturer's instructions (Qiagen, Germany). Residual genomic DNA was removed from the RNA using RNase-free DNase (Ambion, Germany). RNA integrity and purity were checked by examining the 260/280 and 260/230 ratios for protein and solvent contamination. The reverse transcription reaction was carried out according to the ABgene protocol (ABgene, Germany). The cDNA was aliquoted and stored at -80 °C until processed.

Differential display using primers corresponding to lectin sequence (DD-PCR)

A total reaction volume of 25 μl containing 2.5 μl PCR buffer, 1.5 mM MgCl2, 200 μM dNTPs, 1 U Taq DNA polymerase (AmpliTaq, Perkin-Elmer), 2.5 μl of 10 pmol primer (Table 1) and 2.5 μl of each cDNA was cycled in a DNA thermal cycler (Eppendorf Mastercycler 384, Germany). The amplification program was one cycle at 94 °C for 5 min (hot start), followed by 40 cycles at 94 °C for 1 min, 40 °C for 1 min and 72 °C for 1 min. The reaction was then incubated at 72 °C for 10 min for final extension. PCR products were visualized on 1.5% agarose gels and photographed using a gel documentation system. For DNA contamination assessment, a no-reverse-transcription control reaction was performed.

Based on the sequence and alignment data, specific primers (LecSF1,2 and LecSR1,2) for lectin-related sequences were designed (Table 1) and tried for reverse transcription polymerase chain reaction (RT-PCR). The RT-PCR reaction was performed as previously described in this section, with respect to the optimum annealing temperature (Ta) for each specific primer set. Positive PCR products were visualized and eluted from the gel using the GenClean Kit (Invitrogen Corporation, San Diego, CA, USA) following the manufacturer's instructions.

The purified PCR product (SpliLec) was cloned into the PCR-TOPO vector with the TOPO TA cloning kit (Invitrogen, USA) following the manufacturer's instructions. The ligation mix was used to transform competent E. coli strain TOPO 10 provided with the cloning kit. White colonies were screened using PCR as described earlier in this section. Two positive clones of the SpliLec fragment were selected and sequenced (to exclude PCR errors) using their specific forward and reverse primers (Table 1). Sequencing and sequence analyses were performed as described earlier in this section.
Full-length cDNA isolation of the immunolectin gene

Specific primers (sense and antisense) were designed based on the sequence of SpliLec containing the 3′ end. The 5′ end fragment was amplified using the SMART RACE cDNA Amplification kit (Clontech) following the procedure outlined in the supplied user manual. The amplified 5′ end fragment was purified, cloned into the PCR-TOPO vector, and sequenced as described earlier in this section. The sequences of the 3′ and 5′ end fragments were aligned and the predicted full-length cDNA was obtained. Thus a pair of primers, LecFLF and LecFLR (Table 1), was designed for the amplification of full-length SpliLec cDNA. PCR was carried out in a total volume of 25 μl reaction solution containing 2.5 μl PCR buffer, 1.5 mM MgCl2, 200 μM dNTPs, 1 U Taq DNA polymerase (AmpliTaq, Perkin-Elmer), 2.5 μl of 10 pmol of each primer and 2 μl cDNA using the following protocol: 94 °C for 5 min (hot start), followed by 35 cycles of amplification (94 °C for 1 min, 60 °C for 1 min, 72 °C for 1.5 min) and a final extension step at 72 °C for 10 min. Full-length SpliLec was visualized and eluted from the gel using the GenClean Kit (Invitrogen Corporation, San Diego, CA, USA) following the manufacturer's instructions.

Moreover, phylogenetic analyses of the nucleotide sequence and its deduced amino acids were done using Mega4. Poorly aligned positions and divergent sequences were eliminated manually. Multiple alignment of available published lectin-related nucleotide sequences was done before the phylogenetic analyses to approximate sequence lengths manually. 100% homologous sequences of the same species with different accession numbers were represented by only one sequence. The cloned DNA fragment was deposited in GenBank under the HQ603826 accession number.

Differential display using primers corresponding to well-known lectins

The differential display technique was used to characterize the genetic variation (at the RNA level) between bacterial-challenged and control cotton leafworm, S. littoralis. Fig. (1) shows the results of differentially displayed cDNAs of bacterial-challenged and control insects using 8 primers corresponding to previously characterized lectins (Table 1). Haemolymph samples were differentially displayed at 24, 48 and/or 72 h p.i. with S. aureus, S. sanguinis, E. coli, P. vulgaris and K. pneumoniae bacterial strains. It was observed that S. aureus-challenged insects died 24 h p.i., E. coli-challenged insects died 48 h p.i. and S. sanguinis-challenged insects died 72 h p.i. All insects died before sampling in the case of P. vulgaris and K. pneumoniae. Differential display results revealed that the average number of bands per sample was 4.3 bands for each amplification reaction. The total number of bands (transcripts) resolved on 1.5% agarose gels for both control and challenged insects was 124 (molecular size ranged from >1300 to ~80 bp). Forty-seven polymorphic bands (37.9%) were differentially displayed with 6 of the used primers. Five reproducible, infection-induced bands were cloned and sequenced using the M13 universal primer. Analyses of the results revealed that a fragment of 640 bp was amplified within the open reading frame (orf) of a lectin gene. This fragment contained the complete 3′ end with a poly(A) tail, but it was not complete at the 5′ end (lacking the start codon, AUG, at its 5′ end).
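Once 5′ and 3′ fragments are assembled into a candidate full-length cDNA, locating an orf of the kind reported below is a routine scan; a minimal sketch with Biopython, where the function name and the expected 309-residue product are illustrative of this workflow, not part of the original analysis pipeline:

```python
from Bio.Seq import Seq

def longest_orf(cdna):
    """Longest ATG-initiated open reading frame, returned as a peptide."""
    seq = Seq(cdna.upper())
    best = ""
    for frame in range(3):
        trimmed = seq[frame:len(seq) - (len(seq) - frame) % 3]
        for fragment in str(trimmed.translate()).split("*"):  # split at stop codons
            start = fragment.find("M")
            if start != -1 and len(fragment) - start > len(best):
                best = fragment[start:]
    return best

# For the assembled SpliLec cDNA one would expect a 309-residue product
# (an 18-residue signal peptide plus the 291-residue mature peptide).
```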
RT-PCR amplification and cloning of the lectin gene

To obtain the full-length sequence, the 5′ end of the cDNA was amplified using the RACE PCR method, purified, cloned and sequenced. The full-length sequence of SpliLec cDNA was amplified using LecFLF and LecFLR. RT-PCR was optimized for the primer set and successfully amplified a ≈1150 bp fragment (Fig. 2). The positive PCR product was visualized, eluted and cloned into the PCR-TOPO vector (Fig. 2, lane 2). Using the PCR screening method, the clone PCR-TOPOSpliLec was tested as positive (Fig. 2, lane 4). Two positive clones of the SpliLec fragment were selected and sequenced (to exclude PCR errors) using the LecFLF and LecFLR primers (Table 1).

Nucleotide sequence and sequence analyses

The nucleotide sequence of SpliLec and its deduced amino acid sequence are shown in Fig. (3). A single orf encoding a 309-residue polypeptide was detected in the SpliLec sequence. One stop codon was found at the 3′ end. The flanking region of the initiation codon ATG is AGTATGGAG, and the 5′ untranslated region (UTR) was 60 bp long. The 3′ UTR was 60 bp long, preceding the poly(A) tract. The putative polyadenylation sequence AATAAA was located 15 bp downstream from the stop codon (Fig. 3). The identified SpliLec orf includes a signal peptide (54 bp) and a mature peptide (873 bp). Analysis of the amino acid sequence deduced from the cDNA indicated that SpliLec is a member of the C-type lectin superfamily. It contains two C-type CRDs, an amino-terminal domain, CRD1 (residues 1-149), and a carboxyl-terminal domain, CRD2 (residues 160-301). The deduced SpliLec polypeptide contains 50 strongly basic, 28 strongly acidic, 127 hydrophobic and 104 polar uncharged amino acids.

The calculated molecular masses of the putative SpliLec and its mature peptide are 34.85 and 32.91 kDa, respectively. The theoretical isoelectric points (pIs) were 9.27 and 9.38 for the full-length and mature SpliLec peptides, respectively. The net charges at pH 7.0 were 15.9 and 16.9 for SpliLec and its mature peptide, respectively. Both the full-length and the mature SpliLec peptides were classified as unstable (Instability Index (II): 55.81 and 56.95, respectively). Ratios of the hydrophilic residues were calculated as 37 and 38% for the full-length peptide and its mature peptide, respectively. The nucleotide sequence and deduced amino acid sequence of SpliLec were blasted against all available sequences in the GenBank database. Alignment results revealed that the SpliLec sequence (Acc# HQ603826) has significant alignments with 9 published lepidopteran DNA sequences and 14 peptide sequences. Although the percentage identity ranged from 100% with the IML-A precursor (Acc# AF053131) to 69% with IML-3 (Acc# AY768811) of Manduca sexta, this did not necessarily imply full consistency, especially when the percentage coverage of the gene was taken into account. Some insect lectins covered the forward region of the SpliLec sequence and others covered the backward segment (e.g. the M. sexta and Bombyx mori immunolectins) (Fig. 4A and B).
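Peptide-level summary statistics of the kind reported above (molecular mass, theoretical pI, instability index) can be recomputed from a deduced sequence with Biopython's ProtParam module; a sketch in which demo_peptide is an arbitrary placeholder, not the actual SpliLec sequence:

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def peptide_properties(sequence):
    """Molecular mass, theoretical pI and instability index of a peptide."""
    pa = ProteinAnalysis(sequence)
    return {
        "molecular_mass_kDa": pa.molecular_weight() / 1000.0,
        "theoretical_pI": pa.isoelectric_point(),
        "instability_index": pa.instability_index(),  # > 40 suggests instability
    }

demo_peptide = "MKLLVLSLALLAVSSA"  # placeholder fragment, not the SpliLec sequence
print(peptide_properties(demo_peptide))
```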
Primary and secondary structure analyses, post-translational modification and topology predictions revealed that the amino acid sequence of the putative SpliLec peptide had one signal peptide cleavage site (between positions 18 and 19), one tyrosine-glycosylated site and two tyrosine-sulfated sites at positions 111, 31 and 33, respectively. Fifteen O-GlcNAcylated residues (8 Ser and 7 Thr) and six potentially glycated lysines were predicted. Twenty-one phosphorylation sites (Ser: 11, Thr: 6 and Tyr: 4) and 44 kinase-specific phosphorylation sites (24 Ser, 2 Tyr and 18 Thr; highest score: 0.82, PKC at position 185) were also predicted. In addition, two transmembrane helices were predicted (one primary: 166-182, with outside-to-inside orientation, and one secondary: 3-22, with inside-to-outside orientation).

Phylogenetic analyses of the SpliLec sequence

Phylogenetic analyses of the SpliLec have been performed with 47 nucleotide sequences (including 10 insect genera from the order Lepidoptera) and 14 polypeptides (including 8 insect species: 3 lepidopterans and 5 dipterans). The results of these analyses are shown in Figs. (5A and B). These included the LPS-binding proteins of the silkworm, B. mori (Koizumi et al., 1999), and the putative lectin of the fall webworm, H. cunea (Shin et al., 2000). The predicted modifications of the SpliLec protein suggested an important role of the SpliLec protein in modulating a broad range of biological processes in the cell. The predicted O-GlcNAcylation suggested a possible function of the SpliLec protein in macromolecular complex assembly and intracellular transport. Glycosylation and glycation serve for the correct folding and stability of the protein (unglycosylated proteins degrade quickly). Glycosylation of proteins plays a role in cell-cell adhesion (a mechanism employed by cells of the immune system) as well (Varki et al., 2009). Reversible phosphorylation of proteins (by kinases and phosphatases) is considered an important regulatory mechanism in protein-protein interaction via recognition domains (i.e. many proteins and receptors are switched "on" or "off" by phosphorylation and dephosphorylation). It also results in conformational changes in the structure of many peptides, causing them to become activated, deactivated or degraded (Olsen et al., 2006). In addition, many transmembrane proteins (TPs) function as gateways or "loading docks" to deny or permit the transport of specific substances across biological membranes (into or out of the cell, by folding up or bending through the membrane).

Reconstruction of the phylogenetic trees of the SpliLec nucleotide sequence and its deduced polypeptide resulted in two different topologies. The two trees clustered the SpliLec sequence in two different groups (with Bombyx in the nucleotide-based tree and with Anopheles in the amino acid-based tree), indicating a possible evolutionary relationship between these lectins, which might descend from a common ancestor. Grouping of some lepidopteran and dipteran lectins (e.g. M. sexta with Sarcophaga, and S. littoralis with Anopheles) in one sister clade indicated that they may be homologous or share some similarity. In addition, lepidopteran lectin-like sequences diverged into many sister clades at the amino acid level due to differences in codon usage between species.

In short, these findings shed new light on the lectin-mediated immune system. Combining these findings with those reported by Seufi et al. (2009), Seufi et al.
(2011) and Seufi (2011) suggests that the SpliLec, SpliDef and SpliCec peptides, together with other possible AMPs, may constitute the defense network of S. littoralis (Lepidoptera) against invading microorganisms.

In conclusion, the current results provide a novel insect lectin gene (SpliLec) with two tandem CRDs. SpliLec plays an important immune role in S. littoralis by cooperating with other AMPs to clear invading microorganisms. These findings would be helpful in future studies on lectins concerning ELISA, PCR and other related molecular and immunological techniques. Future studies on the carbohydrate-binding and blood group specificities, and on the determination of the molecular weight and three-dimensional structure of SpliLec, will be needed to provide direct evidence and a better understanding of the SpliLec mode of action.

Fig. 1: Representative 1.5% agarose gels of the DD-PCR patterns generated from control and S. aureus, E. coli and S. sanguinis-challenged haemolymph samples using 8 primers corresponding to well-known lectin genes. Lane M: DNA marker, 100 bp ladder; lanes 1, 4, 8 and 10: controls of different treatments; lanes 9 and 11: 24 h post-infection with S. aureus; lanes 2, 3 and 5, 6: 24 and 48 h post-infection with E. coli; lane 7: 72 h post-infection with S. sanguinis. Arrows refer to differentially displayed sequenced bands.

Fig. (3): Nucleotide and corresponding deduced amino acid sequence of the S. littoralis immunolectin gene (SpliLec). The cleavage site between the signal and mature peptides is indicated by an arrow. Positions of cysteine residues are shaded and numbered. The asterisk indicates the stop codon. The boxed sequence represents the putative polyadenylation signal.

Fig. 5: Phylogenetic analysis of SpliLec nucleotide and deduced amino acid sequences compared to 46 and 13 sequences registered in NCBI. Phylogenetic trees were generated from 47 and 14 lectin-related sequences by neighbor-joining distance analysis using Mega4 software. Full sequence names and accession numbers are included in the tree.

Table 1: Key to the primers used in this study, providing their names, origins and sequences.
Determining the Fundamental Failure Modes in Ni-rich Lithium Ion Battery Cathodes

Challenges associated with in-service mechanical degradation of Li-ion battery cathodes have prompted a transition from polycrystalline to single crystal cathode materials. Whilst for single crystal materials dislocation-assisted crack formation is assumed to be the dominant failure mechanism throughout battery life, there is little direct information about their mechanical behaviour, and mechanistic understanding remains elusive. Here, we demonstrated, using in situ micromechanical testing, direct measurement of local mechanical properties within LiNi0.8Mn0.1Co0.1O2 single crystalline domains. We elucidated the dislocation slip systems, their critical stresses, and how slip facilitates cracking. We then compared single crystal and polycrystal deformation behaviour. Our findings answer two fundamental questions critical to understanding cathode degradation: What dislocation slip systems operate in Ni-rich cathode materials? And how does slip cause fracture? This knowledge unlocks our ability to develop tools for lifetime prediction and failure risk assessment, as well as to design novel cathode materials with increased toughness in service.

Introduction

The successful development of next-generation Li-ion batteries (LIBs) exhibiting higher capacity relies on the stable functioning of cathode materials [1]. LiNixMnyCo1-x-yO2 (NMC) is a favourable material family owing to its high capacity and operating voltage, yet relatively low cost [2][3][4]. LiNi0.8Mn0.1Co0.1O2, or NMC811, stands out due to its high specific energy density and low Co content [5]. The biggest challenge is the rapid in-service degradation of Ni-rich NMC particles, leading to significantly reduced capacity over time [6][7][8]. Apart from the chemical degradation processes, observation of intergranular fracture events in agglomerated polycrystalline particles post-cycling has prompted the recent development of single crystal particles in LIB cathodes [9,10]. This has moderately improved mechanical stability [11] and the efficiency of Li transport; however, evidence of intragranular defect generation suggests that durability is still a challenge [12].

The mechanical properties of NMC particles contribute significantly to their chemical and electrochemical behaviour, thereby affecting the capacity and cyclability of LIBs. Mechanical degradation is widely regarded as an undoubted cause of battery failure [7]. Mechanical degradation, or more specifically cracking, increases the total surface area of the particles. The fresh, free surface converts from the layered structure into a rock-salt layer, accompanied by increased oxygen loss. Studies have shown that the released oxygen is singlet oxygen, which is very reactive and can react with the electrolyte to generate moisture and HF [13,14]. HF can attack the active cathode material, in this case NMC811, which expedites transition metal dissolution and further oxygen loss. Moreover, formation of the rock-salt layer increases the ionic and electric impedance of the cathode, leading to sluggish Li-ion diffusion. Thus, mechanical failure of NMC ultimately results in degraded capacity and cyclability of LIBs.

Two main sources of mechanical degradation of these materials have been reported:

1. NMC particles are bonded to the Al current collector via tape casting followed by a calendaring process, to improve particle packing density and to enhance the particle-current collector connection.
The mechanical load experienced during calendaring can, however, cause particle cracking, which subsequently deteriorates battery performance [15,16].

2. Charge and discharge cycles involve the removal and insertion of Li ions in the basal planes ("Li slabs") of the crystal lattice (Figure 1(a)). This changes the lattice parameters and therefore the shape of the crystals [5]. For polycrystalline particles, this may cause intergranular fracture [17]. For single crystal particles, this, along with the concentration gradient of Li atoms, may trigger microcrack formation and dislocation movement [12].

There is, however, controversy over whether the Li-intercalation-induced stress is significant enough to cause defect formation and mobility. The strength of LiCoO2 single crystals (with the same crystal structure as NMC) has been measured at 2-4 GPa [18], but the shear stresses involved upon charge/discharge are an order of magnitude lower according to numerical simulations based on an isotropic diffusion-induced stress model [12]. Stallard et al., who employed an indirect mechanical testing approach, have argued that the basal plane shear strength of NMC811 is below 100 MPa [19]. This is not supported by the results of Feng et al. [18], where none of the samples, even the cycled ones, showed sub-GPa-level strength. Therefore the basal plane shear strength is unlikely to be much lower than 1 GPa, unless the orientations of all samples tested, although not reported, coincidentally had their basal planes either parallel or perpendicular to the loading axis, making the Schmid factor for basal slip zero [20].

Numerous research articles report the detrimental effect of dislocations in cathode materials on the electrochemical performance of batteries [21][22][23][24][25][26][27]. In crystalline materials, dislocations are a group of 1D defects, and their movement at the atomic scale leads to plastic deformation at a larger scale. Under load, dislocations often glide on specific sets of crystallographic planes. These planes, and the dislocation Burgers vectors, are often material-dependent and are termed slip systems. Slip systems, and the stresses required to activate them, fundamentally determine the mechanical performance of a (plasticity-controlled) material [28] such as NMC811. In LIB cathodes, dislocations form due to electrochemically and/or thermally induced strain fields upon synthesis and cycling [23,27], and they are found to facilitate crack formation through two mechanisms:

1. Cracks are formed at edge dislocation cores directly due to the local stress fields [22][23][24][25].

2. Dislocation glide results in oxygen release [21], which routinely triggers crack formation in battery materials [25].

However, work in the literature to date has failed to answer key questions about how dislocations facilitate cracking of NMC materials quantitatively: What are the slip systems and failure modes, and just how strong is each of these modes? Also, at which critical point during dislocation glide do cracks start to form? Answers to these questions can form the basis of understanding the mechanical behaviour of any material [28] and potentially expedite the improvement of materials and device design against degradation [29]. They are also key to understanding failure at all stages of battery life, as the failure mechanism (dislocation-assisted crack formation) operates independently of the stress state and the source of stress. In LIBs, NMC materials are often in the form of powders [10].
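The Schmid-factor argument above is easy to check numerically: the resolved shear stress on a slip system is the applied stress scaled by m = cos(phi) x cos(lambda), which vanishes when the slip plane is parallel or perpendicular to the loading axis. A short illustrative sketch:

```python
import numpy as np

def schmid_factor(phi_deg, lam_deg):
    """m = cos(phi) * cos(lambda); phi: angle between loading axis and slip-plane
    normal, lambda: angle between loading axis and slip direction."""
    phi, lam = np.radians([phi_deg, lam_deg])
    return np.cos(phi) * np.cos(lam)

print(schmid_factor(45, 45))  # ~0.5, the maximum under uniaxial loading
print(schmid_factor(90, 0))   # 0: slip plane parallel to the loading axis
print(schmid_factor(0, 90))   # 0: slip plane perpendicular to the loading axis
```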
Mechanical testing of powders, by compressing them with a flat indenter, can be used to evaluate performance [30]. However, converting load-displacement data into stress-strain curves can result in large uncertainties given the irregular shapes of the samples, and extraction of deformation modes is difficult as the crystal orientations are largely unknown. Fabrication of small-scale samples with well-defined geometries and known crystal orientations is hence essential, and current state-of-the-art equipment allows testing to be carried out in situ inside electron microscopes [31][32][33], where failure can be imaged directly in real time.

Here, we carried out in situ compression tests of pristine NMC811 single crystal micropillars with known crystal orientations in a scanning electron microscope (SEM), to determine the slip systems in this material, their critical stresses, and how dislocations gliding on different slip systems trigger crack formation. NMC811 powders were sintered into a bulk material to produce a stiff substrate for mechanical testing. X-ray diffraction (XRD) was used to identify the crystal structure (Figure 1(b)) and to measure lattice parameters (Table S1). Transmission electron microscopy (TEM) was used to observe the layered atomic arrangement (Figure 1(c)), and secondary ion mass spectrometry (SIMS) to probe the variations in chemical composition before/after thermal treatment (Figure S1). Electron backscatter diffraction (EBSD) analysis was conducted to reveal the microstructure and crystal orientations (Figure 1(d)), for fabricating and testing micropillars within targeted crystals. Pillars with dimensions on the same length scale as the single crystal particles employed in real batteries were fabricated and tested in order to rule out any size effects in micromechanical testing [34]. Thereafter, the slip systems were identified based on knowledge of the crystal orientations, and the stress required to activate each of the slip systems was calculated. This work enabled detailed analysis of how deformation gives rise to failure, and allowed comparison of behaviours between single crystal and polycrystalline samples.

Sample preparation

Sintered pellets were prepared with commercially available single crystal NMC811 powders from Li-Fun Technology Co. (Figure S2(a,b)), which were stored in an Ar-filled glovebox (< 0.6 ppm H2O; < 0.6 ppm O2). There are two reasons why the powders were sintered into a bulk pellet before the mechanical tests were carried out:

1. It is not practically feasible to polish the powders for EBSD experiments, making it difficult to determine the crystal orientations pre-test, and therefore the activated slip systems of the test pieces post-test.

2. A sintered pellet can serve as a stiff substrate for the micromechanical tests, so as to substantially reduce the compliance of the test setup (compared to stacked powders).

The powders were first ground and mixed with Li2CO3 (10% Li excess) in a mortar and then uniaxially pressed into pellets in the glovebox. The pellets were further isostatically pressed at 300 MPa before sintering in a Pt crucible at 1000 °C for 10 h in static air [35]. The sintering temperature is higher than the decomposition temperatures of potential surface impurities such as Li2CO3 and LiOH, which are detrimental to capacity retention [36,37].
To remove any residual Li impurities on the surface after sintering, the sintered pellets were ground using SiC papers of successive grades in the glovebox and cleaned in an ethanol ultrasonic bath before further characterisation.

SIMS

An ION-TOF TOF-SIMS 5 time-of-flight secondary ion mass spectrometer was employed. Analysis was performed in negative mode with the high-current bunch mode at a mass resolution of ca. 10,000. A 25 keV Bi+ primary beam was used for the analysis over an area of 100 × 100 µm2, and a 500 eV single Ar+ sputtering beam was applied for sputtering over an area of 300 × 300 µm2.

EBSD

We employed EBSD to determine the crystal orientations of the grains in the sintered pellet. We mapped a large area on the polished sample surface, and then looked for grains with desired orientations for micromechanical testing. The sample frame (x, y, z directions) was physically marked and kept constant throughout the EBSD - FIB milling - in situ testing - post-mortem analysis procedure, which allowed us to 1) fabricate and test pillars in grains with desired orientations and 2) work out the crystallographic planes and directions (slip systems) associated with the slip traces. A sintered pellet was polished mechanically with diamond suspension (in ethanol), and then polished with a broad ion beam in a Gatan PECS II Ar ion polishing system. EBSD characterisation was carried out on a Zeiss Sigma 300 SEM equipped with an EDAX Clarity™ Plus direct electron detector EBSD camera, using a beam acceleration voltage of 20 kV, a probe current of ~10 nA, and a spatial step size of 0.25 μm. EBSD data was analysed using EDAX OIM Analysis v8 software. The raw data (IPF map in Figure S3) was processed through dictionary indexing of the EBSD patterns [39] to help resolve pseudo-symmetry of the crystal structure, followed by a clean-up step with a single iteration of grain dilation.

In situ micropillar compression tests

Micropillars were fabricated by FIB milling on a Helios 5 CX DualBeam microscope, automated using TFS NanoBuilder software (Figure S2(c)). The single crystal pillars are ~2.5 μm in height and 1 μm in mid-height diameter, while the polycrystalline pillars are ~15 μm in height and 5 μm in mid-height diameter. The taper angle is ~5°. For the 30 keV Ga+ FIB used to mill the pillars, the FIB-damaged zone with Ga-ion implantation varies with material, but is typically on the order of ~10 nm [40][41][42][43][44][45][46], about 1% of the pillar thickness. Thus the effect of the damaged zone on the mechanical responses of the pillars should be nearly negligible. In situ compression tests of the micropillars were conducted using an intrinsically displacement-controlled Alemnis nanoindenter in a TFS Quanta 650 SEM. A schematic diagram of the experimental setup is shown in Figure S2(d). The single crystal and polycrystalline pillars were tested at displacement speeds of 5 nm/s and 30 nm/s respectively, resulting in the same strain rates for both sets of tests. The total displacement applied for each pillar can be found on the stress-displacement curves. No hold at maximum displacement was applied (loading was followed immediately by unloading). Stress is defined as load divided by the top-surface cross-sectional area of each pillar. When the tests were complete, post-deformation SEM images of the micropillars were captured on a Zeiss Sigma 300 SEM, using an acceleration voltage of 5 kV and the in-lens detector for achieving high spatial resolution.
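Using the stress definition above (load over the top-surface cross-sectional area), raw load-displacement data can be converted to engineering stress and strain as follows; the load value in the example is illustrative only:

```python
import numpy as np

def engineering_stress_strain(load_uN, displacement_nm, top_diameter_um, height_um):
    """Stress = load / top-surface cross-sectional area (the convention used
    here); strain = displacement / pillar height."""
    area_um2 = np.pi * (top_diameter_um / 2.0) ** 2
    stress_GPa = load_uN / area_um2 * 1e-3  # 1 uN/um^2 = 1 MPa
    strain = displacement_nm / (height_um * 1e3)
    return stress_GPa, strain

# e.g. a single crystal pillar (~1 um top diameter, ~2.5 um height):
print(engineering_stress_strain(load_uN=1600.0, displacement_nm=100.0,
                                top_diameter_um=1.0, height_um=2.5))
# -> roughly (2.04 GPa, 0.04), i.e. GPa-level stresses at percent-level strains
```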
The slip system(s) activated for each pillar were then identified based on the orientation(s) of the slip bands and the crystal orientation derived from EBSD data. It is worth noting that the displacement rates were selected such that the tests could be completed in minutes. This is important for small-scale mechanical testing, where thermal drift of load/displacement sensors may significantly affect the test results, especially when working with low load levels. Long experiments may invalidate the linear load drift assumption one normally makes when correcting raw data. Hence the displacement rate used could be higher than that often experienced in actual battery cathodes upon cycling. For example, if during delithiation of an NMC particle a contraction of 10% and a charge rate of 1 C are assumed, this would result in a deformation rate of ~7 × 10⁻² nm/s, which is smaller than the 5 nm/s used here. Strain rate sensitivity of the failure process could be studied in detail in future work.

Results and discussion

To activate basal slip, a single crystal pillar with its basal plane ~45° to the loading axis was deformed. An SEM image of the deformed pillar (to a displacement of ~220 nm) is shown in Figure 2(a), which indicates that plastic slip occurred on the basal plane and along the <a> direction. A long vertical crack is observed beneath a slip band. The crack is through-thickness, as evidenced by Figure S4(a), where the same pillar is viewed from another angle. The crack appears to be non-straight and is therefore unlikely to lie along a specific crystallographic plane. The pillar in Figure 2(b), which has its basal plane normal (c-axis) nearly perpendicular (~85°) to the loading axis, showed similar behaviour: plastic slip on an inclined crystallographic plane and a vertical crack beneath the slip plane. In contrast, plastic slip on this pillar occurred on the prismatic plane and along the <a> direction. Additionally, the vertical crack was evidently connected to a shear crack on the plane of plastic slip, the same as the crack on the previous pillar (Figure S4(a)).
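The quoted delithiation deformation rate can be reproduced with back-of-envelope arithmetic; the 2.5 µm characteristic dimension below is an assumption (taken from the pillar height used here):

```python
# 10% contraction of a 2.5 um dimension at a 1 C rate (full delithiation
# in one hour); the 2.5 um value is an assumption, not from the source.
height_nm = 2.5e3
rate_nm_per_s = 0.10 * height_nm / 3600.0
print(f"{rate_nm_per_s:.0e} nm/s")  # ~7e-02 nm/s, vs the 5 nm/s used in the tests
```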
Despite the low symmetry of the crystal structure (Figure 1(a)), the shear stresses required to activate the basal, prismatic and pyramidal slip systems are relatively close (1.7, and 2.4 GPa, respectively), giving rise to only moderately anisotropic yield strengths (Figure 2(d)). Nevertheless, the post-yield behaviour was fairly anisotropic: the orientations that provoke single slip (Figure 2(a) and (b), where only one basal and one prismatic slip system was activated, respectively) were seen to trigger crack formation that tends to cause complete fracture upon further loading. Owing to the incompressible nature of plasticity [49], horizontal strain components are generated under uniaxial stress to conserve the volume of the pillars upon plastic deformation, and these could drive the formation of the vertical cracks in brittle materials [50,51]. Such cracks were also observed in compression tests of agglomerated polycrystalline NMC811 particles (Figure S5). Because of the instantaneous nature of the crack formation process (see the load drops in Figure 2(d)), it was not possible to determine in situ whether the vertical or the shear crack occurred first (the frame time was ~1.6 s). One possibility is that the crack initially formed perpendicular to the slip plane, owing to a widening of the atomic layers on one side of the slip plane, and subsequently rotated to a vertical orientation under the indirectly resulting horizontal tensile strain. Confirming this, however, requires a more stable crack-growth process and higher-resolution real-time characterisation. This may be achievable through in situ fracture tests in the TEM, as recently demonstrated on other materials by some of the authors of this work [52].

No long vertical crack was observed on the pillar in Figure 2(c). Short cracks were, however, observed at intersections of slip bands on a pillar oriented for multiple slip (Figure 3). These cracks appear significantly shorter than those seen in Figure 2(a,b). They are also not through-thickness, as evidenced by their absence in Figure S4(c), which shows the rear side of the same pillar. Another example is shown in Figure S4(d), where a short crack was observed at intersections between two sets of slip bands on pyramidal planes (the orientation of this pillar is similar to that in Figure 2(c), where the c-axis is nearly parallel (~10°) to the sample surface). These data therefore suggest that single slip might trigger the generation of long, through-thickness cracks that strongly affect the structural integrity of the crystals (Figure 2(a,b)), whereas multiple slip could suppress the formation of such large cracks (Figure 2(c), Figure 3(a)), although much smaller cracks can form at slip band intersections (Figure 3(b), Figure S4(d)). These smaller cracks do not immediately lead to failure, yet they may impair local electron transport during battery operation, interact with electronic defects, and develop into larger defects during charge/discharge cycles, causing safety issues.

Our results indicate a moderately anisotropic yield strength of single crystal NMC811, in which basal slip is only slightly weaker than the other deformation modes. This explains prior work in which pristine LiCoO2 single crystals with (presumably) random orientations always exhibited a strength level similar to those in Figure 2(d) [18]. However, when subjected to Li-intercalation-induced stresses of a few hundred MPa [12], such a strong material should not degrade appreciably.
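Returning to the volume-conservation argument made above for the vertical cracks: for incompressible plastic flow, the tensile lateral strain follows directly from the axial strain. A minimal sketch, treating the ~220 nm displacement as if it were entirely plastic (an approximation, since it includes the elastic part):

```python
def lateral_strain(axial_strain):
    """Incompressible plasticity: (1 + e_ax) * (1 + e_lat)**2 = 1,
    so e_lat = (1 + e_ax)**-0.5 - 1. Compression (e_ax < 0) yields a
    tensile lateral strain that can open vertical cracks in brittle solids."""
    return (1.0 + axial_strain) ** -0.5 - 1.0

# ~220 nm displacement on a ~2.5 um tall pillar -> e_ax ~ -0.088
print(lateral_strain(-220 / 2500))  # ~ +0.047 tensile lateral strain
```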
The damage nevertheless observed upon cycling therefore suggests that its accumulation is possibly accelerated by pre-existing defects in the material and/or by (electro-)chemical driving forces, which can induce various types of defects [53]. During calendering, where the particles are subjected to compression and shear, defects similar to those observed on the pillars may be generated; these could decrease the threshold stress required for fracture, so that the Li-intercalation-induced stresses could indeed be high enough to provoke failure. During electrochemical cycling, chemical degradation processes, e.g., oxygen-loss-induced phase transformation and transition metal dissolution, can change the material's chemistry (and phase structure) and further affect the mechanical properties. Future work on in situ and operando characterisation of uncalendered/calendered particles during charge cycles could help clarify the problem.

During electrochemical cycling, the stresses in the cathode particles are sometimes tensile. One may argue that, in ceramic-like materials such as NMC, this might lead to a mechanical response different from that in the compression tests here, and that under tensile stresses the material may fail not by plastic deformation caused by shear stresses but by brittle failure caused by normal stresses. However, many prior works have reported dislocation-assisted crack formation in LIB cathodes after cycling [21–25], indicating the dominance of this mechanism regardless of the stress state. Therefore, the slip systems, critical stresses and fracture modes extracted in this work should be useful for understanding and modelling cathode failure not only upon calendering, where the stress state is compressive, but also upon cycling, where tensile stresses occur.

Polycrystalline pillars were also tested to understand the role of grain boundaries (GBs) in the failure of sintered NMC811. Note that the sintered NMC studied here is different from common agglomerated polycrystalline NMC [10]; the latter should be much weaker at GBs, as no "diffusion bonding" treatment is applied. As Figure 4 and Video S1 show, for all three pillars, stress drops occurred immediately after the elastic regime and no stable plastic flow was observed.

Conclusions

In order to gain quantitative knowledge of the fundamental deformation and failure modes of pristine NMC811 as a cathode material for next-generation LIBs, we carried out in situ SEM micromechanical tests on samples with known crystal orientations and a well-defined stress state.

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request. The software used for the representation of crystal orientations was written by Dr Vivian Tong at the National Physical Laboratory, UK.

Author contributions

SW and FG designed the study. SW performed EBSD analysis and mechanical testing, and wrote the original draft. ZS prepared the sample and carried out XRD and SIMS experiments. AO and OGD conducted TEM imaging. MPR and FG supervised the work. All authors reviewed the manuscript.
Source-sink relationships of wheat plants obtained by the application of systemic herbicides

The objective of this work was to evaluate the potential of solute translocation among stems of wheat (Triticum aestivum) by the application of the systemic herbicides clethodim and glyphosate. The experiments were carried out under greenhouse conditions, using wheat cultivars with contrasting tillering potential, during the winters of 2017 and 2018 in Southern Brazil. Clethodim was applied on isolated stems to evaluate the potential translocation of solutes from the first three tillers; translocation was evaluated through the yield parameters of ears on stems not treated with clethodim. Glyphosate was used to evaluate the potential translocation of solutes from late tillers; translocation was evaluated through carbon assimilation in stems without the glyphosate treatment. When applied on isolated primary tillers, clethodim reduced yield parameters only in the treated stems, which shows that the assimilates were targeting the main sink, to the detriment of the other organs. There was a decrease in net carbon assimilation in stems without the glyphosate treatment at the first-visible-node stage. Glyphosate applied on late tillers at anthesis reduced net carbon assimilation even further in nontreated stems of the primary tillers of the BRS Parrudo wheat cultivar (low tillering). Well-developed late tillers show a higher potential of solute translocation to the whole plant than poorly developed ones.

Introduction

Physiological and productive parameters of contrasting wheat cultivars have been addressed under various conditions, such as sowing dates and densities, plant nutrition, and environmental stresses (Duggan et al., 2005; Ruan et al., 2012; Mitchell et al., 2012, 2013; Guo & Schnurbusch, 2015; Hendriks et al., 2016; Houshmandfar et al., 2019). Despite all these studies, it is still difficult to understand the interactions between the main stem and tillers in the source-sink relationship. The ability of wheat plants to remobilize assimilated carbon from the stem into the spike during grain filling has been widely reported, especially in restrictive environments (Ahmadi et al., 2009; Mitchell et al., 2013; Guo & Schnurbusch, 2015; Dodig et al., 2017; Turek et al., 2018). However, the mobilization of reserves among stems of the same plant is less well understood. Long-distance transport between the main stem and lateral shoots is well developed in dicotyledon (dicot) species, which are the more studied group in relation to solute translocation (Slewinski et al., 2013). In these plants, xylem and phloem vessels establish a large network of translocation and communication throughout the plant structure (Lucas et al., 2013). The multistem structure of monocots, such as wheat, raises questions about the effectiveness of communication among stems of the same plant. It is well known that tillers are supported by the main stem with sugars, water, and nutrients during the emission period, as a result of vascular connections (Alves et al., 2000). However, to be a transient sink of assimilates, senescent tillers would have to keep these vascular connections with the whole plant until the grain-filling stage. The source-sink relationship has been widely studied in the past decades, using increasingly improved tools (Turgeon & Wolf, 2009; Dinant & Lemoine, 2010; Lucas et al., 2013).
Since phloem long-distance transport of solutes depends on vascular connections between source and sink organs, the application of systemic herbicides can be a useful tool to study the potential of solute translocation in plants. Clethodim is a phloem-mobile herbicide (Bromilow et al., 1990) that belongs to the cyclohexanedione chemical family, whose members are potent inhibitors of the enzyme acetyl-coenzyme A carboxylase (ACCase, EC 6.4.1.2) (Burton et al., 1987). Glyphosate is also a phloem-mobile herbicide (Bromilow et al., 1990) that binds to and blocks the activity of the enzyme enolpyruvylshikimate-3-phosphate synthase (EPSP synthase, EC 2.5.1.19). Clethodim and glyphosate show high mobility among plant organs of wheat (Nandula et al., 2007), as well as in other monocot species, such as Elytrigia repens (Tardif & Leroux, 1991). However, such translocation has only been shown at early stages of plant development.

The objective of this work was to evaluate the potential of solute translocation among stems of wheat by the application of the systemic herbicides clethodim and glyphosate.

Materials and Methods

Two experiments were carried out under greenhouse conditions, in a randomized complete block design with a 2×4 factorial arrangement and four replicates, during the winters of 2017 and 2018, in the experimental area of the Universidade Federal de Santa Catarina, Curitibanos campus, Southern Brazil. The wheat cultivars BRS Guamirim (high tillering) and BRS Parrudo (low tillering) were used in both experiments as genotypes with contrasting tillering potential (Fioreze et al., 2019). In 2017, the four treatment levels consisted of nontreated control plants and clethodim application on the main stem, the first tiller, or the second tiller (Figure 1). A clethodim (Select 240 g L⁻¹ EC; UPL, Itupeva, SP, Brazil) spray solution at 0.24 g L⁻¹ was applied at stage 10.5 (flowering) of the Feekes scale (Large, 1954), on the first three leaves of the stem apex, using a paint brush. In 2018, the four treatment levels consisted of nontreated control plants and glyphosate application on late tillers, as follows: on all late tillers at stage 6 (first node) of the Feekes scale; on all late tillers at stage 10.5; or on the two last-emitted tillers, at stage 10.5 (Figure 1). The glyphosate (Crucial 540 g L⁻¹ SL; Nufarm, Maracanaú, CE, Brazil) spray solution, at 6.6 g L⁻¹, was applied on the leaves of each of the described tillers, using a paint brush. In both experiments, the main stem and the first two (2017) or three (2018) emitted tillers were considered primary tillers. For 'BRS Guamirim', the primary tillers were T0, T1, and T2, while for 'BRS Parrudo', they were T1, T2, and T3 (Masle, 1985). Early after emergence, the different classes of tillers (primary tillers) were identified with colored cotton threads; all tillers emitted after that were considered late tillers. Plants were grown in 3.6 L plastic pots filled with a Cambissolo Háplico típico (Brazilian soil classification; Santos et al., 2018), which corresponds to an Inceptisol, with a clayey texture (550 g kg⁻¹ clay), limed with 1.51 g dm⁻³ limestone. The soil was fertilized with 120 mg dm⁻³ potassium chloride (60% K₂O) and 2.16 g dm⁻³ triple superphosphate (42% P₂O₅). Side-dressing N fertilization took place every 15 days between emergence and anthesis, using urea (45% N) applied via solution (25 mg dm⁻³ N) to reach 150 mg dm⁻³ N.
In each pot, five seeds were sown at 3 cm soil depth. Ten days after seedling emergence, only one plant was kept growing in each pot. At the maturity stage, clethodim-treated plants were collected to determine the yield parameters. Plants were separated into main stem, first tiller, second tiller, and late tillers, to study the effects of the herbicide on the yield parameters of nontreated stems. For each group, the length of the rachis, the number of fertile spikelets, the number of grains, and the mass of grains were evaluated. Additionally, the number of tillers and the number of fertile tillers were determined in control plants. In glyphosate-treated plants, net carbon assimilation was used as a tool to study the effects of the herbicide. Net carbon assimilation was measured every two days in the main stem, the first, second, and third tillers, and the treated late tillers (as well as in nontreated plants), using a portable open-system infrared gas analyzer, LI-6400XT (LI-COR Inc., Lincoln, NE, USA). The measurements were interrupted as soon as net carbon assimilation approached zero. As glyphosate killed the plants, yield parameters were not evaluated for them. Data were subjected to analysis of variance by the F test, at 5% probability. Means were compared by Tukey's test, at 5% probability, using the Sisvar software (Ferreira, 2011), for clethodim-treated plants. Data on net carbon assimilation (glyphosate-treated plants) were plotted with values of standard deviation.

Results and Discussion

The two wheat cultivars contrast in their potential for tiller emission (Figure 2). BRS Guamirim showed higher values for the total number of tillers and the number of viable tillers. The large number of productive tillers per plant is a consequence of the plants growing in pots, with no restrictions of water, nutrients, or solar radiation; accordingly, there was only a slight difference between the total number of tillers and the number of viable tillers per plant for each cultivar. There was no significant interaction between wheat cultivars and clethodim application for the productive parameters of wheat spikes (Table 1). The BRS Parrudo cultivar showed a higher individual potential of spike yield than BRS Guamirim, due to its lower number of spikes per plant. Accordingly, BRS Parrudo showed a greater rachis length, as well as a higher number of fertile spikelets and a higher number and mass of grains per spike, both for the primary tillers and on average for the other spikes of the plant. The effect of competition among tillers on the individual yield potential of spikes in wheat plants has been widely recorded: high tiller emission limits the productive potential of spikes because of competition for assimilates, water, and nutrients (Fioreze et al., 2019). Thus, limiting the number of tillers per plant, whether through mutations or even the manual removal of tillers, improves the productive potential of wheat spikes (Duggan et al., 2005; Dreccer et al., 2013; Mitchell et al., 2013; Hendriks et al., 2016). The clethodim application on the main stems of wheat plants did not affect the length of the rachis (Table 1), because the processes of spike differentiation and expansion occurred before the herbicide application.
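A hedged sketch of the statistical procedure described above; the original analysis used Sisvar, so the statsmodels-based version below, with a hypothetical file and column names, is only an illustration:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical layout: one row per plant, with cultivar, clethodim
# treatment, block, and a yield parameter such as grain mass per spike.
df = pd.read_csv("wheat_2017.csv")  # placeholder file name

# Analysis of variance (F test) for the 2x4 factorial in randomized blocks
model = smf.ols("grain_mass ~ C(cultivar) * C(treatment) + C(block)",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's test at 5% probability for the treatment means
print(pairwise_tukeyhsd(df["grain_mass"], df["treatment"], alpha=0.05))
```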
Clethodim did, however, affect the productive potential of the herbicide-treated stem, reducing the number of fertile spikelets, the number of grains and, consequently, the grain mass of the spike (Table 1). Nontreated stems, in contrast, did not change their productive potential. Thus, in plants for which the herbicide application was performed only on the main stem leaves, the effect was observed only on that stem. The same occurred for clethodim applications on the first (T1) or second (T2) tiller.

Glyphosate application to all late tillers of wheat at the beginning of stem elongation resulted in a decrease of net carbon assimilation (A) in the primary tillers of both wheat cultivars (Figure 3). The effect of herbicide exposure was observed more clearly four days after application on 'BRS Guamirim' plants, in which net carbon assimilation approached zero six days after application, whereas on 'BRS Parrudo' plants net carbon assimilation approached zero only eight days after application. [In Figure 3, growth stages follow the Feekes scale (Large, 1954); MC, main culm; T0, T1, T2, and T3 refer to the Masle (1985) classification of wheat tillers; DAA, days after application; vertical bars indicate the standard deviation.]

The effect of glyphosate application on all late tillers of 'BRS Guamirim' plants at the flowering stage (Figure 3) was slightly different from that observed for application during early stem elongation: the effect of the herbicide was lower at the flowering stage, especially in primary tillers not treated with the herbicide. Glyphosate application on all late tillers of 'BRS Parrudo' at the flowering stage showed the same effect as that observed for application during the stem elongation period: eight days after glyphosate application, both treated and nontreated tillers showed net carbon assimilation approaching zero.

A strong difference between the wheat cultivars was observed only when glyphosate was applied at the flowering stage on the two late tillers (Figure 4). The main stem of 'BRS Guamirim' plants was not affected by the herbicide application. In contrast, the main stem of 'BRS Parrudo' was strongly affected, with net carbon assimilation approaching zero eight days after application. For both cultivars, the two herbicide-treated tillers died six days after application.

The lack of translocation of clethodim between treated (primary tillers) and nontreated stems was quite consistent (Table 1). Several aspects of the translocation of substances related to carbohydrate metabolism, or even to long-distance signaling, have been addressed in the past decades (Turgeon & Wolf, 2009; Dinant & Lemoine, 2010; Lucas et al., 2013), with the presence of vascular connections being an essential criterion for translocation. Considering that each clethodim-treated tiller is a viable tiller, it is possible to suppose that the spike of the treated tiller was the main destination of translocation, since the spike is the main sink of the stem. This result does not prove the absence of vascular connections among wheat stems, but it clearly shows the direction of translocation between sources and sinks within the stem.
It is important to highlight that metabolites associated with long-distance signaling throughout the plant do not necessarily follow the movement of photoassimilates (Dinant & Lemoine, 2010; Lucas et al., 2013), although the translocation of the herbicide occurred by this route in the present study. The use of glyphosate on late tillers revealed some aspects of the movement of assimilates among stems of wheat plants. The reduction of net carbon assimilation in primary tillers not treated with glyphosate indicates the existence of vascular connections among stems, both in the stem elongation period and in the flowering period (Figure 3). This response was observed regardless of the tillering potential of each cultivar, although it was less intense for BRS Guamirim at the flowering stage. This is an interesting point, since the amount of active ingredient applied to BRS Guamirim plants was higher, given this cultivar's higher number of tillers (Figure 2), while the number of stems not treated with glyphosate was the same for both cultivars.

The most striking result was observed when glyphosate was applied only on the last two tillers and net carbon assimilation was measured on the main stem of each plant. The main stem of 'BRS Parrudo' was significantly affected by glyphosate, while net carbon assimilation remained constant in the main stem of 'BRS Guamirim' (Figure 4). Fioreze et al. (2019) found that 'BRS Parrudo' plants show higher biometric and productive uniformity among tillers than 'BRS Guamirim' under free-tillering, protected cultivation. Adding these aspects to the results obtained in the present study, it is possible to state that the translocation of solutes among wheat stems varies depending on the degree of development of a given tiller in relation to the others. Therefore, translocation could be more effective among well-formed stems, with a low possibility of translocation among senescent, malformed tillers. This would refute the idea that late, nonproductive tillers could be transient sinks that supply the grain filling of other stems under limiting environmental conditions.

Conclusions

1. Well-developed late tillers show a higher potential for the translocation of assimilates to the whole wheat (Triticum aestivum) plant than poorly developed late tillers.

2. Wheat plants show a low potential for translocation of assimilates from primary to secondary tillers.

3. There is translocation of assimilates from secondary tillers to the whole plant.
Clinical characteristics of single human papillomavirus 53 infection: a retrospective study of 419 cases

Background: Human papillomavirus (HPV) infection is the main cause of cervical cancer. Characteristics of HPV infections, including the HPV genotype and the duration of infection, determine a patient's risk of high-grade lesions. Risk quantification of cervical lesions caused by different HPV genotypes is an important component of the evaluation of cervical lesions, and data and evidence are necessary to gain a deeper understanding of the pathogenicity of different HPV genotypes. The present study investigated the clinical characteristics of patients infected with single human papillomavirus (HPV) 53.

Methods: This retrospective study analyzed the clinical data of patients who underwent cervical colposcopy-guided biopsy between October 2015 and January 2021. The clinical outcomes and follow-up results of patients with single HPV53 infection were described.

Results: The initial histological results were negative (Neg) in 82.3% of the 419 patients with single HPV53 infection. The numbers of patients with cervical intraepithelial neoplasia (CIN)1, CIN2, CIN3, vaginal intraepithelial neoplasia (VaIN)1, CIN1 + VaIN1, CIN1 + VaIN2, and CIN2 + VaIN2 were 45, 10, 2, 9, 6, 1, and 1, respectively. Cancer was not detected in any patient. When the cytology was negative for intraepithelial lesion or malignancy (NILM), atypical squamous cells of undetermined significance (ASC-US), or low-grade squamous intraepithelial lesion (LSIL), we observed a significant difference in the distribution of histological results (P < 0.05). According to the exclusion criteria, 95 patients underwent follow-up with cytology. No progression to high-grade lesions was observed during the follow-up period of 3–34 months.

Conclusions: Lesions caused by HPV53 infection progressed slowly, and the pathogenicity of a single HPV53 infection was low.

The incidence rate of cervical cancer has increased significantly, and the mortality rate is also on the rise [3]. Human papillomavirus (HPV) infection is the main cause of cervical cancer [4,5]. Based on the frequency of occurrence in cases of cervical cancer and the available biological data, alpha HPV types are classified as "carcinogenic to humans" (International Agency for Research on Cancer [IARC] classification Group 1), "probably/possibly carcinogenic to humans" (IARC Groups 2A and 2B), and "not classifiable as to carcinogenicity to humans" (IARC Group 3) [6]. The HPV53 genotype belongs to IARC Group 2B and is relatively common in populations worldwide. So et al. [7] analyzed 1,988 samples from healthy women, as well as from women with cervical intraepithelial neoplasia (CIN) 1–3 and cervical cancer, and observed that HPV53 (9.3%) was the fourth most common HPV genotype among all samples in the study. HPV53 has reportedly been one of the most frequently detected genotypes in several studies (prevalence rate 4.1–9.69%) [8–10]. Additionally, although HPV53 is considered a possibly carcinogenic HPV type, it was detected in < 0.5% of invasive cervical cancers [11,12]. Although many studies have focused on HPV infection, the clinical characteristics and risks of HPV53 infection remain inconclusive. This study retrospectively analyzed the clinical data of patients who underwent cervical colposcopy-guided biopsy. We investigated the clinical outcomes and follow-up results in patients with single HPV53 infection and recommend evidence-based clinical intervention strategies.
Study design and participants

This retrospective case-control study was conducted at the Women's Hospital, Zhejiang University School of Medicine, China, and was approved by the Institutional Ethics Committee (PRO2021-1292). We analyzed the clinical data of patients who underwent cervical colposcopy-guided biopsy between October 2015 and January 2021. Among the 1,048 patients with HPV53 infection, 424 (40.5%) had single HPV53 infection and 624 (59.5%) had multiple HPV infections. Of the 424 patients, 419 without a history of cervical operation or hysterectomy were enrolled in the final statistical analysis (Fig. 1). The exclusion criteria were as follows: (1) surgical treatment administered, (2) pregnancy, (3) no follow-up within 1 year after biopsy, and (4) loss to follow-up. By January 2021, 95 of the 419 patients had undergone follow-up for 3–34 months after colposcopy-guided biopsy. All patients underwent cytological evaluation (examination of cervicovaginal exfoliated cells) and HPV testing during follow-up. Some patients with new indications for colposcopy-guided biopsy underwent new histological analysis. In addition to the cervical examination, we performed careful colposcopic evaluation of the upper third of the vagina, including the fornices, in all patients. Patients who underwent surgical treatment, including excision and ablation therapy, showed negative results on the cytological evaluation performed at the initial follow-up.

Statistical analysis

All statistical analyses were performed using the Statistical Package for the Social Sciences, Version 21.0 (SPSS Inc., Chicago, IL, USA). Continuous non-normally distributed variables are presented as median (range). Categorical variables are expressed as frequencies and percentages. The (one-sample) chi-square test was used whenever appropriate. Statistical tests were two-sided, and P values < 0.05 were considered statistically significant.

Discussion

In this study, we investigated 419 patients without a history of cervical surgery or hysterectomy who presented with single HPV53 infection. The pathogenicity of single HPV53 infection was low: 82.3% of the initial histological results were negative (Neg) and 14.3% showed CIN1, VaIN1, or CIN1 + VaIN1 lesions. No progression to high-grade lesions was observed during the follow-up period of 3–34 months; it is therefore reasonable to conclude that lesions induced by single HPV53 infection progress slowly. Characteristics of HPV infections, including the HPV genotype and the duration of infection, determine a patient's risk of high-grade lesions [15,16]. Risk quantification of cervical lesions caused by different HPV genotypes is an important component of the evaluation of cervical lesions [17]. Further data and evidence are necessary to gain a deeper understanding of the pathogenicity of different HPV genotypes. Owing to regional cultural differences, considerable resources and expenditure are allocated to HPV53 screening in some areas, and a few reports suggest that HPV53 infection is relatively common [7,8,10,18], which may lead to panic in the population regarding this infection. A literature review yielded a few articles describing HPV53 infection and follow-up [19,20]; however, most of these studies included a small number of patients or did not compare histological results. Our study investigated the clinical outcomes of HPV53 infection and provides evidence to support the association between HPV53 infection and cervical lesions.
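The chi-square comparison of histological distributions across NILM, ASC-US, and LSIL cytology reported in this study could be sketched along the following lines; the counts below are placeholders, not the study data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: cytology category (NILM, ASC-US, LSIL)
# Columns: histology (Negative, CIN1/VaIN1, CIN2+); counts are hypothetical.
table = np.array([[200, 20, 3],
                  [ 80, 25, 5],
                  [ 40, 30, 6]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.4f}")
# P < 0.05 would indicate that the histological distributions differ
# across the cytology categories, as reported above.
```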
The long follow-up period of up to 34 months captured the developmental trend of HPV53 infection, which will help inform clinicians of appropriate treatment and follow-up strategies. Furthermore, unnecessary treatment can be avoided and patients' anxiety can be reduced. A large number of epidemiological studies have shown significant differences in HPV infection status and distribution across countries and regions worldwide. Del et al. [9] found that in Italy, HPV42 was the most prevalent type, followed by HPV16, 53, and 31, with a lower prevalence of the HPV11, 82, and 35 genotypes. Kantathavorn et al. [21] observed that HPV52, 16, and 51 were the most common high-risk HPV genotypes detected in Thai women and that HPV52 was common in Asians. HPV16, 18, 35, and 45 were the most prevalent genotypes in northeastern Brazil [22]. In China, the distribution of HPV types also differs geographically [13,23–25]. HPV53 infection has been reported in various places and is relatively common [8–10,18]. We investigated more than 1,000 patients with HPV53 infection treated at a single center between October 2015 and January 2021, which indicates that HPV53 infection is relatively common in the population. Moreover, a high percentage of single HPV infections has reportedly been associated with cervical lesions (66.7–91.5%) [26]. Depending on the methods used for HPV detection, the prevalence of multiple infections ranged from 1 to 52% [9,27,28]. Dickson et al. [29] observed that the HPV53 genotype was more likely to occur in multiple infections with other genotypes. Our results are consistent with these findings: among the 1,048 patients with HPV53 infection who underwent the initial investigation, in addition to the 424 (40.5%) patients with single HPV53 infection, we observed a significantly large percentage, 624 (59.5%), with multiple HPV infections. Many studies have reported that, in contrast to HPV16, the most common genotype causing invasive cervical carcinoma (55.2%) and high-grade lesions (45.1%) [2], the HPV53 genotype is more commonly associated with low-grade lesions [7,30]. Based on data provided by Padalko et al. [20], a sevenfold difference is observed between the frequencies of ASC-US/LSIL (82.4%) and HSIL+ (11.8%) in the cytological results of HPV53 infection. However, a meta-analysis showed that a different distribution of HPV genotypes may be detected in HIV-positive women with HSIL, who were significantly more likely to be infected with the HPV53 genotype [31]. Our study showed that 405 (96.7%) patients with single HPV53 infection had histological results of Neg, CIN1, VaIN1, or CIN1 + VaIN1, and only 14 (3.3%) showed CIN2, CIN3, CIN1 + VaIN2, or CIN2 + VaIN2. No patient had cancer, and no patient showed progression to high-grade lesions during follow-up. In our view, although the HPV53 genotype is designated as a "possibly high risk" type, the pathogenicity of single HPV53 infection is not so serious, and it is not a rapidly progressive infection. The main limitation of the study is its retrospective nature. Although all data were obtained from medical records, owing to the quality of the cytological specimens and the variety of HPV genotyping methods, samples and test results were obtained from multiple centers, and discrepancies in findings across centers may have affected our results.
Currently, no reference standard is available for HPV genotyping, and further research is warranted in this field. Notably, in this study, the cytological findings were classified based on the 2001 Bethesda classification system, and the histological results of colposcopy-guided biopsy were obtained from a single center. Therefore, in our opinion, our conclusions are reliable. As mentioned earlier, this was a single-center study, and multicenter studies are warranted to exclude the possible biases associated with single-center research.

Conclusions

Our study indicates that single HPV53 infection shows low pathogenicity and is not a rapidly progressive condition. Combined with the results of cytological screening, the screening interval might appropriately be extended for some patients with single HPV53 infection. Our findings help inform clinicians of appropriate treatment and follow-up strategies for patients with single HPV53 infection.
The Tax Revenue from Agriculture and Manufacturing Sectors in Lower Middle-Income Countries with Exchange Rate as a Moderating Variable

The middle-income trap pushes middle-income countries to boost their economic growth. As tax revenue has a causal relationship with economic growth, it is essential to study how to improve tax revenue. Considering the potential of the agriculture and manufacturing sectors in lower middle-income countries, particularly in the East Asia and Pacific region, one of which is Indonesia, this study aims to determine the effects of both sectors on tax revenue in these countries. This study uses the exchange rate as a moderating variable and foreign direct investment (FDI) as a control variable. The use of these two variables is the novelty of this study, since research using them has not previously been conducted; in addition, no former studies were found concerning the effects of the two sectors on tax revenue in lower middle-income countries. The research covers 2002 to 2019 and uses the panel data multiple linear regression method. Using a fixed effect model and a ridge regression model, the results indicate that, before moderation, agriculture has a negative effect and manufacturing has a positive effect on tax revenue. However, after moderation by the exchange rate, the interaction of agriculture and the exchange rate has a positive effect on tax revenue, while the interaction of manufacturing and the exchange rate has a negative effect. This study implies that, to optimize a country's tax revenue, the exchange rate condition needs to be considered in addition to optimizing agriculture or manufacturing.

INTRODUCTION

The term "middle-income trap" describes a phenomenon in which a country can remain stuck at middle-income status because the growth strategies that are effective for low-income countries differ from those needed to achieve high-income status. A very strong driving force is thus needed to shift from a middle-income to a high-income country. The transition to high-income status requires policy makers and society to focus on the fundamental determinants of growth itself (Eichengreen et al., 2017). The World Bank (2022) divides middle-income countries into two types: countries with a Gross National Income (GNI) per capita between USD 1,036 and USD 4,045, defined as lower middle-income economies, and countries with a GNI per capita between USD 4,046 and USD 12,535, referred to as upper middle-income economies. In 2020, there were 36 lower middle-income countries and 57 upper middle-income countries (OECD, 2021). In recent years, around 75 percent of the world's population and 62 percent of the world's poor have lived in middle-income countries (MICs). Furthermore, around one-third of global Gross Domestic Product (GDP) comes from MICs (World Bank, 2022). Clearly, high-income countries face conditions different from those of low-income countries. This can be seen from the proportion of tax revenue to GDP in low-income, lower middle-income, upper middle-income, and high-income countries, which is in line with the OECD (2022) statement that tax revenues in middle-income countries represent a smaller percentage of GDP than those in high-income countries.
[Chart 1. Tax revenue as a percentage of GDP in each type of country. Source: World Bank (2023b).]

From 2002 to 2019, however, tax revenue in lower middle-income countries (including Indonesia) trended upward (World Bank, 2023b). Arvin et al. (2021) found that, in low-income and lower middle-income countries, tax revenue and economic growth (measured by GDP per capita) have a causal relationship in both the short and the long term: high tax revenue is supported by high economic growth and, conversely, high economic growth supports high tax revenue. In most low-income and middle-income countries, the economy depends on agriculture and other primary production activities, so that the expansion of agricultural land has been increasing since 1990 (Barbier, 2020). Middle-income countries contribute around 40 percent of global agricultural production, with growth exceeding 3.5 percent annually (Economic Study Service, 2017). According to the World Bank (2023a), agriculture is a sector with the potential to eradicate extreme poverty and is projected to provide welfare for around 9.7 billion people in 2050. Raising the income of the poor through the agriculture sector is considered two to four times more effective than through other sectors. Agriculture contributes around 4 percent to global GDP and, in some developing countries, more than 25 percent (World Bank, 2023a). Furthermore, large agricultural output is one reason for a country to have large tax potential (Mawejje & Sebudde, 2019). Besides agriculture, the manufacturing sector in lower middle-income countries has also drawn attention. From 2002 to 2019, the average contribution of the manufacturing sector to GDP in lower middle-income countries was greater than that of the agricultural sector (World Bank, 2023b).

[Chart 3. The growth of the agriculture and manufacturing sectors' contributions to GDP in lower middle-income countries, 2002–2019. Source: World Bank (2023b).]

In addition, Eichengreen et al. (2017) revealed that the manufacturing sector in middle-income countries outperformed the service sector. This is supported by the finding of Mazhar & Rehman (2020) that an increase in the service sector reduces the growth rate of national income per capita in both low-income and middle-income countries, whereas the manufacturing sector acts as a growth escalator. A study of high-income Asian countries at the points of entering and leaving middle-income status found that the value-added contribution of the manufacturing sector to GDP increases income per capita in a sustainable manner (Huang et al., 2017). Despite the positive contribution and potential of agriculture to a country's economy, studies examining the effect of the agricultural sector on tax revenue conducted by Karagöz (2013), Castro & Camarillo (2014), Gaalya (2015), Alabede (2018), and Rodriguez (2018) indicated a significant negative effect. On the other hand, the research by Karagöz (2013), Castro & Camarillo (2014), Gaalya (2015), and Minh Ha et al. (2022) showed a significant positive effect of the manufacturing sector on tax revenue. However, these studies were not carried out specifically for lower middle-income countries. Furthermore, no study has been conducted using the exchange rate as a moderating variable and foreign direct investment (FDI) as a control variable. The exchange rate can also affect tax revenue.
Seade (1990), in his "Fiscal Policy of Open Developing Economies", stated that devaluation would increase tax revenue if a country's imports were dominated by necessities, whose demand is income-inelastic. However, devaluation will also tend to reduce tax revenue, or will even cause losses, if tax revenue depends predominantly on taxes on wages (Seade, 1990). In his study, Gaalya (2015) concluded that the exchange rate had a significant positive effect on tax revenue, whereas Ofori et al. (2018), Rutto (2020), and Tsaurai (2021) concluded the opposite, i.e., that the exchange rate had a significant negative effect on tax revenue. Like the exchange rate, FDI affects tax revenue, and a number of studies have examined this effect. Okey (2013), Castro & Camarillo (2014), Pratomo (2020), Tsaurai (2021), Camara (2022), Gaspareniene et al. (2022), and Minh Ha et al. (2022) concluded that FDI has a significant effect on tax revenue. Only Castro & Camarillo (2014) found a negative effect, while the others found a positive one; Gaspareniene et al. (2022), however, indicated a positive effect of outward FDI and a negative effect of inward FDI on tax revenue. In sum, the exchange rate and FDI affect tax revenue in addition to the agriculture and manufacturing sectors; nevertheless, no research has been conducted specifically on these effects in lower middle-income countries. Thus, this study examines the effect of the agriculture and manufacturing sectors on tax revenue in several lower middle-income countries over the last 18 years, with the exchange rate as a moderating variable and FDI as a control variable, employing the panel data multiple linear regression method. The years 2020 to 2022 were excluded from this study because the coronavirus disease 2019 (COVID-19) pandemic affected the consistency of the observed data. The results of this study describe strategies that lower middle-income countries can set to optimize their tax revenue, either from the agricultural sector or from the manufacturing sector, by considering the exchange rate.

METHOD

This is a quantitative study that analyzes the data using panel data multiple linear regression. A quantitative study uses statistical procedures or other methods of quantification in analyzing the relationships between study variables (Jaya, 2020). Multiple linear regression is used to determine the linear relationship between the dependent variable and more than one independent variable (Suyono, 2018). The data used are panel data, i.e., a combination of time series and cross-section data (Sihombing, 2022); here, the panel comprises numeric data on agriculture, manufacturing, the exchange rate, tax revenue, and FDI for several countries from 2002 to 2019. The secondary data were taken from the World Development Indicators of the World Bank (2023b). The observed countries were selected by stratified sampling, i.e., sample selection done by dividing the population into subgroups (Fink, 2005). First, the entire population, i.e., all countries in the world, is categorized into high-income, upper middle-income, lower middle-income, and low-income economies based on World Bank (2023b) data.
After the lower middle-income countries were selected, the data were filtered by country region, choosing the East Asia and Pacific region. Five countries were chosen for this study: Cambodia, Indonesia, the Philippines, Timor-Leste, and Vietnam. The dependent variable is the percentage of tax revenue to Gross Domestic Product (GDP), while the independent variables are the percentage of agriculture to GDP and the percentage of manufacturing to GDP; the dependent variable is the variable affected by the independent variables (Mukhid, 2021). Moreover, this study uses a moderating variable, the exchange rate, and a control variable, foreign direct investment (FDI). A moderating variable strengthens or weakens the relationship between the dependent and independent variables, while a control variable is set by the researcher so that variables outside the study do not affect that relationship (Duli, 2019). [The details of the variables used in this study, all sourced from World Bank (2023b), are given in the accompanying table.]

Before carrying out the regression analysis, classical assumption tests are performed for normality, heteroscedasticity, multicollinearity, and autocorrelation. These tests are carried out to obtain a regression model that produces reliable and unbiased estimates (Purnomo, 2017). In addition, the following model-selection tests are performed:

Chow (likelihood ratio) test — performed to choose between the common effect model and the fixed effect model, using an F test. H0: the common effect model is better than the fixed effect model (Prob>F > α); H1: the fixed effect model is better than the common effect model (Prob>F < α).

Lagrange Multiplier Breusch–Pagan test — performed to choose between the common effect model and the random effect model, using a chi-square LM test. H0: the common effect model is better than the random effect model (Prob>chibar2 > α); H1: the random effect model is better than the common effect model (Prob>chibar2 < α).

Hausman test — performed, using a chi-square test, to choose between the random effect model and the fixed effect model. H0: the random effect model is better than the fixed effect model (Prob>chi2 > α); H1: the fixed effect model is better than the random effect model (Prob>chi2 < α).

Source: Sihombing (2021) and Sihombing (2022).

In conducting the analysis, a ridge regression model is used. Ridge regression is a regression model indicated when the independent variables in the study are collinear, i.e., under multicollinearity (Chatterjee & Hadi, 2006). It makes a slight change to the standard regression model, estimating the coefficients from equations of the form

$(1 + k)\hat{\theta}_1 + r_{12}\hat{\theta}_2 + \cdots + r_{1p}\hat{\theta}_p = r_{1y}$,

where $k$ is the ridge constant, $r_{ij}$ is the correlation between the $i$th and $j$th independent variables, and $r_{iy}$ is the correlation between the $i$th independent variable and the dependent variable $Y$. The solution of this system, $\hat{\theta}_1, \ldots, \hat{\theta}_p$, gives the estimated ridge regression coefficients (Chatterjee & Hadi, 2006).

The hypotheses tested in this study are formulated as follows.

The effect of agriculture on tax revenue. Although Asai & Malgioglio (2019) stated that agriculture is more developed in countries with low-income residents, the literature consistently indicates that agriculture has a negative relationship with tax revenue. Agriculture is mostly carried out only for subsistence purposes (subsistence agriculture).
Moreover, agriculture is difficult to tax because it operates informally (Alabede, 2018) and tends toward small-scale production (Castro & Camarillo, 2014), so that only a small number of taxpayers pay income tax on agricultural income (Gaalya, 2015). Most agricultural products are also exempt from indirect taxes owing to their characteristics (Rodriguez, 2018). In addition, very high costs are required to verify income from the agricultural sector for taxation (Gaalya, 2015). Consequently, the first hypothesis in this study is:

H1: Agriculture has a negative effect on tax revenue.

The effect of manufacturing on tax revenue. The existing literature contains two different views on the effect of manufacturing on tax revenue, but most of it finds a positive effect. This is supported by Su & Yao (2016), who explain that the manufacturing sector is the main driver of economic growth in middle-income countries. The manufacturing sector allows businesses to generate substantial profit (Karagöz, 2013) in addition to the added value of its products (Gaalya, 2015). These conditions make the manufacturing sector easier to tax than the agricultural sector (Castro & Camarillo, 2014; Minh Ha et al., 2022). Therefore, the second hypothesis of this study is:

H2: Manufacturing has a positive effect on tax revenue.

The effect of the exchange rate on tax revenue. The exchange rate plays an important role in the economy because its changes can affect prices and costs in the foreign exchange market and thereby impact exports (Zhao, 2020). In general, the existing literature states that the exchange rate has a negative effect on tax revenue: an increase or decrease in the exchange rate will affect the quantity of goods exported, which ultimately has the opposite impact on tax revenue (Gaalya, 2015; Ofori et al., 2018; Rutto, 2020; Tsaurai, 2021). Therefore, the third hypothesis, regarding the moderating variable in this study, is:

H3: The exchange rate has a negative effect on tax revenue.

The effect of the interaction of agriculture and the exchange rate on tax revenue. By considering the effect of the exchange rate as a moderating variable, the interaction between agriculture and the exchange rate is expected to clarify the effect of the agricultural sector on tax revenue found before the moderation. Thus, the fourth hypothesis in this study is:

H4: The interaction of agriculture and the exchange rate has a positive effect on tax revenue.

The effect of the interaction of manufacturing and the exchange rate on tax revenue. Following the reasoning of the fourth hypothesis, the fifth is:

H5: The interaction of manufacturing and the exchange rate has a positive effect on tax revenue.

RESULTS AND DISCUSSION

Based on the data collected in this study, tax revenue, agriculture, and manufacturing have average values higher than their standard deviations, indicating that the distributions of the independent variables and the dependent variable are even. In contrast, the moderating variable (the exchange rate), the interaction variable between agriculture and the exchange rate, and the interaction variable between manufacturing and the exchange rate show standard deviations higher than their average values, indicating that the data distributions of the moderating and interaction variables are not even.
In addition, the distribution of each variable can also be assessed from the ranges between its average value and its minimum and maximum values. The independent variables and the dependent variable have balanced ranges between the average value and the minimum and maximum values; conversely, the moderating variable and the interaction variables have unequal ranges. These results indicate that the independent variables and the dependent variable used in this study show an adequate data distribution.

The classical assumption tests demonstrate that the model passes the heteroscedasticity and autocorrelation tests but fails the normality and multicollinearity tests. In other words, the variance of the data in the model is homogeneous (the model is free of heteroscedasticity) and the value of a variable at time t is not correlated with its value at time t−1 (the model is free of autocorrelation); however, the data are not normally distributed (the normality assumption is violated) and there is correlation between variables (multicollinearity is present). [For the autocorrelation test: Prob>chi2 = 0.0545 > α = 5%, so H0 is not rejected and the model is free of autocorrelation. Source: processed using the STATA application.]

Hence, to determine the best regression model, the Chow likelihood test, the Lagrange Multiplier Breusch–Pagan test, and the Hausman test are conducted. The results indicate that the best model is the fixed effect model, with details as follows: the fixed effect model shows an overall R-squared of 0.5293, which indicates that the independent variables, agriculture and manufacturing, explain 52.93% of the variation in the dependent variable, tax revenue; variables not included in the model account for the remaining 47.07%. The non-normal distribution of the data, found in the normality assumption test, is disregarded because the normality test can essentially be dispensed with for panel data (Kusumaningtyas et al., 2022). However, to overcome the multicollinearity in the data, a ridge regression model is used, in line with Suyono (2018), who stated that multicollinearity can be overcome by using a ridge regression model. The table below shows the ridge regression results. The model implies that: 1) an increase in agriculture by 1 percent of GDP reduces tax revenue by 0.4014813 percent of GDP, and vice versa; 2) an increase in manufacturing by 1 percent of GDP increases tax revenue by 0.0699612 percent of GDP, and vice versa; 3) an increase in the exchange rate by 1 LCU per USD decreases tax revenue by 0.0002728 percent of GDP, and vice versa; 4) an increase in the interaction between agriculture and the exchange rate by 1 unit increases tax revenue by 0.0000353 percent of GDP, and vice versa; and 5) an increase in the interaction between manufacturing and the exchange rate by 1 unit decreases tax revenue by 0.00000587 percent of GDP, and vice versa.

The P>|t| and Prob>F values listed in the table have been adjusted by dividing the P>|t| and Prob>F values from the STATA application by two. The study conducts one-tailed statistical tests because each hypothesis is directional: as Kasim (2008) states, a one-tailed test applies when the alternative hypothesis holds that one quantity is higher or lower than another, and the p-value of a one-tailed test is half the p-value of the corresponding two-tailed test. One-tailed estimation thus shows not only whether the independent variables affect the dependent variable, but also whether the effect is positive or negative.
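For reference, the model-selection step ending in the fixed effect model could be sketched as follows; the original work used STATA, so this Python version based on the linearmodels package, with placeholder file and variable names, is only an illustration:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical panel indexed by (country, year); column names are placeholders.
df = pd.read_csv("panel.csv").set_index(["country", "year"])

fe = PanelOLS.from_formula("tax ~ 1 + agri + manu + fdi + EntityEffects",
                           data=df).fit()
re = RandomEffects.from_formula("tax ~ 1 + agri + manu + fdi",
                                data=df).fit()

# Hausman test on the common slope coefficients:
# H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE), compared with chi2(k).
slopes = ["agri", "manu", "fdi"]
b = (fe.params[slopes] - re.params[slopes]).to_numpy()
V = (fe.cov.loc[slopes, slopes] - re.cov.loc[slopes, slopes]).to_numpy()
H = float(b @ np.linalg.inv(V) @ b)
print(f"Hausman statistic = {H:.3f} (k = {len(slopes)} degrees of freedom)")
```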
In summary, the t-test results before moderation show that all independent variables have a significant effect on tax revenue: agriculture has a negative effect, while manufacturing has a positive effect. After moderation with the exchange rate variable, however, the interaction between agriculture and the exchange rate has a positive effect on tax revenue, while the interaction between manufacturing and the exchange rate has a negative effect. Furthermore, the F-test results show that all independent variables simultaneously have a significant effect on the dependent variable, tax revenue.

The effect of agriculture on tax revenue. The first hypothesis test indicates that, before moderation by the exchange rate variable, the agricultural sector has a negative effect on tax revenue. This is similar to the findings of Alabede (2018), Castro & Camarillo (2014), Gaalya (2015), and Rodriguez (2018). The agricultural sector in lower middle-income countries is generally carried out only for livelihood purposes (subsistence agriculture) (Alabede, 2018). The sector is difficult to tax because it generally operates informally (Alabede, 2018) and tends to be small-scale, resulting in a small number of taxpayers paying income tax on the agricultural sector (Castro & Camarillo, 2014; Gaalya, 2015). Moreover, most agricultural products are exempt from value added tax owing to their characteristics (Rodriguez, 2018). As a result, the cost of verifying agricultural sector income for taxation is very high (Gaalya, 2015). Despite the fact that the agricultural sector is emblematic of low-income countries, owing to its major contribution to their economies (Asai & Malgioglio, 2019; Economic Study Service, 2017), the potential problems that can arise from limited access to technology, high investment costs, and a lack of skills, knowledge, and a supportive environment (McCampbell, 2022), together with the conditions mentioned above, lead to the conclusion that the agricultural sector has a negative effect on tax revenue.

The effect of manufacturing on tax revenue. The second hypothesis test shows that, before moderation by the exchange rate variable, the manufacturing sector has a positive effect on tax revenue. This is in line with the studies by Castro & Camarillo (2014), Karagöz (2013), Gaalya (2015), and Minh Ha et al. (2022). In contrast to the agricultural sector, the manufacturing sector is one of the sectors that is easier to tax. Manufacturing allows businesses to generate substantial profit in addition to adding value to the products they produce. Even though this effect depends on the condition of the manufacturing sector in each country (Hamdan & Rana, 2021), manufacturing serves as the basic capital for a country to create added value from raw materials, which are processed into finished products so that profit generation is made possible.
Therefore, manufacturing can boost economic growth both through the sector itself and through its contribution to driving other sectors (Huang et al., 2017; Mital et al., 2014; Okeyo, 2022; Su & Yao, 2016; Wicaksono et al., 2023).

The effect of the exchange rate on tax revenue
The third hypothesis testing results show that the exchange rate has no effect on tax revenue. This is not in line with the studies conducted by Gaalya (2015), Rutto (2020), Tsaurai (2021), and Ofori et al. (2018), who stated that the exchange rate has a negative effect on tax revenue. Although the exchange rate is held to have an impact on the economy (Gagnon & Hinterschweiger, 2011; Zhao, 2020), the study results indicate that this role does not translate into an effect on tax revenue. However, when the exchange rate moderates the effect of the agricultural sector and the manufacturing sector on tax revenue, there is a change in the direction of the effect of the two sectors.

The effect of the interaction between agriculture and the exchange rate on tax revenue
The fourth hypothesis testing results indicate that the interaction between the exchange rate and the agricultural sector has a positive effect on tax revenue. This is the inverse of the direction of the effect before moderation. This condition can occur because, as Seade (1990) states, if a country's imports consist of income-inelastic goods, such as basic agricultural goods, imports of these goods do not change when a devaluation occurs. Consequently, when a devaluation strikes, the increase in the price of imported goods boosts import duty revenue, which in turn can enhance tax revenue. This is the condition underlying the change in the direction of the agricultural sector's effect on tax revenue from negative to positive.

The effect of the interaction between manufacturing and the exchange rate on tax revenue
The fifth hypothesis testing results show that the interaction between the exchange rate and the manufacturing sector has a negative effect on tax revenue. This is also the inverse of the direction of the effect before moderation. Exchange rate volatility creates uncertainty and risk-avoidance costs for business actors (Ofori et al., 2018). In this case, overvaluation causes the price of goods to be more expensive. Despite high manufacturing production, high raw material prices due to overvaluation lower profits. As a result, tax revenues will also be low. Thus, the interaction between the manufacturing sector and the exchange rate changes the direction of the effect observed before moderation, i.e., from positive to negative.
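A worked reading of the moderated coefficients may make these sign reversals explicit; the derivation below uses the ridge coefficients reported above together with standard marginal-effect algebra, so the turning points are implied values rather than figures reported by the study. In a model of the form tax = β1·agri + β2·manu + β3·fx + β4·(agri×fx) + β5·(manu×fx) + ..., the marginal effect of agriculture is ∂tax/∂agri = β1 + β4·fx = -0.4014813 + 0.0000353·fx, which turns positive once fx exceeds 0.4014813/0.0000353 ≈ 11,373 LCU per USD. Likewise, the marginal effect of manufacturing is ∂tax/∂manu = β2 + β5·fx = 0.0699612 - 0.00000587·fx, which turns negative once fx exceeds 0.0699612/0.00000587 ≈ 11,918 LCU per USD. At sufficiently high exchange rates, therefore, the moderated agricultural effect becomes positive and the moderated manufacturing effect becomes negative, which is exactly the pattern the hypothesis tests describe.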
2023-09-08T15:18:10.582Z
2023-07-31T00:00:00.000
{ "year": 2023, "sha1": "72c6deb6dcbd55aa84ffbb0808072d58d73e6906", "oa_license": "CCBY", "oa_url": "https://doi.org/10.52728/ijtc.v4i3.798", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a3385530d6cd8fbbb87df052536983f06e69b61d", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
234422274
pes2o/s2orc
v3-fos-license
Effects of Admixtures on the Self Compacting Concrete: State of the Art Report
Many properties influence the strength parameters of concrete; among them, the most important is proper compaction. Complete compaction is not possible in conventional concrete using hand compaction or machine vibration in structural elements like long columns and beam-column joints. In these cases, self-compacting concrete (SCC) plays a vital role. It achieves full compaction without any external effort: this type of concrete compacts under its own weight. Another important advantage of SCC is the absence of segregation. Hence, SCC usage has increased enormously in the last few decades. Alongside its many merits, SCC also has a few demerits. One important demerit is that preparing SCC requires a higher cement content compared to conventional concrete; this may increase CO2 emissions and also leads to a higher heat of hydration, which may cause shrinkage cracks if adequate curing is not provided. This study is an attempt to review the influence of different types of mineral admixtures and chemical admixtures used in self compacting concrete, along with a brief account of the change in mechanical behaviour with respect to the influence of these admixtures.

Introduction
Concrete is one of the most extensively used construction materials because of its durability and cost effectiveness; however, concrete structures also have some demerits, such as weak tensile capacity, high CO2 emissions, and the fact that optimum strength will not be obtained if the concrete is not fully compacted and properly cured. To overcome these issues, a lot of research is taking place to enhance the special properties of concrete. One such line of research ended up with a new innovation called SCC, in which full compaction is achieved without any external effort. SCC has high workability, which makes it pumpable, and SCC has high resistance against segregation. Chemical admixtures such as superplasticizer and viscosity modifying agent enhance the workability of concrete by minimizing segregation and bleeding in fresh concrete. The chemical admixtures also improve the pumpability of the freshly prepared concrete and thereby reduce the water-to-cement ratio, which results in highly impermeable concrete and paves the way for overall resistance [1]. Mineral admixtures like fly ash and silica fume improve the compressive strength and flexural strength [2]. The yield stress of a concrete is elevated with silica fume up to 16%, whereas the plastic viscosity decreases at the start (0%-4%) and then increases with increasing silica fume content up to 16% [3]. The mechanical properties of a concrete can also be increased through the combined effects of silica fume and recycled steel fiber [4]. The addition of silica fume improves the strength, porosity and bound water of the concrete specimen [5]. In SCC, the combined effect of fly ash and silica fume gives a greater reduction in water absorption compared to SCC prepared with fly ash alone [6]. Rice husk ash as an alternative to cement also improves the mechanical behaviour of hardened concrete up to 15% replacement [7]. Very fine marble powder produces very good cohesiveness of concrete, and limestone powder enhances the mechanical as well as the durability parameters of concrete by exhibiting a good compact network [8]. Concrete mixtures with more silica fume have a smaller slump flow diameter [9].
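Since the studies reviewed below specify their mixes through a water/binder ratio and a replacement level by weight of binder, a small worked sketch of that arithmetic may be useful; the function and all figures below are hypothetical and are not taken from any study cited in this review.

```python
# Hypothetical mix-proportioning arithmetic: given a total binder content,
# a mineral-admixture replacement level (by weight of binder), and a
# water/binder ratio, compute batch masses. Numbers are illustrative only.
def scc_binder_split(total_binder_kg: float, replacement_pct: float,
                     w_b_ratio: float) -> dict:
    admixture = total_binder_kg * replacement_pct / 100.0  # e.g. silica fume
    cement = total_binder_kg - admixture                   # remaining OPC
    water = total_binder_kg * w_b_ratio                    # mixing water
    return {"cement_kg": cement, "admixture_kg": admixture, "water_kg": water}

# Example: 500 kg/m^3 binder, 15% rice husk ash, w/b = 0.35 (all hypothetical)
print(scc_binder_split(500, 15, 0.35))
# -> {'cement_kg': 425.0, 'admixture_kg': 75.0, 'water_kg': 175.0}
```

Replacement "by weight of binder" keeps the total binder content fixed, so only the cement/admixture split and, through the w/b ratio, the water demand change.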
SCC mixes developed with bottom ash exhibit minimal resistance to chloride permeability, abrasion, water absorption and sorptivity at an earlier age of concrete; however, these properties decrease with increasing age, and a 30% rise in fly ash improved the flow properties [10]. The objective of this paper is to reveal the effect of various chemical and mineral admixtures on the performance of fresh and hardened concrete. Chemical admixtures play a vital role in the production process of SCC. The commonly used chemical admixtures are polycarboxyl ether, synthetic copolymer and polyalcohol, technically known as superplasticizer, viscosity modifying agent and anti-foaming admixture, respectively. These chemical admixtures increase the workability of concrete and also significantly affect the strength parameters. The classification of these admixtures, their physical and chemical parameters and their major effects on cement paste have been analysed in several studies [11,12]. These chemical admixtures increase pumpability, workability, resistance against segregation and bleeding, and the level of compaction. They also reduce the required water content and air voids. This makes the concrete more impermeable, denser and more durable [11]. In the preparation of SCC, a higher cement content is needed, which may cause a high heat of hydration; this may result in shrinkage cracks and high CO2 emissions. However, using mineral admixtures such as fly ash, GGBS, silica fume and metakaolin may reduce these effects and also achieve a more durable and denser concrete. The incorporation of metakaolin reportedly reduces the penetration of chloride ions in SCC specimens. When metakaolin is included as an alternative to cement content at different percentages, the mechanical properties, such as the compressive strength and the tensile strength of the concrete specimens, were noted to improve well at lower w/b ratios. Metakaolin tends to perform better in terms of splitting tensile strength. The absorption capacity of the specimens was observed to decrease with the addition of metakaolin. SCC has also been developed with metakaolin without including any viscosity modifying agent [12,13].

2. Literature analysis
Erhan et al., 2015 [14], in their experimental study, detailed the fresh as well as the rheological behavior of SCC blended with nano silica and fly ash. Four different mixes were formulated with a w/c ratio of 0.3, replacing Portland cement with nano silica at 0%, 2%, 4% and 6% by weight of binder. Fly ash was incorporated in the SCC containing 2%, 4% and 6% nano silica at 25%, 50% and 75%, respectively, by weight of total binder. The incorporation of nano silica in SCC was noted to enhance the slump flow and the V-funnel flow at 25%, 50% and 75% as well, whereas for the L-box test, the L-box height ratio tended to increase with the fly ash content. The rheological tests for the same mixes exposed shear thickening of the self-compacting concrete with the addition of nano silica, whereas the addition of fly ash decreased it. The fly ash content in SCC was also noted to decrease the mechanical properties of the SCC developed in this investigation.
Benaicha et al., 2015 [15] experimentally studied the effects of silica fume and viscosity modifying agents on the rheological and mechanical behaviour of SCC.
In order to ascertain the rheological parameters, the slump test, L-box test, segregation test and V-funnel test were conducted. For the mechanical properties, the compressive strength, tensile strength and modulus of elasticity were analysed. This study revealed an improvement in the plastic viscosity and the yield stress under a regulated condition of constant water/cement ratio along with the superplasticizer. In terms of the mechanical properties, the SCC obtained with silica fume was noted to show better results than the SCC with VMA. Hence the paper concludes that the choice between the two can be guided by the easy availability of the materials.
Divya et al., 2015 [7] experimentally investigated the fresh concrete, mechanical and durability properties of SCC developed by replacing the binder with 0%, 5%, 10%, 15% and 20% of rice husk ash (RHA). A rise of 33% was noted in the compressive strength, whereas the split tensile strength was noted to improve up to 15% replacement with RHA. A notable reduction was spotted in chloride ion permeability. Adequate impermeability was observed for the SCC developed with 15% RHA replacement. The porosity of the specimens reduced considerably with longer curing time, owing to the higher rate of hydration with longer concrete age. Hence a much denser concrete was obtained at 15% replacement with RHA, which in turn developed the maximum compressive strength among all the specimens cast.
Rahmat et al., 2015 [16] detailed the influence of nano silica and carbon nano fibers in self compacting concrete. A combined characterization of both fresh and hardened concrete was conducted in this study. The SCC thus developed was continuously monitored for the first 24 hours to clearly observe its accurate characterization. The carbon nano fiber in the cement paste elevated the flexural strength by 30%, which also caused surface cracking. However, maximum shrinkage was observed for an ultrasonic pulse velocity value between 1500-2000 m/s. The nano silica in SCC notably improved the compressive strength and also improved the early-age cracking, thereby affecting the durability of the SCC developed.
Mehmet et al., 2012 [8] included industrial waste materials such as marble powder, limestone powder and fly ash. In this experimental analysis, industrial waste materials were included in SCC, replacing the total binding material at 5%, 10% and 20% by weight. Though the use of marble powder and limestone powder tended to increase the water content, the addition of fly ash neutralized this, in turn achieving a target slump flow. Thus, the fillers elevated the percentage of superplasticizer in the concrete and also increased the initial and final setting times of the SCC. The 28-day compressive strength of the ternary mixes was comparatively lower than that of the binary mixtures, and a much similar pattern was noted for the split tensile strength of the SCC. Both binary and ternary mixtures developed lower chloride ion penetration.
Mostafa et al., 2015 [17] studied the influence of class F fly ash, nano silica and silica fume on high-performance SCC. In this analysis, adequate fractions of these admixtures were substituted for the total cement content in SCC. The rheological properties, thermal properties, transport properties and mechanical properties were detailed in this study.
The fly ash content was noted to improve the rheological parameters, whereas the silica nano particles and silica fume improved the transport as well as the mechanical properties. A larger portion of mineral admixtures blended with a minimum fraction of the nano powders is a promising combination for a high-performance self-compacting concrete.
Rahmat et al., 2012 [11] predicted the fresh and hardened properties of SCC incorporating metakaolin. A total of 15 mixes with MK contents of 0%, 5%, 10%, 15% and 20% were developed by varying the w/c ratio as 0.32, 0.38 and 0.45. With no VMA, the various mixes showed good workability and rheological properties. The compressive strength of the concrete specimens was noted to increase by up to 27% upon 14 days of curing. The compressive strength was also predicted by multiple regression analysis in terms of ultrasonic pulse velocity. The tensile strength gained up to 11% over the control specimen. Lower absorption was recorded in the SCC specimens, with good electrical resistivity as well. Altogether, an SCC mix with 10% metakaolin replacement is an ideal proportion for developing economically efficient concrete with better fresh and hardened concrete properties.
Beata et al., 2013 [12] experimentally investigated the effects of chemical admixtures on the hydration of cement and mixture properties in SCC with very high performance. With a few admixtures, micro cracking developed, whereas with the analyzed mixtures the C-S-H gel showed improvement. The workability loss is visible depending on the type of admixture chosen, but it does not affect the air content of very high-performance self-compacting concrete. The HRWR-admixture-based cement pastes also developed bubble bridges that outperformed those of the other admixtures.
Rafat et al., 2013 [18] investigated the properties of SCC developed with coal bottom ash. Fine aggregate was replaced with 10%, 20% and 30% coal bottom ash to investigate the fresh concrete properties, such as the slump flow, U-funnel test, L-box test and J-ring test, along with the hardened concrete parameters, such as abrasion resistance, compressive strength, chloride permeability and sorptivity. Betterment in compressive strength was observed at 28 days, whereas the chloride permeability resistance decreased at 90 days and 365 days. A correlation between the abrasion resistance and the compressive strength was developed such that both are directly proportional, since the abrasion resistance highly influenced the compressive strength of the concrete.
Ha Thanh et al., 2016 [19] presented the influence of the superplasticizer and mineral admixtures (rice husk ash, silica fume and fly ash) on self-compacting high-performance concrete in terms of the compressive strength of the mortar specimens. Except for the rice husk ash (RHA), both the fly ash and the silica fume notably decreased the filling and passing ability, along with a rise in resistance to segregation and in plastic viscosity. Bleeding in concrete was observed to be mitigated by the addition of the rice husk ash. Thus, the macro-mesoporous nature of the rice husk ash allows it to be used as a viscosity modifying agent to improve the robustness of high-performance self-compacting concrete (HPSCC) at higher superplasticizer dosages. The coarser particles in RHA, with their large specific surface area, stimulated the attraction of intermolecular forces and improved the compressive strength of the HPSCC.
Therefore, a maximum compressive strength was visualized for SCC with 20% fly ash and 20% RHA.
Ali Sadramomtazi et al., 2016 [20] investigated the rheological, mechanical and durability characteristics of SCC developed with polyethylene terephthalate (PET) particles combined with pozzolanic materials (fly ash and silica fume). These PET particles in SCC diminished the mechanical properties, such as compressive strength, tensile strength and flexural strength. To counterbalance this, fly ash and silica fume were introduced into the mix. The lower specific gravity of the PET aggregates also reduced the density of the concrete, owing to the weak bonding between the PET aggregates and the cement paste. These parameters also reduced the ultrasonic pulse velocity values due to the larger number of pores in the mix.
Wongkro et al., 2014 [21] studied the effects on the compressive strength and chloride resistance of SCC developed with high volume fly ash (HVFA) and silica fume in binary blended cement and ternary blended cement. Both HVFA and silica fume were substituted for cement at various proportions each, and the tests were conducted. The binary blended cement with HVFA was noted to reduce the compressive strength. In contrast to the binary blended cement, the ternary blended cement improved the compressive strength. Nevertheless, both fly ash and silica fume were noted to increase the chloride resistivity at higher levels.
Navid et al., 2016 [22] analysed the influence of an industrial waste (palm oil fuel ash) from the power plants of the palm oil industry on SCC, owing to its high pozzolanic characteristics and its abundance as an industrial waste. This industrial waste was substituted for OPC at 10%, 15% and 20% to investigate the mechanical and durability properties of the SCC thus developed. Both the mechanical and the durability properties were noted to improve due to the reduced amount of portlandite in the system, leading to the production of C-S-H gel and thereby densification of the matrix. This phenomenon also mitigated the open pores by blocking the networks.
Badogiannis et al., 2015 [23] studied the effects of metakaolin on the durability properties of self-compacting concrete. Portland composite cement and limestone powder of high fineness were replaced with high-purity metakaolin. Four mixtures were developed to find the blending nature of metakaolin with cement and thereby to study the enhancement of the packing density, and another four mixes replacing limestone powder with metakaolin at different percentages were obtained to procure the same results. These two categories were compared with one control mix, thus developing a total of nine mixes in this study. Crushed calcareous limestone aggregate and a polycarboxylic-based superplasticizer were used to produce SCC at a w/c ratio of 0.6. Various durability properties, such as open porosity, sorptivity, near-surface water penetrability, gas permeability and chloride penetrability, were studied. Based on the study, the following conclusions were made: slight improvement was found in the open porosity percentage for both the replacement of cement with metakaolin and the replacement of limestone powder with metakaolin.
The highest improvement was found at 14% replacement of cement with metakaolin, whereas in the limestone powder replacement the results reduced as the metakaolin content increased. For sorptivity, at higher-order replacement of cement and of limestone powder with metakaolin, the sorptivity value reduced; the improvement in results was between 6% and 41% for cement replaced by metakaolin, and between 22% and 53% for limestone powder replaced by metakaolin. In the water permeability test, as the metakaolin replacement level for cement increased, the water permeability decreased; for the replacement of limestone powder with metakaolin, as the metakaolin percentage increased, the water permeability reduced to some extent, but further increases started to raise the permeability value. Regarding gas permeability, slight improvement was found for both the replacement of cement with metakaolin and the replacement of limestone powder with metakaolin, and a maximum reduction of 34% was obtained at the higher-order replacement level of cement with metakaolin. For the chloride penetrability test, it was found that at higher-order replacement of cement and of limestone powder with metakaolin, the resistance to chloride penetrability increased, with the best results obtained for higher-order replacement of limestone powder with metakaolin.

Conclusions
Based on the analysis of the literature, it is clearly evident that both chemical and mineral admixtures play a key role in the betterment of self-compacting concrete in the following respects.
• When silica fume is included in SCC, it improves the compressive strength of the concrete, and the workability is also noted to improve when silica fume is included in SCC along with fly ash.
• Fly ash in SCC decreased the compressive strength of the concrete, whereas fly ash along with silica fume and PET particles improved the compressive strength to a greater extent.
• The nano silica and nano fly ash, in contrast to the fly ash and silica fume, highly improved the compressive strength of SCC by densifying the pore structure. But the nano silica and nano fly ash reduced the workability owing to the finer nature of these mineral admixtures.
• The incorporation of admixtures such as marble powder and limestone powder increased the water content in the concrete mix, thus affecting the rheological properties of the SCC.
• The addition of a mineral admixture like rice husk ash improved the mechanical properties of the SCC up to 15% replacement of ordinary Portland cement. However, the same exhibited low chloride ion resistance and developed into a highly flowable matrix.
• The carbon nano fibers in SCC developed surface cracks along with improved mechanical properties in the concrete.
• Metakaolin in SCC considerably improved the mechanical properties up to a certain percentage of replacement and also decreased the absorption of the concrete specimens. It also highly improved the durability of SCC when included along with limestone powder.
• Palm oil fuel ash, a natural mineral admixture, highly improved the mechanical properties as well as the durability properties of the SCC when included at an adequate level.
• Coal bottom ash, an industrial by-product, elevated the compressive strength of the SCC specimens. But it tends to decrease the chloride ion resistance in the SCC.
• Chemical admixtures like HRWR admixtures behaved better with the SCC, improving the C-S-H gel more than the mineral admixtures in SCC did.
2020-12-31T09:06:32.432Z
2020-12-25T00:00:00.000
{ "year": 2020, "sha1": "923fff6c091bf4e0eb35cbe547113c29efe06dec", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/1006/1/012038", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6308c70eb90442bb12a4db263982babc32e2992c", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Materials Science" ] }
271565266
pes2o/s2orc
v3-fos-license
Completeness of variables in Hospital-Based Cancer Registries for prostatic malignant neoplasm
ABSTRACT Objectives: to analyze the completeness of variables from Hospital-Based Cancer Registries of cases of prostate neoplasm in the Oncology Care Network of a Brazilian state between 2000 and 2020. Methods: an ecological time series study, based on secondary data on prostate cancer from Hospital-Based Cancer Registries. Data incompleteness was classified as excellent (<5%), good (5%-10%), fair (10%-20%), poor (20%-50%) and very poor (>50%), according to the percentage of missing information. Results: there were 13,519 cases of prostate cancer in the Hospital-Based Cancer Registries analyzed. The variables "family history of cancer" (p<0.001), "alcoholism" (p<0.001), "smoking" (p<0.001) and "TNM staging" (p<0.001) had a decreasing trend, while "clinic at the start of treatment" (p<0.001), "origin" (p=0.008) and "occupation" (p<0.001) indicated an increasing trend. Conclusions: most Hospital-Based Cancer Registries variables showed excellent completeness, but important variables had high percentages of incompleteness, such as TNM and clinical staging, in addition to alcoholism and smoking.

INTRODUCTION
Cancer is a term that covers more than a hundred malignant diseases that have in common uncontrolled cell growth, which can invade adjacent tissues or distant organs (1), claiming the lives of around 9.3 million people annually (2,3). Specifically, prostate cancer is one of the most common cancers in the world, being one of the main causes of premature death in men (3,4). In Brazil, the Brazilian National Cancer Institute (INCA - Instituto Nacional do Câncer) estimates that, for each year of the 2023-2025 triennium, there will be almost 72 thousand new cases of the disease, with an estimated risk of 67.86 new cases and a mortality rate of 13.7 deaths for every 100,000 men (1). In the state of Espírito Santo, prostate cancer is the most common, representing 84.36 new cases for every 100,000 men, according to the latest INCA estimate (1).
Risk factors are well established and include advanced age, ethnicity, genetic factors, family history of cancer and hormonal factors (1,3,5,6), in addition to environmental factors, such as exposure to pesticides, which are still under investigation (7-9). Although there is still little robust evidence for prostate cancer prevention (5), it is possible to reduce the risk by reducing fatty foods, increasing the intake of vegetables and fruits and including physical activity in daily routines (1,5,10).
Hospital-Based Cancer Registries (HBCR) are systematic sources of information, installed in general hospitals or hospitals specialized in oncology, with the aim of collecting data regarding the diagnosis, treatment and evolution of the cases attended in these institutions (11). HBCR assist in collecting and processing information about cancer patients, from the consultation of medical records up to the analysis and dissemination of the resulting databases, and therefore make a great contribution to Epidemiological Surveillance (12). The information produced makes it possible to analyze the performance and quality of each institution in providing care to cancer patients, as well as contributing to prognostic and survival studies (13). They also contribute to individual patient care, as they ensure the follow-up of these patients (14,15).
A recent study by our group on the HBCR of a single High Complexity Oncology Care Center (CACON - Centro de Assistência de Alta Complexidade em Oncologia) in the state of Espírito Santo showed that most of the variables relating to prostate cancer cases in the 2000-2016 time series had excellent levels of completeness, but several clinical variables, important for a better understanding of the health-disease process, presented a high number of missing data, highlighting the need for higher quality data (16). However, an analysis of a more recent time series, that is, up to 2020, encompassing the entire Espírito Santo Oncology Care Network, composed of one CACON and seven High Complexity Oncology Care Units (UNACON - Unidades de Assistência de Alta Complexidade em Oncologia), has not yet been carried out; such an analysis is needed to direct Cancer Surveillance actions in the Espírito Santo territory regarding HBCR monitoring and the assessment of hospitals in the State Oncological Care Network.

OBJECTIVES
To analyze the completeness of the HBCR variables of cases of prostate neoplasms in the Oncology Care Network of a Brazilian state between 2000 and 2020.

Ethical aspects
The study was approved by the Universidade Federal do Espírito Santo Health Sciences Center Research Ethics Committee (CEP-CCS-UFES). Patient consent was waived, as this was retrospective research based on secondary data. Moreover, consent and authorization were obtained from the State Department of Health of Espírito Santo (SESA/ES), based in the capital, Vitória, to collect secondary data and access restricted data for this research.

Study design, period and place
This is an ecological time series study conducted according to the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) recommendations. The study was conducted using secondary data from the HBCR prostate cancer database in the state of Espírito Santo between 2000 and 2020. The secondary data were obtained from SESA/ES Cancer Surveillance and consolidated by INCA.
The Espírito Santo Oncology Care Network covers three health regions: the Metropolitan Region, the South Region and the North/Midwest Region (15). This Oncology Care Network is made up of one CACON, represented by Hospital Santa Rita de Cássia, located in the capital, Vitória, as well as the seven UNACON authorized by the Ministry of Health (MoH): Hospital Evangélico de Cachoeiro de Itapemirim, located in the municipality of Cachoeiro de Itapemirim; Hospital Evangélico de Vila Velha, located in the city of Vila Velha; Hospital Universitário Antônio Cassiano de Moraes, Hospital Santa Casa de Misericórdia de Vitória and Hospital Estadual Infantil Nossa Senhora da Glória, located in the capital, Vitória; Hospital São José, located in Colatina; and Hospital Rio Doce, in the north of the state, located in Linhares. All oncology hospital units in the state have HBCR structured and in operation, with their databases sent annually to the Brazilian Cancer Hospital Registry Integrating System (SisRHC - Sistema Integrador do Registro Hospitalar de Câncer Brasileiro) (17,18). We emphasize that the HBCR of the Hospital Estadual Infantil Nossa Senhora da Glória do not present data regarding diagnoses of prostate cancer.
Data were collected between February and June 2023 from SESA/ES. We chose the period from 2000 to 2020 because it is a more recent period and because, at the time of data collection, all the hospitals that make up the Oncology Care Network in the state of Espírito Santo had already sent the records of the historical series we proposed to analyze from their respective HBCR, which were processed and consolidated by the Espírito Santo Epidemiological Surveillance.

Population, inclusion and exclusion criteria
A total of 13,519 observations (registrations of patients diagnosed with prostate cancer) were extracted from the HBCR database in the state of Espírito Santo via SESA/ES for the historical series studied, i.e., from 2000 to 2020, including all cases registered as analytical (whose planning and treatment are carried out in the hospital where registration took place) and non-analytical (mainly those who arrive at the hospital already treated or who do not carry out the recommended treatment) (11).

Study protocol
The epidemiological variables contained in the SisRHC tumor registry (11) and analyzed in the present study were: (1)
The HBCR tumor registry form is used to gather information from medical records, provide a case summary and serve as a data entry document for entering information into the SisRHC computerized databases (11). The content of this form is defined based on the information needs of hospitals with a hospital cancer registry and follows the standardization guidelines recommended by the International Agency for Research on Cancer, validated by consensus in meetings coordinated by INCA (11).
The definition of quality dimensions proposed by Lima et al. (2009) (19) was used, in which completeness is translated as the proportion of fields filled with non-zero values. Furthermore, as a reference for the analysis of completeness, we adopted the classification proposed by Romero and Cunha (2006) (20). The percentage of missing data was classified as 1 - excellent (<5%), 2 - good (5-10%), 3 - fair (10-20%), 4 - poor (20-50%), or 5 - very poor (≥50%). Thus, the term "completeness" refers to the degree of completion of the analyzed field, measured by the proportion of reports with the field filled in with a category other than those indicating absence of data. A field filled in the database with the category "ignored", the numeral zero, an unknown date or a term indicating absence of data was considered incomplete in this study.

Analysis of results, and statistics
For the statistical analyses, the free software RStudio (version 2023.03.1) and R (version 4.2.2) were used. The description of completeness was presented by the observed relative frequency and the respective completeness scores. The Friedman test (21) was used to compare score classifications between years, whereas the Mann-Kendall test (22,23) assessed whether there was a statistically significant temporal trend across the years assessed. A statistical significance level of 0.05 was adopted.

RESULTS
During the study period, a total of 13,519 cases of prostate cancer recovered from HBCR in the state of Espírito Santo were recorded, as can be seen in Figure 1. The variable "sex" was the only sociodemographic variable to present 100% completeness, followed by the variable "age", which presented 0.26% of missing data in 2016, and "origin", which had incompleteness ranging from 0.20% to 2.34% between 2012 and 2019; these were therefore classified as excellent throughout the period studied.
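The scoring rule and the trend test described above lend themselves to a compact reconstruction. The following is a minimal sketch, not the authors' R code, of the Romero and Cunha completeness classification, a plain Mann-Kendall test (normal approximation, ignoring tie corrections), and a Friedman comparison across years; all yearly percentages and scores below are hypothetical.

```python
# Illustrative sketch only; data values and variable names are hypothetical.
import numpy as np
from scipy import stats

def completeness_score(pct_missing: float) -> str:
    """Classify incompleteness per Romero and Cunha (2006)."""
    if pct_missing < 5:  return "excellent"
    if pct_missing < 10: return "good"
    if pct_missing < 20: return "fair"
    if pct_missing < 50: return "poor"
    return "very poor"

def mann_kendall(series):
    """Mann-Kendall trend test (normal approximation, no tie correction)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * stats.norm.sf(abs(z))  # two-sided p-value
    return s, z, p

# Hypothetical yearly % of missing data for one variable, 2000-2020
missing = [62, 60, 58, 55, 57, 50, 48, 45, 44, 40, 38,
           35, 33, 30, 28, 27, 25, 22, 20, 18, 15]
print([completeness_score(m) for m in missing[:3]])
print(mann_kendall(missing))  # negative S and z -> decreasing incompleteness

# Friedman test across years: each array holds the scores of the same set
# of variables in one year; scipy requires at least three groups.
year_a, year_b, year_c = [1, 2, 5, 4], [1, 3, 4, 4], [1, 1, 4, 3]
print(stats.friedmanchisquare(year_a, year_b, year_c))
```

A negative Mann-Kendall statistic corresponds to the decreasing incompleteness trends reported for variables such as "TNM staging", while a positive statistic corresponds to the increasing trends reported for variables such as "occupation".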
The variable "place of birth" had an average incompleteness of 5.91% in the period, with emphasis on 2000, 2018 and 2020, which presented, respectively, 14.29%, 12.45% and 16.88% of missing data, being classified as fair in those years. The variable "race/skin color" was classified as excellent or good in most of the years studied; however, in 2006 and 2007, it was classified as poor, with 31.67% and 23.78% incompleteness, respectively. "Marital status" was a variable with an excellent or good score in more than 90% of the years studied, the exceptions being 2012 and 2013, classified as fair, with incompleteness of 11.08% and 11.48%, respectively.
The variable "education" obtained an excellent score from 2000 to 2004, with an average of 2.74% of missing data; however, from 2005 to 2020, most years were classified as poor, with emphasis on 2010, when almost 50% of the observations were missing. Similarly, the variable "occupation" presented an average of 2.20% incompleteness from 2000 to 2004 and, from 2005 onward, was classified as poor in most of the following years, reaching 22.13% of missing data in 2018. Both the variables "alcoholism" and "smoking" showed high rates of incompleteness, being classified as very poor or poor in most years of the 2000-2020 historical series studied. Table 1 presents details of the year-by-year completeness classifications.
The variable "disease status at the end of first treatment in hospital" was classified as very poor from 2000 to 2009, with an average of 72.11% of missing observations, but from 2010 onwards it presented better classifications, being poor or fair, with an average of 28.61% incompleteness. The score of the variable "main reason for not carrying out antineoplastic treatment in hospital" varied from excellent to very poor in the period studied, with highlights for 2003, which presented incompleteness of just 0.44%, and for 2006, which reached almost 72% of missing data. "Referral origin" was a variable classified as poor or fair in most of the years studied, obtaining lower incompleteness rates at the end of the historical series; in 2020 it presented 7.15% of missing data. The variables "primary tumor laterality" and "examinations relevant to tumor therapy diagnosis and planning" presented an excellent score at the beginning of the study period, being classified as poor and even very poor in the following years.
The variables "previous diagnosis and treatment" and "screening date" presented an excellent classification in almost the entire period, the score changing to good in 2006 and 2007 for the first variable and in 2012 and 2013 for the second. The variable "date of start of treatment" was classified as excellent, except from 2009 to 2012 and in 2018, when its score was good or fair.
The other variables in the database presented excellent scores in all years studied, with emphasis on the variables "type of case", "date of first consultation", "primary tumor location", "detailed primary tumor location", "primary tumor histological type", "Brazilian National Registry of Health Establishments", "Hospital Unit Federative Unit" and "Hospital Unit municipality", which were 100% complete. Table 2 presents in detail and chronologically the incompleteness of the clinical variables in the historical series studied.
Regarding the comparison of the scores of the HBCR epidemiological variables in the state of Espírito Santo, the Friedman test showed that there was no significant difference (p value = 0.324) in score classification; therefore, classification was similar between 2000 and 2020.
In Table 3, the Mann-Kendall test shows significant trends towards a decrease in incompleteness for the variables "family history of cancer", "alcoholism", "smoking", "source of referral", "TNM staging", "clinical tumor staging by group (TNM)" and "disease status at the end of first treatment in hospital". The variables "place of birth", "first care clinic", "clinic at the start of treatment", "origin", "primary tumor laterality" and "occupation" showed an increasing trend in the incompleteness rate. The variables that presented 100% completeness in all years studied were not included in the Mann-Kendall test and, therefore, do not appear in Table 3. Figure 2 shows the graphs of the historical series from 2000 to 2020 with the percentage of incompleteness of the variables that showed significant trends according to the Mann-Kendall test for the period studied. The time series of incomplete data are represented by solid lines, while dashed lines represent the temporal trend.

DISCUSSION
The results showed that, with regard to the cases of malignant prostatic neoplasm in the state of Espírito Santo recovered and analyzed in the HBCR, the majority of epidemiological variables were classified as having excellent and/or good completeness, highlighting the variables "sex", "age", "origin", "date of first consultation", "date of diagnosis", "previous diagnosis and treatment", "most important basis for tumor diagnosis", "primary tumor location", "detailed primary tumor location", "primary tumor histological type" and "first treatment received in hospital". However, other variables were classified in some years as fair or poor, such as "place of birth", "race/skin color", "education", "occupation", "marital status", "date of start of treatment" and "examinations relevant to tumor therapy diagnosis and planning". Furthermore, there was weakness in the information on important clinical-epidemiological variables, with incompleteness above 50%, such as "TNM staging", "clinical tumor staging by group (TNM)" and "family history", in addition to "alcoholism" and "smoking". Supporting our results, a study carried out with data from HBCR in the state of Mato Grosso showed that the variables "education", "TNM staging", "family history of cancer", "alcoholism" and "smoking" exhibited incompleteness above 50% (24).
The variables "sex" and "age" presented completeness classified as excellent in the analyzed database, as found in other studies of HBCR in other Brazilian states (13,15,16,24). It is believed that the low interpretative subjectivity required to record this information explains this good result.
The variable "place of birth" obtained 5.91% incompleteness, leaving it with a good score, but it showed a tendency towards increasing incompleteness in the period analyzed. In other studies conducted with HBCR in the state of Espírito Santo, this variable presented an average incompleteness of 10.33% (15) and 3.51% (16).
"Race/skin color" is an important variable in the study of prostate cancer, as ethnicity is a risk factor for the development of this cancer; African and Asian ancestries, for example, are reported to present higher incidence rates and shorter survival times for this neoplasm (5,25). In other words, this variable transcends a mere biological distinction. In fact, it encompasses a complexity that represents a set of economic and cultural connotations, which denote inequalities in access to medical care, especially in the context of cancer diagnosis and treatment. Our findings support other research carried out in Brazil (15,24,26,27). It is important to highlight that the lack of completeness in the collection of this variable, combined with possible erroneous records, makes it difficult to obtain a clear understanding of the real need for health promotion and disease prevention programs in vulnerable communities (27). Additionally, the variable that considers race/ethnicity gains relevance by expanding debates on health inequities and individual, social and political-programmatic vulnerability (15,28,29).
"Education" was classified as poor in more than 50% of the study period, with an average incompleteness of 31%, a result similar to that found in other studies (12,15,24,27,30). The HBCR of Hospital Santa Rita de Cássia, the only CACON in the state of Espírito Santo, presented 9.12% of missing data for this variable, which implies that the other HBCR in Espírito Santo have greater incompleteness for it (16). This variable has a great impact on patients' prognosis, and its low completeness is of clinical and epidemiological relevance (15).
The variable "occupation" presented an excellent classification at the beginning of the historical series; however, from 2005 to 2020, there was an increase in the percentage of missing data, shifting it to a fair classification, with an average incompleteness of 14.57% in the period. A study carried out with HBCR in 21 Brazilian states identified 46% of missing observations for the occupation variable (31). Other studies find similar percentages (12,15,16,24,27).
The variables "TNM staging", "clinical tumor staging by group (TNM)" and "pathological TNM staging" presented a poor or very poor completeness score in almost all years. These results corroborate other studies using data from HBCR across Brazil (12,14,15,16,32). On the other hand, a study using a database from a public hospital in São Paulo showed the variable "TNM staging" with excellent levels of completeness (27). Staging variables are extremely important, as they provide information on the extent of the disease. This information helps in defining the therapeutic plan for people with cancer, which facilitates the standardization of procedures and the exchange of experiences between institutions that offer cancer treatment (11,15,24,33). The variables "alcoholism" and "smoking" were classified as poor or very poor in almost the entire study period. This is a poor result, given the carcinogenic potential of alcohol and tobacco (34). Furthermore, the variable "family history of cancer" was also classified as very poor in all years of the period, representing almost 80% of average incompleteness. This probably occurred because these are optional variables in the tumor form, and their completion varies substantially between hospital institutions. Such incompleteness is a worrying factor, as family history is a risk factor for prostate cancer (2,5,6,35,36,37).
Study limitations
The present study has some limitations, such as the exclusive use of data obtained from all the HBCR of a single Brazilian state. Consequently, caution must be taken when interpreting the findings in relation to their external validity and generalization to other Brazilian states and regions. Although HBCR provide valuable information about the quality of the services offered, they do not comprehensively represent the underlying regional or national cancer epidemiology.

Contributions to nursing, health or public policy
To the best of our knowledge, this is the first study of a recent historical series that reports the completeness of HBCR epidemiological variables on cases of malignant prostatic neoplasm across the Espírito Santo (ES) Oncological Care Network between 2000 and 2020, bringing valuable information for Epidemiological Surveillance and, specifically, for Cancer Surveillance in the Espírito Santo territory. It should be noted that, in 80% of countries, there is a growing trend in premature mortality from cancer, which is impacting the achievement of target 3.4 of the Sustainable Development Goals, which refers to the reduction by at least one third of premature mortality due to chronic non-communicable diseases by 2030 (38). Thus, the importance of implementing, maintaining, updating and making available HBCR data is evident for a better understanding of the cancer overview, for its monitoring and control.

CONCLUSIONS
Summing up, we verified that most of the reviewed HBCR epidemiological variables in the state of Espírito Santo, Brazil, were classified with excellent completeness, although important variables, such as "TNM staging" and "clinical tumor staging by group (TNM)", had high incompleteness rates for all years between 2000 and 2020. There is a pressing need for consistent and high-quality HBCR data to allow better monitoring of the epidemiological variables in the tumor registry. HBCR data can greatly contribute to the structuring, formulation and planning of public policies aimed at improving early diagnosis, treatment and the quality of life of the population.

Figure 1 - Historical series of the number of prostate cancer cases diagnosed from 2000 to 2020 registered in Hospital-Based Cancer Registries of the state of Espírito Santo (N=13,519)
Figure 2 - Trend of incompleteness of sociodemographic and clinical variables with a significant trend according to the Mann-Kendall test regarding prostate cancer cases in the Hospital-Based Cancer Registries of the Oncological Care Network of the state of Espírito Santo from 2000 to 2020 (N=13,519)
Table 2 - Percentage of incompleteness and classification of completeness of the Hospital-Based Cancer Registries clinical variables referring to prostate cancer cases in the Oncological Care Network of the state of Espírito Santo from 2000 to 2020 (N=13,519)
Table 3 - Analysis of the trend of incompleteness of the Hospital-Based Cancer Registries epidemiological and clinical variables regarding prostate cancer cases in the Oncological Care Network of the state of Espírito Santo from 2000 to 2020 (N=13,519). *For significance, p value < 0.05.
2024-07-31T15:18:48.447Z
2024-07-29T00:00:00.000
{ "year": 2024, "sha1": "adc26898155b69f27e0b83a8082c06f988308909", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1590/0034-7167-2023-0467", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c619848bb803f7eabece3e1e944b59efb846b7b0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226633567
pes2o/s2orc
v3-fos-license
"We Will Re-distribute Land When You Vote Us into Power...": Electoral Politics and Peasant Struggles Amidst an Unresolved Land Question in Southern Malawi
The people of the tea growing districts of Thyolo and Mulanje have been at constant loggerheads with estate owners for decades now. The bone of contention is that the tea estates own huge tracts of land and utilize less than 50% of it for plantation agriculture, while the majority remain land-poor. The objective of the research was to understand the role of political parties in supporting or resisting land reform initiatives in Thyolo and Mulanje, Malawi. Political parties in Malawi since 1994 have promised to redistribute land to peasants, but little action has been taken on the same. Using a qualitative research approach, data was collected from peasants, local leaders, political parties and other secondary sources. Four main political party manifestos were studied: the Democratic Progressive Party (DPP), the Malawi Congress Party (MCP), the United Transformation Movement (UTM) and the United Democratic Front (UDF). The study finds that political parties in Malawi use the land struggle to woo votes from the electorate: they promise land redistribution, and after they get into power the issue is forgotten. Party manifestos from 1994 to 2019 have either included the issue in their blueprint or omitted it entirely. The study finds that during the 2019 tripartite elections, no single party outlined what procedure it would use to free up land in the estates. This paper suggests that a successful reform in the area starts with political will within the ruling party, with governing institutions having a carefully mapped plan for how they would go about land reform, rather than mere propaganda on the matter. DOI: 10.7176/DCS/10-5-08 Publication date: May 31 2020

Setting the Scene
On 30 April 2019, opposition party president Lazarus Chakwera of the Malawi Congress Party (MCP) promised the people of Thyolo and Mulanje that once he and his party were voted into power, he would solve the outstanding problem of land shortage affecting thousands of people in the Thyolo and Mulanje districts. The opposition party leader was speaking during whistle-stop tours in Thyolo and Mulanje, convincing the masses in the area to vote for his party, the MCP, in the May 2019 Presidential and Parliamentary Elections. Ironically, another opposition party president in the same race, Saulos Klaus Chilima, had promised the same people a week earlier that his administration would stimulate land reforms in the Southern Region, where many Malawians were turning into tenants on their own land. The UTM president narrated that he was aware that ordinary locals had been in conflict with huge tea estate owners over the issue of land. According to Chilima, land was a very important aspect of the UTM's agenda to transform the country's agriculture. These sentiments have been a song sung by many politicians who have gone to campaign in the area since Malawi adopted multiparty democracy in 1994.
After gaining independence in 1964, land reform was not on the agenda. Kamuzu Banda promoted dual agriculture, in which cash crops were reserved for estate farmers. Locals were banned from cultivating cash crops like tea and coffee. From 1994 to 2005, under multiparty democracy, the United Democratic Front (UDF), with Bakili Muluzi as President, ruled Malawi. Bingu wa Mutharika resigned from the UDF in 2005 and formed his own party, which ruled Malawi from 2005 to 2012.
After his sudden death in office, his vice president, Joyce Mtila Banda, and her People's Party ruled Malawi from 2012 to 2014. Peter Mutharika, brother of Bingu wa Mutharika, was elected president in 2014, defeating the incumbent Joyce Mtila Banda and her People's Party (PP). Almost all these governments had in one way or another promised land redistribution to peasants in Thyolo and Mulanje. In 2001, when Bakili Muluzi was ruling, the Malawi Government commissioned a land inquiry, which revealed massive skewness in land ownership. Land for most peasants in Thyolo and Mulanje had been dwindling. In areas where peasants have some land, it is unproductive, as the estates consumed almost all the prime land in the area. In 2003, Malawi, in conjunction with the World Bank, commissioned a Community Based Rural Land Development Programme (CBRLDP), popularly known in the vernacular Chewa as Kuzigulira Malo. The plan for this project was to relocate 15,000 families to other districts like Mangochi, Balaka and Machinga under a willing-buyer, willing-seller philosophy. The World Bank considers this project a success, though a score of studies have disputed the same. Chinsinga (2008) notes that the majority of the relocated families went back to their original districts of Thyolo and Mulanje. Instead of solving land conflicts between the peasants and the estates, the CBRLDP bred new land conflicts between returnees and those who stayed behind. What has been perceived as a successful reform by international organizations like the World Bank has proved to be a failure on the ground where the reform occurred. Since then, little has been done to plan and implement a land reform in Thyolo and Mulanje.
It has been highlighted by scholars that the estate owners in Thyolo and Mulanje got land fraudulently from locals in the 1880s. They then converted the ownership from customary to freehold land tenure. The MCP government from 1964 maintained the colonial dual agriculture system: estate agriculture and subsistence agriculture. Most of the estate owners were white British, descendants of colonizers. The subsistence farmers were restricted from cultivating cash crops. Surplus grain from the small-scale farmers was sold at government markets at lower prices. This was done to accumulate capital to provide the estate owners with soft loans. The philosophy was that estate agriculture would generate more income and develop Malawi. Since multiparty democracy, political parties have gone to town promising peasants in Thyolo and Mulanje that once voted into power they would solve the Malawian land question. It has been about two decades of promises to the landless poor, with no tangible steps toward a workable resolution of the land question. Poor people, who have a close and intimate relationship with land, continue to live in abject poverty. A successful land reform might need stronger political will from a governing political party.

Theoretical Background
Being a liberal multiparty democratic country, political parties in Malawi draft manifestos to outline the policies they would wish to implement once successful in an election. These policies are presented to the people to inform their choice of leaders. Scholars agree that many factors are at play in what makes people vote for a party. There is a huge debate on whether or not manifestos in Africa have an impact on voting patterns.
Some scholars have argued that people in Malawi do not vote on issues and policies, but on regionalist grounds, patronage and charismatic personalities. However, the 2009 elections proved otherwise (Mpesi, 2011). Bingu wa Mutharika was voted into power and won with an overwhelming majority because of his pro-poor policies, like the fertilizer subsidy programme. Patel (2015) agrees that the DPP's food security policies played a major role in catapulting its torchbearer (Arthur Peter Mutharika) into power in 2014. The food security and economic successes of the Bingu wa Mutharika government made the DPP so popular that it won with a landslide. We can therefore argue that policies, especially those to do with the welfare of local people, play a role in deciding whom to vote for. This is actually more evident among the majority of voters. For rural voters, party manifestos might not be as relevant. For the middle class, most of whom are educated, manifestos play a key role. Moreover, manifestos serve as a yardstick by which electorates can hold leaders accountable once they acquire the legal mandate to rule.
Land reform is a contentious issue. In any land reform there are winners and losers. There are also groups of people that reap the benefits of the status quo before any land reform. Chances are that landowners (mostly elites and the politically well connected) have to lose power over land if a reform is to be successful; a difficult route, but manageable. Due to the contentious nature of land reform, there is a need for thorough clarification on how land redistribution might occur. Chinsinga (2019) sheds more light on how stakeholders can approach a land reform. Stakeholders need to understand the clarity of the reform: land reforms can take different forms (restitution, redistribution and tenure reform), and there are often disagreements between and among stakeholders regarding the exact nature land reforms should take in a particular context. The second consideration is the disjuncture between domestic and international political economy perspectives: multinational development partners have pushed for a 'one-size-fits-all' perspective on land reform, which might not work in all contexts. Thirdly, there is a need to understand the complexity of land reform in practice: land reform is complex, and it involves identities, not just mere economics and material wealth. Lastly, challenges with data to guide land reform need to be taken into consideration. Land reforms cannot be successfully implemented in the absence of data that is deemed credible in the eyes of all stakeholders, who often have fundamentally competing interests (Ibid, 2019). These concepts need to be clear enough in any attempt to undertake a reform, especially in southern Africa. Political will from the ruling government is significant for a successful land reform.

Does Malawi Need a Land Reform?
Scholars agree that landholdings in southern Malawi have been dwindling. This has in part been caused by an increase in population, and also by highly skewed land ownership and usage. As Demographic and Health Survey (DHS) data indicate, many Malawians are young. With the slow expansion of the manufacturing and service industries, it has not been easy for them to attain non-agricultural jobs. For this reason, many rural Malawians continue to have a special
For this reason, many rural Malawians continue to have a special 'dependency on and attachment to land…'. It is for this reason that policies concerning land in Malawi should transcend mere lip service meant to convince land-poor voters, and be thorough enough to address the route and procedure by which a successful land reform might be carried out. The poor need land more than the rich.

Malawian Political Parties, 2019 Manifestos and Land Reform Policies
Land in southern Malawi is one of the most highly politicized topics. When it comes to political parties (both ruling and in opposition), it is all rhetoric in order to gain legitimacy to rule. Once voted into power, the promises of restitution are no longer fulfilled. This has happened from 1994 to date. Political cadres vying for political office, from ward councillors to presidents, have been using the same tactics over and over again to get votes from the unsuspecting poor in the area. What many studies have not done is scrutinize the philosophy of Malawi's main political party manifestos towards the land question in southern Malawi. The researcher interviewed political party leaders and read party manifestos for the 2019 tripartite elections to better understand party views on land conflicts between peasants and estates in Thyolo and Mulanje. The researcher also analyzed how feasible the aspirations of the said parties are in solving the long-standing land question in Thyolo and Mulanje.

The United Democratic Front (UDF)
The United Democratic Front was the first political party to rule Malawi after the MCP one-party rule. The UDF, under its president and founder Bakili Muluzi, ruled Malawi from 1994 to 2004. The president of the UDF (at the time this thesis was written) was Atupele Muluzi, son of the founder Bakili Muluzi. The 2019 UDF manifesto was silent on what it would do in case it was ushered into power after the May 21 elections in 2019.

United Transformation Movement (UTM)
Despite the fact that the UTM manifesto is silent on land reforms in Thyolo and Mulanje, its presidential aspirant for the 2019 general elections held many rallies in Thyolo and Mulanje promising land reforms. Dr Saulos Chilima, the UTM presidential candidate, had this to say at a campaign rally in Bvumbwe, as quoted in one local online newspaper: "Front-runner in next month's tripartite elections, the UTM, says its administration will stimulate land reforms in the Southern region where many Malawians are turning into tenants on their own land. UTM President Dr Saulos Chilima made the pledge yesterday during a rally at Bvumbwe in Thyolo district where ordinary locals have been at constant loggerheads with huge tea estate owners over the issue of land. According to Chilima, land is a very important aspect in the UTM's agenda to transform the country's agriculture. 'People of Bvumbwe and many other parts of the southern region are very hardworking farmers. However, for some time now their potential has considerably declined basically because there is no land for them to work on. When we get your mandate on May 21 we will effect reforms to free up land for local Malawians,' Chilima said to loud cheers from the crowd… (Maravi Post, 2019)." Based on the above remarks, I argue that it is really commendable and bold to talk about land reform in Thyolo and Mulanje. However, the researcher highlights some key issues with the way the quest for land reform was being pursued by the opposition UTM:
i. The UTM does not explain how it will free up land from the estate owners for the land hungry in southern Malawi. This is a very critical issue in solving this land question. There are the peasants and the estate owners; the latter are mostly the winners of the status quo and have used all mechanisms possible to further alienate peasants from the land. Since this detail is unavailable in the UTM manifesto, it is quite difficult to deduce whether the freed-up land is the 'idle' land on the estates, or the land covered by tea and macadamia nuts in Thyolo and Mulanje.

ii. The UTM claims it puts land forward as an important aspect of its agenda, but it is conspicuously missing from its blueprint and governing plan, leaving the people of Mulanje and Thyolo with no paperwork with which to follow up on progress if the UTM is given the mandate to rule. Governments of ruling parties have come and gone, with little progress to mention on land reform in Thyolo and Mulanje, from the MCP government that emphasized dual agriculture (estate and smallholder) to the current DPP government. The people of Thyolo and Mulanje are now aware that the state will not assist, and have resorted to encroachment. One peasant notes: "Party politics in this country only needs us to vote for them. They promise us to restitute land to the people. When in power, none of it is implemented. That is the main reason why me and my fellow villagers decided a decade ago to go ourselves and start cultivating the idle land. If we wait for any politician or party, we will die of hunger. We are tired of empty promises of restitution." We can see that the people are aware that party politics is just there to hoodwink peasants, as if the parties are fighting together with them for the land. In a real sense, the peasants are on their own. Some will argue that the UTM might choose to put the issue in its manifesto but not implement it. This line of argument follows that Malawi is a poor country, and we have had different parties ruling it with very brilliant manifestos; after being ushered into power, the promises are never fulfilled. The poor get poorer, while the select few connected to the politicians get richer and richer. While this sounds a fair assessment, the people of Thyolo and Mulanje deserve leaders who have done some homework on how land can be freed from the estates. What direction can it take? Willing seller, willing buyer, like the CBRLDP? Would it be in the form of restitution? The whole tea estates, or only unused land on the estates? How will the UTM make sure that all the landless would have access to the land if the reform is effected? How will the UTM government deal with the external forces that shape the contested land in Thyolo and Mulanje? How much land do the estates own in this area? How much is under cultivation? All these would need to be thought through. It was a campaign period and talk is cheap, but all these need thorough analysis.

iii. The UTM needs to dig deep into the land issue in Thyolo and Mulanje and engage with experts to be at par with, or close to, solving the issue. For instance, the local social movement PLO made a great move in sensitizing people around the area to the fact that politicians are there just to win votes, and that on their own (as people) they can retake the land. Though not through the PLO, individual farmers have managed to encroach and cultivate on estate land for close to a decade now (for some farmers it is now two decades).
Empirical evidence of this is available.

iv. Most huge estates in Thyolo and Mulanje are owned by white Malawians; the land was obtained fraudulently and the tenure deeds are all freehold. As discussed in chapter 2, this was a colonial land grab. What steps will the UTM government follow to solve this long-standing land question?

v. Huge tracts of land owned by the estates in Thyolo and Mulanje have been idle for more than four decades. The estates keep this land not for use for tea but mainly as collateral: the estates are able to obtain loans from banks by putting up the land as collateral, hence they cannot just give it back to the landless poor without huge resistance. The UTM also has to know that the estate owners are wealthy and connected, at both local and international levels.

vi. There is Zimbabwephobia among many of the Malawian elite who could help the poor to reshape the power in land ownership, especially in Thyolo and Mulanje. In most circumstances where the land question in Thyolo and Mulanje is talked about, there are always people preaching how Zimbabwe struggled (or was struggling) economically when it effected its ambitious and successful land reform in the 1980s. I would argue, though, that the conditions in which Zimbabwe was then and Malawi is now are really different. In southern Malawi, huge areas of land on the estates remain unused, and it is this land that the peasants are demanding; freeing it would not directly affect the tea economy (tea production, value chains, contribution to GDP), among other things.

vii. There is too much donor meddling in land issues, and incomplete decolonization of Malawi from its former British colonizers.

In light of this, the problem is not just the populism employed by the UTM on the land issue in Thyolo and Mulanje, but also the lack of its inclusion in the manifesto and of a step-by-step procedure for how land will be freed from the estates for the land-poor in Thyolo and Mulanje.

The Democratic Progressive Party (DPP)
The Democratic Progressive Party has been the ruling party since May 2014. It initially started ruling in 2005, after its founder Professor Bingu wa Mutharika resigned from the UDF and formed his own party, having won on a UDF ticket in 2004.

The DPP Manifesto on the Malawian Land Question
The manifesto is quoted as saying '…for the poor who have no land or ability to farm government will give them social cash transfer. This will give the recipients choice to spend the cash on food or any other activity that will give them livelihood' (page 28). Members of the DPP, including the president himself, Peter Mutharika, had conducted numerous political campaigns promising peasants that they would redistribute land after being re-elected into power. During the 2019 tripartite elections, the DPP was the ruling party, with all the resources to bring all stakeholders to one place and rethink the course and direction of the land question in Thyolo and Mulanje.

i. One would ask why a ruling party would choose to give the people in Thyolo and Mulanje 'money' and not 'land'. Does this imply that it has failed to solve the land question?

ii. The DPP does not stipulate how much money will be given to the peasants. Is the money enough to purchase land anywhere away from the plantations?

iii. Giving people money each and every year, I argue, might not be sustainable. We all know that Malawi has a youthful population. This means that many would prefer jobs, whether on farms or in industry, for their livelihoods. Government can provide an environment where its people are able to prosper economically. The rich do not necessarily need land.
The poor need land. If given land (which is lying idle anyway), we argue, they would be able to cultivate it, earn their own money and support all the necessary livelihood activities.

iv. Poverty levels in Malawi are high, with almost 52% of its 17 million people in poverty (IHS, 2016). The Malawi government claims it will give the poor money. The causes of poverty are numerous; so are the solutions. However, empowering the poor should actually start with giving them the means of production: teaching the poor to fish, rather than giving them fish, might be the way to go. The DPP might need to go back to the drawing board and learn from major performers in poverty reduction to see how they managed. It need not copy all their methods, but could learn the useful lessons and modify them to fit the Malawian context.

v. Giving the able-bodied landless poor some money would be adequate if complemented by giving the poor land (land reform/restitution) and a starter pack enabling the farmers to cultivate, buy inputs and livestock, and meet all the requirements of farming. This would give true meaning and a pathway to teaching the villagers a sense of independence.

vi. I argue for letting the landless poor sweat for their own rewards, so that they will be able to sustain their livelihood standards independently, whether this government continues to rule or not.

vii. The land, if restitution is done, can become the wealth of the beneficiary families. It can be transferred from one generation to the other and create room for breaking the poverty cycle among the rural dwellers in Thyolo and Mulanje.

viii. I argue that the mere mention of handouts by the DPP leaves out a crucial group: the new land occupants on the estate lands. This research has found that even the people who have been able to encroach on land from the estates face numerous problems, chief among them being security of tenure. There are uncertainties involved in encroaching on the estate lands. For instance, the encroachers are unable to invest in irrigation equipment on the estate land, which means that they are unable to grow more than one crop in a year. They rely only on rain-fed agriculture, which is prone to weather and climatic hazards. With their security of tenure threatened, they cannot utilize the land to its full potential. I argue, therefore, that merely mentioning a social cash transfer in the ruling DPP manifesto is a threat not only to the landless poor, but also to those who have been cultivating the estate land for over a decade now.

As seen from the explanation above, the ruling DPP was not ready to solve the land question in Thyolo and Mulanje. Neither its written manifesto nor its campaign messages outlined what kind of reform would take place, or how it could occur, in Thyolo and Mulanje. This implies that the status quo would continue to prevail, benefiting the estate owners, even if the DPP got a fresh mandate to rule from 2019 to 2024. I argue that social cash transfers might not assist in any way in dealing with land poverty in Thyolo and Mulanje; empowerment of the poor should start with providing them with the means of production (principally land and capital) if the empowerment is to be sustainable across generations and break poverty cycles.

The Malawi Congress Party (MCP)
The MCP tackles the matter at length. Apart from promising the people of Thyolo and Mulanje land redistribution, the MCP also tackles the issue of foreigners buying huge patches of land in cities and towns.
On page 28, the MCP official blueprint says it will only '…review the land act to ensure that access to land benefits Malawians and not foreigners who are acquiring land fraudulently…'. The MCP, though close to tackling Malawi's land question, leaves behind the people of Thyolo and Mulanje in three ways:

i. Many estate owners are Malawians with a British background. This research has found that there is 'Zimbabwephobia' among many elites, and the British still control land ownership decisions in Thyolo and Mulanje. What the MCP is saying might as well be the same song as that of the other parties, sung just to get votes from unsuspecting voters.

ii. It is a colonial land question. The MCP is geared to use the statutes to deal with people 'currently obtaining land fraudulently'. This does not, however, explain further how the colonial land question in Thyolo and Mulanje will be solved. I argue that the whole MCP statement on land governance, once the party is ushered into power, leaves out this contentious issue in Thyolo and Mulanje, as this land was obtained 'fraudulently' in the 1890s.

iii. The land owned by the estates in Thyolo and Mulanje is under freehold. This means that the estate owners are free to sell it to whomsoever they want, at will. The MCP needs not only to include Thyolo/Mulanje in its agenda, but also to explain how it plans to free up land, whether 'used' or 'unused', from the estates for the peasants in southern Malawi.

Conclusion
This paper argues that almost all the political parties that participated in the 2019 Malawi tripartite elections had an idea that there is need for a reform, especially in the tea-growing areas of Thyolo and Mulanje. While the DPP chose to give people with no land money for their livelihood, the UDF was silent on the issue. The MCP and the UTM claim they will deal with the problem by freeing up land in Thyolo and Mulanje, without explaining the nitty-gritty. Based on the manifestos of the four major political parties, one would argue that the land-poor in Thyolo and Mulanje will need to wait longer, as none of the major parties presented a distinctive solution to the land woes in the area. In simple terms, no political party explained the type of land reform that could be carried out in Thyolo and Mulanje; none was clear on whether restitution, redistribution or tenure reform might work in the area. In addition, there was little or no reference to past attempts at reform, like the CBRLDP, which scholars argue was a flop while international organizations like the World Bank flag it as a success story. The lack of detail on how land will be freed for the locals casts more doubt on the prospects of solving the land question in Malawi. There was no explanation of how different stakeholders perceive the issue of land reform in the area.
L-carnosine enhanced reproductive potential of the Saccharomyces cerevisiae yeast growing on medium containing glucose as a source of carbon

Carnosine is an endogenous dipeptide composed of β-alanine and L-histidine, which occurs in vertebrates, including humans. It has a number of favourable properties, including buffering, chelating, antioxidant, anti-glycation and anti-aging activities. In our study we used the Saccharomyces cerevisiae yeast as a model organism to examine the impact of L-carnosine on the cell lifespan. We demonstrated that L-carnosine slowed down the growth and decreased the metabolic activity of cells, as well as prolonging their generation time. On the other hand, it allowed for enhancement of the yeast reproductive potential and extended its reproductive lifespan. These changes may be a result of the reduced mitochondrial membrane potential and decreased ATP content in the yeast cells. However, due to reduction of the post-reproductive lifespan, L-carnosine did not have an influence on the total lifespan of yeast. In conclusion, L-carnosine does not extend the total lifespan of S. cerevisiae but rather increases the yeast's reproductive capacity by increasing the number of daughter cells produced.

Introduction
Carnosine (β-alanyl-L-histidine) is a water-soluble dipeptide which occurs naturally in the millimolar range in mammals, including humans. The highest concentrations of carnosine are observed in the skeletal muscle tissue, central nervous system and cardiac muscle, with lower concentrations found in the stomach, liver and kidney. It can also be found in the muscles of fish, amphibians, reptiles and birds, but never in plants, fungi or other eukaryotes (Boldyrev et al. 2013). Carnosine has three ionisable groups: the amino group of β-alanine as well as the carboxylic group and the nitrogens of the imidazole ring of L-histidine. This chemical structure of carnosine determines its properties. The nitrogen atoms of the carnosine imidazole ring (pKa = 6.72) account for the buffering activity of the dipeptide, which is particularly important in the skeletal and cardiac muscle. Carnosine also displays metal ion chelating activity: it can form complexes with Cu²⁺, Co²⁺, Ni²⁺, Cd²⁺ and Zn²⁺, which have a wide range of biological relevance (Hill and Blikslager 2012; Mizuno et al. 2015; Mozdzan et al. 2005). Furthermore, carnosine is well known for its antioxidant properties. It has both the ability to directly scavenge reactive oxygen species such as peroxynitrite and hypochlorite, and to increase the content and/or regeneration of enzymatic and non-enzymatic antioxidants (Fontana et al. 2002; Hipkiss et al. 1998b; Kim et al. 2011; Klebanov et al. 1997). It has also been confirmed that carnosine participates in preventing the formation of advanced lipoxidation end-products (ALEs) and advanced glycation end-products (AGEs). Carnosine is not only able to prevent protein carbonylation but can also react directly with protein carbonyl groups, producing protein-carbonyl-carnosine adducts that prevent cross-linking to other unmodified proteins (Aldini et al. 2005; Brownson and Hipkiss 2000; Hipkiss et al. 1998a; Xie et al. 2013). Under physiological conditions, endogenous carnosine plays a crucial role in the skeletal and cardiac muscle as well as in the neuronal tissue.
In turn, exogenous carnosine is considered a potential therapeutic agent for many diseases such as diabetes, ischemia/reperfusion damage, ocular diseases and neurological disorders (Alzheimer's and Parkinson's diseases, schizophrenia and autistic spectrum disorders) (Aldini et al. 2005; Bellia et al. 2011; Boldyrev et al. 2013; Hipkiss 2007). Carnosine also reveals anti-cancer and anti-aging activities. It selectively inhibits the growth of transformed cell lines and tumour cells by suppressing cellular ATP generation (Iovine et al. 2012; Renner et al. 2010; Shen et al. 2014). In contrast, McFarland and Holliday (1994) demonstrated that carnosine could increase the lifespan as well as the chronological age of cultured human diploid fibroblasts. In addition, it was found to rejuvenate already senescent cells, giving them a more juvenile appearance (Holliday and McFarland 2000). A significant increase in lifespan was also reported in peripheral blood-derived human CD4⁺ T cell clones after long-term culture with carnosine (Hyland et al. 2000). The anti-aging effect of carnosine has been described both for cell lines and for animal models. The study by Boldyrev's group reported that carnosine supplemented to a standard diet attenuated the development of senile features and increased the lifespan in senescence-accelerated mice (Boldyrev et al. 1999; Gallant et al. 2000). It proved to significantly increase the number of spermatogonia and Sertoli cells in mice prone to accelerated aging (Gopko et al. 2005) and, furthermore, extended the lifespan of male Drosophila melanogaster flies (Yuneva et al. 2002) and Brachionus manjavacas rotifers (Snell et al. 2012). Studies of the influence of various chemical factors at the cellular or organismal level have been conducted on a wide range of model organisms. Among these, the yeast Saccharomyces cerevisiae is a commonly used organism to study the influence of such factors on the growth, lifespan and aging process (Krzepilko et al. 2004; Lam et al. 2010; Wu et al. 2014). This yeast has also been used to study the effect of carnosine. Cartwright et al. (2012) showed that carnosine exhibited either inhibitory or stimulatory effects on yeast cells, depending on the carbon source in the growth medium. The aim of this study was to investigate the effect of L-carnosine on the rate of growth, reproductive potential, lifespan and metabolic activity of S. cerevisiae cells cultivated in medium supplemented with glucose as a source of carbon. We also tested the cellular ATP content and mitochondrial membrane potential as affected by the presence of the studied dipeptide.

Yeast strain, media and growth conditions
In the study, the wild-type strain BY4742 MATα his3Δ1 leu2Δ0 lys2Δ0 ura3Δ0 (EUROSCARF) was used. The yeast was grown in the standard liquid YPD medium (1% Yeast Extract, 1% Yeast Bacto-Peptone, 2% glucose) on a rotary shaker at 150 rpm, or on the solid YPD medium containing 2% agar, at a temperature of 28°C.

Determination of cell growth
Liquid yeast cultures (5 × 10⁶ cells/ml in a total volume of 200 µl) with or without 20 mM L-carnosine were cultivated in a Heidolph Incubator 1000 at 1200 rpm at 28°C. Their growth was monitored turbidimetrically at 600 nm in an Anthos 2010 type 17,550 microplate reader for 12 h (measured every 1 h), then after 24 and 48 h. The relative growth rate was calculated at the exponential growth phase using an appropriate formula (Hall et al. 2014). All the data represent mean values obtained in four independent experiments.
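As a concrete illustration of this calculation, the short sketch below fits a log-linear model to OD600 readings from the exponential phase. The specific formula of Hall et al. (2014) is not reproduced in the text, so this fit is an assumption of the sketch, and all numbers (times, optical densities) are invented for illustration only.

```python
# Hypothetical illustration: estimating the relative (specific) growth rate
# from OD600 readings taken during the exponential phase.
import numpy as np

hours = np.array([2, 3, 4, 5, 6, 7])                     # sampling times (h)
od600 = np.array([0.08, 0.13, 0.21, 0.34, 0.55, 0.88])   # optical density at 600 nm

# During exponential growth OD(t) ~ OD0 * exp(mu * t), so ln(OD) is linear in t
# and the slope of the fit is the relative growth rate mu [1/h].
mu, ln_od0 = np.polyfit(hours, np.log(od600), 1)

doubling_time = np.log(2) / mu                            # generation time in hours
print(f"relative growth rate mu = {mu:.3f} 1/h, doubling time = {doubling_time:.2f} h")
```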
Determination of cell lifespan
The Saccharomyces cerevisiae cell lifespan was determined as described previously (Minois et al. 2005; Zadrag et al. 2008). Overnight yeast cultures were dropped onto YPD plates with the solid medium containing Phloxine B at a concentration of 10 µl/ml. During the manipulation, the plates were kept at 28°C for 16 h and at 4°C overnight. The reproductive potential (the number of buds produced), reproductive lifespan (the time during which a yeast cell is able to reproduce), post-reproductive lifespan (the duration of yeast cell life after the cessation of reproduction) and total lifespan (the sum of the reproductive and post-reproductive lifespans) were analysed for forty single cells in each experiment. The data represent mean values from two separate experiments.

Incubation and growth conditions
Yeast cells from the exponential-phase culture were centrifuged, washed with sterile water and suspended either to a final density of 10⁸ cells/ml in 100 mM phosphate buffer, pH 7.0, containing 0.1% glucose and 1 mM EDTA, or to a final density of 5 × 10⁶ cells/ml in the YPD medium; in both cases with or without the addition of 20 mM L-carnosine. After 1, 3 and 6 h of incubation in buffer, or 3, 6, 12, 24, 48 and 72 h of growth in YPD medium, the cells were pelleted by centrifugation, then washed twice with sterile water and used for further analysis.

Assessment of mitochondrial membrane potential
The mitochondrial membrane potential was assessed with both rhodamine 123 and rhodamine B hexyl ester according to the manufacturer's protocol (Molecular Probes). The cells, after growth in the presence or absence of L-carnosine, were suspended either in 50 mM citrate buffer, pH 5.0, containing 2% glucose, or in 10 mM HEPES buffer, pH 7.4, containing 5% glucose, for rhodamine 123 and rhodamine B hexyl ester staining, respectively. Rhodamine 123 was added to a final concentration of 5 µM, and after 15 min of incubation the fluorescence of the cell suspension was measured using the TECAN Infinite 200 microplate reader at λex = 505 nm and λem = 534 nm. The data represent mean values obtained in three independent experiments. The mitochondrial network was visualised by fluorescence microscopy (Olympus BX-51) using 100 nM rhodamine B, a fluorescent dye whose emission is dependent on the mitochondrial membrane potential, at λex = 555 nm and λem = 579 nm. The photographs present a typical result of a duplicate experiment.

Assessment of the cellular ATP content
The level of ATP in the yeast cells was assessed with the BacTiter-Glo™ Microbial Cell Viability Assay according to the manufacturer's protocol (Promega). The cells, after incubation or growth in the presence or absence of L-carnosine, were suspended in a 100 mM phosphate buffer, pH 7.0, containing 0.1% glucose and 1 mM EDTA. A sample of cell suspension with a density of 10⁶ cells/ml was used for the determination. The luminescent signal, proportional to the amount of ATP present, was recorded after 5 min using the TECAN Infinite 200 microplate reader. The cellular ATP content was calculated from the standard curve. The data represent mean values from four independent experiments.
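The conversion from luminescence to ATP content via the standard curve can be sketched as below. The standard concentrations, reader counts and the linear-fit approach are hypothetical placeholders, not the paper's actual calibration data.

```python
# Hypothetical sketch of the ATP quantification step: the BacTiter-Glo signal
# is proportional to ATP, so sample readings are converted via a linear
# standard curve built from known ATP amounts.
import numpy as np

atp_standards_nM = np.array([0.0, 10.0, 50.0, 100.0, 500.0])   # known ATP amounts
luminescence_std = np.array([120, 1900, 9600, 19000, 95000])   # reader counts (invented)

# Fit luminescence = slope * [ATP] + background on the standards.
slope, background = np.polyfit(atp_standards_nM, luminescence_std, 1)

def atp_from_signal(signal):
    """Interpolate an ATP concentration (nM) from a luminescence reading."""
    return (signal - background) / slope

sample_signal = 41000.0
print(f"estimated ATP: {atp_from_signal(sample_signal):.1f} nM per 10^6 cells")
```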
Assessment of the metabolic activity
The metabolic activity of the yeast cells was assessed with FUN-1 according to the manufacturer's protocol (Molecular Probes). The cells, after incubation or growth in the presence or absence of L-carnosine, were suspended in a 10 mM HEPES buffer, pH 7.2, containing 2% glucose. The metabolic activity of cells was estimated with 0.5 µM FUN-1. The fluorescence of the cell suspension was measured after 15 min using the TECAN Infinite 200 microplate reader at λex = 480 nm and λem = 500-650 nm. The metabolic activity of cells was expressed as a change in the ratio of red (λ = 575 nm) to green (λ = 535 nm) fluorescence. The data represent mean values from four independent experiments.

Statistical analysis
Data are presented as mean values ± standard deviation (SD). The statistical analysis was performed using the SPSS 21.0 software. The statistical significance of the differences between the means of a treated sample and the untreated control was estimated using the t test for independent samples. The differences between samples obtained after various times of incubation/growth with or without the addition of L-carnosine were evaluated using one-way ANOVA and the Dunnett post hoc test. Homogeneity of variance was checked using Levene's test. Values were considered significant if p < 0.05.
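A minimal sketch of this statistical workflow, using SciPy rather than SPSS 21.0 (an assumption about tooling; the tests themselves are the same), is shown below. Dunnett's post hoc test is available as scipy.stats.dunnett only in SciPy 1.11 or later, and all measurement arrays are invented placeholders.

```python
# Sketch of the described statistics: independent-samples t test, Levene's
# test for homogeneity of variance, one-way ANOVA, and Dunnett's post hoc
# comparison of later time points against the initial one.
import numpy as np
from scipy import stats

control   = np.array([20, 21, 19, 22])   # e.g. buds produced, untreated
carnosine = np.array([26, 25, 27, 24])   # e.g. buds produced, 20 mM L-carnosine

t, p = stats.ttest_ind(carnosine, control)
print(f"t test: t = {t:.2f}, p = {p:.4f}")

w, p_lev = stats.levene(carnosine, control)
print(f"Levene: W = {w:.2f}, p = {p_lev:.4f}")

t0  = np.array([1.0, 1.1, 0.9])          # placeholder time-course replicates
t24 = np.array([1.4, 1.5, 1.3])
t48 = np.array([1.6, 1.7, 1.8])
f, p_anova = stats.f_oneway(t0, t24, t48)
res = stats.dunnett(t24, t48, control=t0)   # requires SciPy >= 1.11
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}; Dunnett p-values: {res.pvalue}")
```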
Results

L-carnosine slows down the growth rate of yeast cells
Cartwright et al. (2012) have shown that L-carnosine decreased the growth of yeast cells in a medium containing a fermentable carbon source in a dose-dependent manner. The results in Fig. 1 confirm that the addition of 20 mM L-carnosine to the liquid medium containing 2% glucose slowed down the growth of the BY4742 strain yeast cells. This effect was visible both in the exponential phase of growth (Fig. 1a) and after 24 and 48 h of culture (Fig. 1c). We also observed a statistically significant decrease in the relative growth rate (Fig. 1b) and an increase of approx. 9% in the average generation time determined from the growth curve (data not shown) in the case of yeast cells exposed to L-carnosine, compared to the untreated control.

Fig. 1 Effect of L-carnosine on growth of the BY4742 strain in liquid YPD media. Kinetics of growth was monitored turbidimetrically at 600 nm every 1 h for 12 h (a), and after 24 and 48 h (c). The relative growth rate (b) was calculated using the appropriate formula. Data are presented as mean ± SD from four independent experiments. *p < 0.05; ***p < 0.001 as compared to the untreated control.

L-carnosine extends the reproductive potential but not the total lifespan of yeast cells
The reproductive potential and the total lifespan of yeast cells are not synonymous. The reproductive potential is defined as the number of daughter cells produced by a single mother cell during its life, while the total lifespan means the cell life duration including both the reproductive and post-reproductive phases. Therefore, we examined the effect of 20 mM L-carnosine on these two parameters of the BY4742 yeast. L-carnosine significantly enhanced the reproductive potential of yeast cells, increasing the average number of daughters produced from 20 to 26 (Fig. 2a; Table 1). The positive effect of L-carnosine was even more pronounced when the time parameter was taken into account: the average generation time (determined based on the reproductive lifespan) and the average reproductive lifespan were increased by approx. 16% and 49%, respectively (Fig. 2b; Table 1). On the other hand, L-carnosine was found to reduce the average post-reproductive lifespan of yeast cells (during this time the cells were still alive but not able to reproduce) by approx. 46% (Fig. 2c; Table 1). Interestingly, the total lifespan was almost the same in the case of both the control conditions and the cells exposed to L-carnosine (Fig. 2d; Table 1). These results show that L-carnosine is not able to extend the total lifespan of yeast cells; however, it can significantly influence their reproductive potential.

L-carnosine decreases the cellular energy level and the metabolic activity
The yeast S. cerevisiae receives energy from the fermentation process in the presence of fermentable carbon sources such as glucose, or from aerobic respiration in the presence of non-fermentable carbon sources such as glycerol. Therefore, mitochondria are an important cellular energy centre, and their morphology and number may change under different conditions and along with age. The effect of 20 mM L-carnosine on the mitochondrial membrane potential of the BY4742 cells was examined after short- and long-term culture. We observed that the mitochondrial membrane potential after 3, 6 and 12 h of cultivation with L-carnosine remained at a relatively constant level, significantly lower than in the case of the untreated control (Fig. 3a). The level of mitochondrial membrane potential was almost the same regardless of the time of cultivation (samples after 24 and 48 h) (Fig. 3a, b). Moreover, L-carnosine did not change the morphology of mitochondria and only slightly affected the development of the mitochondrial network (Fig. 3b). The yeast cell ATP content is directly related to the level of glucose in the medium and to the mitochondrial activity. 20 mM L-carnosine decreased the level of ATP, but only after a short time of incubation/culture (Fig. 4a, b). In turn, after 24, 48 and 72 h of cultivation an opposite reaction was observed in the presence of L-carnosine: the cellular ATP content was significantly higher in comparison to the untreated control (Fig. 4b). The cellular energy level has an impact on the vitality and viability of cells. Therefore, the effect of 20 mM L-carnosine on the metabolic activity of the BY4742 cells was determined using the FUN-1 stain. L-carnosine decreased the metabolic activity, both after a short time of incubation (3 and 6 h) and after long-term culture (12, 24, 48 and 72 h). In each case studied, these values were significantly lower compared to the untreated control (Fig. 4c, d). These results show that L-carnosine can alter the cellular energy level, thereby reducing the metabolic activity of cells.

Discussion
The search for compounds that improve the quality and prolong the time of human life has been conducted for many years in numerous laboratories around the world. Carnosine appears to be one such compound with potential anti-aging properties. Previous studies show that L-carnosine could increase the lifespan as well as the chronological age of human fibroblasts (Holliday and McFarland 2000; McFarland and Holliday 1994) and extend the lifespan of selected animals such as mice (Boldyrev et al. 1999; Gallant et al. 2000), D. melanogaster flies (Yuneva et al. 2002) and B. manjavacas rotifers (Snell et al. 2012). The S. cerevisiae yeast is commonly used as a model organism to study the influence of various factors on the growth, lifespan and aging process (Krzepilko et al. 2004; Lam et al. 2010; Wu et al. 2014).
For studies concerning carnosine, the usefulness of S. cerevisiae is especially important because, as a fungus, this yeast does not produce L-carnosine or its metabolites, which enables us to examine the effect of extracellular L-carnosine on the cells using various doses and growth conditions. S. cerevisiae was first subjected to research on the properties of carnosine by Cartwright et al. (2012). Their study showed that L-carnosine decreased the yeast growth in media containing fermentable carbon sources in a dose-dependent manner. This effect was more pronounced in the case of glucose than of mannose, galactose or fructose. L-carnosine did not have an inhibitory effect on growth but, on the contrary, provoked a significant increase in the growth rate of yeast in media with non-fermentable carbon sources in the presence of oxygen. The results showed that L-carnosine decreased the viability of cells, but only under reduced oxidative phosphorylation conditions. Our results confirm that L-carnosine slows down the growth of yeast on the YPD medium, decreases the relative growth rate and increases the generation time (Fig. 1a-c). However, the observed growth rate changes are the result of a slowdown of the cell reproduction cycle rather than of cell death.

Fig. 3 Mitochondrial membrane potential of the BY4742 yeast cells after cultivation in YPD media with or without addition of 20 mM L-carnosine. The cells were stained both with rhodamine 123 (a) and rhodamine B (b). Data are presented as mean ± SD from three independent experiments. *p < 0.05; **p < 0.01; ***p < 0.001 as compared to the untreated control. Letters a, b and c on the graph indicate differences between the initial and subsequent times of the experiment at p < 0.05, p < 0.01 and p < 0.001, respectively.

Natural compounds with anti-aging properties are discovered relatively rarely. Literature data indicate that L-carnosine may indeed exhibit such features (Boldyrev et al. 1999; McFarland and Holliday 1994; Snell et al. 2012; Yuneva et al. 2002). As the available references do not seem to fully confirm this in the case of the yeast S. cerevisiae (Cartwright et al. 2012; Fig. 1), we decided to investigate the effect of L-carnosine on individual cells of this species during culture on solid YPD medium. Our study proves that 20 mM L-carnosine significantly enhanced the reproductive potential of cells (the number of buds produced, which in the literature is termed the replicative lifespan) and extended the reproductive lifespan (the time during which a yeast cell is able to reproduce) of the BY4742 strain (Fig. 2a, b; Table 1). In the literature, only a few compounds have been reported as having the ability to increase the reproductive potential of yeast, such as resveratrol (Howitz et al. 2003), ascorbate (Krzepilko et al. 2004), diazaborine (Steffen et al. 2008) and ibuprofen (He et al. 2014). More information can be found regarding the effects of various factors on prolonging the chronological lifespan of yeast (Georgieva et al. 2015; Nakaya et al. 2014; Rockenfeller et al. 2015; Wanke et al. 2008; Wu et al. 2014). However, one should not compare these two types of experiments, since the chronological lifespan and the total lifespan are not the same. Yeast cells do not end their life immediately after the reproduction phase; therefore, an analysis of the post-reproductive phase makes it possible to determine their total lifespan. Our results indicate that 20 mM L-carnosine reduced the average post-reproductive lifespan of yeast (Fig. 2c; Table 1).
It follows that the addition of L-carnosine extends the reproductive lifespan and thereby shortens the post-reproductive lifespan of yeast. The negative correlation between the post-reproductive and reproductive lifespans was presented in our previous studies for a number of yeast strains (Molon et al. 2015; Zadrag-Tecza et al. 2013). Here, the total lifespan was shown to be almost the same, both in the case of the control and of the cells exposed to L-carnosine (Fig. 2d; Table 1). These results prove that L-carnosine has no pro-longevity effect, because it does not extend the total yeast lifespan but rather causes an increase in the reproductive potential. Enhancing the reproductive potential of the yeast treated with L-carnosine may be associated with the regulation of energy metabolism. Previous studies report that L-carnosine inhibits ATP production and thus reduces the proliferative capacity of cancer cells (Iovine et al. 2012; Renner et al. 2010; Shen et al. 2014). Furthermore, Cartwright et al. (2012) demonstrated that L-carnosine caused changes in the metabolic activity of yeast grown on a fermentable carbon source. The addition of L-carnosine significantly affected the heat output profiles of the cultures, measured using on-line flow microcalorimetry, in a dose-dependent manner (Cartwright et al. 2012). Based on our work, it is clearly seen that 20 mM L-carnosine decreases the mitochondrial membrane potential during culture on YPD medium compared to the control, but only in the exponential phase of growth. The differences in the mitochondrial membrane potential are not visible in the stationary phase of growth, when most of the glucose supply has been consumed and the yeast cells have switched to aerobic respiration (Fig. 3a). A mild decrease in the mitochondrial membrane potential is considered beneficial for cells and for the whole organism (Knorre and Severin 2012). This has been confirmed by the results of Barros et al. (2004) with the use of low doses of the protonophore dinitrophenol, which caused an increase in the number of daughters produced (replicative lifespan). The proposed mechanism assumes that lowering the mitochondrial membrane potential may prevent mitochondrial production of reactive oxygen species (ROS), but may also activate the retrograde response, which through transcriptional changes can result in an increase of the lifespan (Miceli et al. 2011). We also observed a significant reduction of the ATP content in the cells after a short incubation time in the presence of 20 mM L-carnosine (Fig. 4a, b). In turn, after 24, 48 and 72 h of culture we observed an opposite reaction and a significantly higher level of cellular ATP compared to the control (Fig. 4b). This effect may be a result of lower cell energy requirements in the presence of L-carnosine, of its protective properties, or of amino acid supplementation deriving from carnosine metabolism.

Fig. 4 Effect of L-carnosine on ATP content and metabolic activity of the BY4742 yeast cells. Cellular ATP content (a, b) determined using the BacTiter-Glo™ Microbial Cell Viability Assay and metabolic activity (c, d) using the FUN-1 stain. a, c: results for a short time of incubation; b, d: results for long-term culture. Data are presented as mean ± SD from four independent experiments. *p < 0.05; **p < 0.01; ***p < 0.001 as compared to the untreated control. Letters a, b and c on the graph indicate differences between the initial and subsequent times of the experiment at p < 0.05, p < 0.01 and p < 0.001, respectively.
Such an increase in the ATP level enables the cells to increase the number of daughters produced (reproductive potential) and, in terms of the time of a single reproduction cycle, to extend the reproductive lifespan (Fig. 2a, b; Table 1). The observed changes in ATP levels during prolonged culture can result from changes in the way energy is produced by the yeast cells. At a high level of glucose, fermentative metabolism dominates; this changes into respiration after the majority of the available glucose has been consumed. The initial decrease in the ATP level may result from the fact that the addition of L-carnosine reduces ATP production from glycolysis, as demonstrated in the case of tumour cells (Renner et al. 2010). In turn, the observed increase in the ATP level occurs after the diauxic shift, when metabolism becomes aerobic. However, the higher level of ATP (which was still very low) made it impossible to maintain the metabolic activity of the cells at a constant level (Fig. 4d). The addition of 20 mM L-carnosine decreased the metabolic activity at both the exponential and stationary growth phases as compared to the control (Fig. 4c, d). These results confirm the earlier observations of Cartwright et al. (2012). In summary, L-carnosine can change the cellular ATP content by decreasing the intensity of glycolysis, which results in a reduction of the mitochondrial membrane potential, a decrease in the metabolic activity of cells and an extended generation time. Reduction of the energy level in the cells leads to the enhanced reproductive potential and extended reproductive lifespan of yeast. As a regulator of cell energy metabolism, L-carnosine causes an efficient increase in the reproductive capacity, as is typically observed in the case of caloric restriction and repression of enhanced gluconeogenesis (Evans et al. 2010; Hachinohe et al. 2013; Lin et al. 2002; Medvedik et al. 2007; Wierman and Smith 2014). On the other hand, we must not forget that, due to the reduction of the post-reproductive lifespan, the effect of L-carnosine on the total lifespan of yeast is not significant. L-carnosine does not extend the lifetime of a single yeast cell, but can rather increase the chances of population survival by increasing the number of offspring.
Artificial Chiral Nanostructure at Oblique Incidence

We propose in this paper the design of an artificial chiral nanostructure obtained by oblique illumination. This structure is based on an anisotropic metamaterial having an optical activity induced by the special geometry of the pattern and of the incident beam. Starting from a non-chiral material, the artificial chirality is obtained thanks to the rectangular apertures which form the periodic perfect-metal nanostructure (one layer) and to the oblique incidence of the light beam. An extraordinary light transmission (93%) through the metal nanostructure is achieved by exciting the cavity modes. The extrinsic chirality obtained can be tuned to the desired value by appropriately adjusting the geometric parameters and the angle of incidence.

Introduction
It is well known that light can propagate through apertures whose dimensions are larger than the wavelength of the incident wave. When the apertures are smaller than the wavelength of the incident wave (r << λ), the transmitted intensity is negligible, but the transmission can be improved if an array of apertures is periodically structured [1]. When the transverse spatial dimensions of the apertures are smaller than the incident wavelength, only the fundamental mode of the waveguides has a significant contribution to the diffracted amplitudes. Thus, if we choose the period smaller than all the incident wavelengths (p < λ), only the fundamental cavity mode can be efficiently excited. The light transmission through an array of periodic subwavelength apertures is strongly linked to the shapes of the apertures: a great influence of circular or rectangular nano-apertures on the transmission properties has been observed [2]. The study conducted by Koerkamp et al. [3] demonstrated that the rectangular geometry allows for a better transmission relative to circular apertures, with a shift of the peak position to the red.
In 2011, Baida et al. [4] presented an original design of anisotropic metamaterial plates exhibiting extraordinary transmission through perfectly conducting metallic screens perforated by a subwavelength double-pattern rectangular aperture array. The emergence of nano-optics in recent decades has attracted scientists in the electromagnetics field, who have tried to generate artificial optical activity (chirality) using the original properties exhibited by metamaterials. Consequently, the main objective in the development of artificial chiral periodic structures is to produce simultaneously a large optical activity and a high transmittance [5]. In 1920, Lindman [6] announced the first artificial chiral isotropic medium by studying randomly oriented collections of metal helices of a size comparable to the wavelength. Since then, researchers have continued to contribute to this area. The tremendous interest given to chiral nanostructures has opened the way to many applications. Today, optical activity is used as a diagnostic tool in spectroscopy to identify the spatial arrangement of atoms; it is even used in space missions as a life-detection signature. With the ability to change the polarization state of diffracted [8-10] and transmitted [11, 12] light, artificial chiral planar structures have great potential for use in the control of polarization. Optically active media are used as polarization rotators and also as circular polarizers [7]. In 2012 we designed an artificial chiral structure based on two identical half-wave plates (λ/2), after rotating one of them by an angle α with regard to the other (at normal incidence) [5]. A tuneable rotation (Ø) corresponding to two times the angle (α) between the two plates was verified, with enhanced transmission (> 80%) at the working wavelength.

Although optical activity is often associated with 3D chiral structures (intrinsic chirality), such nanomaterials can be complex and difficult to fabricate [13]. Other methods are therefore used for the design and production of chiral materials, such as the phenomena caused either by orbital hybridization [14-16] or by near-field, dipole-dipole interactions between chiral molecules and particle plasmons [17, 18]. A hybrid nanoplasmonic material was suggested in which chiroptical behaviour can be induced in the resonances of achiral plasmonic nanostructures, driven by radiative electromagnetic coupling between metallic particle plasmons and a surrounding chiral isotropic medium [19]. Optical rotation can also occur at oblique incidence on flat achiral structures (extrinsic chirality) [20]: planar metamaterials show optical activity in transmission and reflection if 3D extrinsic chirality is associated with the mutual orientation of the incident beam and the metamaterial pattern. Zheludev's team stressed that, under certain conditions, circular birefringence and circular dichroism (a 3D effect) can be obtained with achiral planar metamaterials [21, 22]. It is well known that the negative-index mode in planar metal/insulator/metal structures can only be excited at oblique incidence angles and at a specific polarization.
In 2010, Stanley et al. [23] demonstrated that a two-dimensional array of vertically oriented (MIM) coaxial waveguides, arranged in a dense hexagonal configuration, functions as a single-layer wide-angle negative-index material down to the blue part of the visible spectrum. Since then, few studies have been undertaken on this topic.

In the effect reported in Ref. [19], it is the incident and scattered light that drives a chiral polarization of the surrounding molecular material, which subsequently couples electromagnetically to the plasmonic resonance of the metallic nanostructure. The aim of our study is to obtain an optical rotation with a single-layer nanostructure, with maximum transmission of light in the visible range. Starting from a non-chiral material, the artificial chirality is obtained thanks to the rectangular apertures which form the periodic nanostructure (one layer). The rotation is induced by the particular geometry of the configuration and by the manner in which the structure is illuminated (oblique incidence).

Presentation of the Structure
The proposed structure is presented in Fig. 1. It consists of an array of rectangular subwavelength apertures perforated in a perfectly conducting metal (PEC) film with a thickness h and a period p. Each cell consists of two rectangles having different geometric parameters (ax1 = 0.1p, ay1 = 0.76p, ax2 = 0.71p, ay2 = 0.1p). The structure is supposed to be illuminated by a linearly polarized plane wave at oblique incidence; it is freely suspended in vacuum, and the cavities are filled with air.

Structure Optimization
It is well known that the optical activity of a medium is characterized by the rotation of the polarization plane of linearly polarized waves during their propagation. The optical rotation is obtained after calculating the transmission spectra for two orthogonal incident (p and s) polarization states [5]. The transmitted field components are connected to the incident ones by the transmission Jones matrix T (in the linear basis). The optical activity is then deduced by determining the rotation angle of the transmitted field with regard to the incident one. The rotation is calculated through the transmission Jones matrix Tc expressed in the circular basis, which directly relates the incident and transmitted electric fields in terms of right-handed (RCP) and left-handed (LCP) circularly polarized components; Tc is obtained by Tc = Λ⁻¹ T Λ, where Λ is the basis change matrix between the linear and circular bases. The angle of rotation is then directly deduced from the diagonal elements of Tc [24], Ø being the phase difference between the transmitted RCP and LCP waves.
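The circular-basis conversion and rotation extraction can be sketched numerically as below. The Jones matrix T is an arbitrary placeholder (the paper's computed matrices are not reproduced in the text), and the factor 1/2 in the rotation follows the common azimuth-rotation convention, which may differ from the paper's exact Eq. (3).

```python
# Sketch: convert a linear-basis Jones matrix to the circular (RCP/LCP)
# basis and extract the polarization rotation from the diagonal phases.
import numpy as np

T = np.array([[0.95, 0.10j],
              [-0.10j, 0.90 * np.exp(0.45j)]])   # hypothetical linear-basis Jones matrix

# Basis change from linear (x, y) to circular (RCP, LCP) components.
L = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)
Tc = np.linalg.inv(L) @ T @ L

# Rotation of the polarization plane from the phase difference between the
# RCP -> RCP and LCP -> LCP transmission coefficients.
phase_diff = np.angle(Tc[0, 0]) - np.angle(Tc[1, 1])
rotation = 0.5 * phase_diff                      # conventional 1/2 factor (assumption)
print(f"rotation = {rotation:.4f} rad = {np.degrees(rotation):.2f} deg")
```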
For the optimization of the structure, we used the BMM (bimodal modal method). First, we studied the influence of the thickness h on the rotation. We set the angle of incidence (θ = 20°) and varied the thickness from 0.1p to 0.6p with an increment of 0.1p. The result is shown in Fig. 2. It is clear that the rotation increases slightly with the thickness as a function of the wavelength. This result is consistent with the theory, which stipulates that the phase shift is related to the distance traveled by the wave through the optically active medium. Then we studied the influence of the illumination angle θ on the rotation while fixing the value of the thickness (h = 0.1p). Fig. 3 shows the rotation spectra as a function of the wavelength for each value of θ, varying from 5° to 30° in steps of 5°. We can see that the rotation increases with the incidence angle θ, and the peaks are shifted slightly to higher wavelengths. The various calculations have shown that the optimal values of the thickness and the angle of incidence for a better transmission of light through the structure are h = 0.1p and θ = 20°. For the structure with the same geometrical aperture parameters, we plotted the transmission spectrum and the rotation for the p and s polarizations at normal and oblique incidence (θ = 0° and θ = 20°) (see Fig. 4). Figs. 4a and 4b show the transmission and the rotation in the case of normal incidence, while Figs. 4c and 4d show the same spectra in the case of oblique incidence. For the oblique illumination we found a 75% transmission of the incident wave (greater than the transmission at normal incidence) and a rotation of the plane of polarization of 0.2733 radians (15.66°) at the working wavelength λc = 1.522p, whereas at normal incidence the rotation is of the order of 10⁻⁴ (negligible).

Simulation with the FDTD Method
In the (xy) plane, the calculation window is equal to a square unit cell (px × py). The direction of propagation is along the z axis. Periodic boundary conditions are used along the x- and y-directions in order to create the array behaviour, and a perfectly matched layer boundary condition is used along the z-direction. For the spatial discretization, a step of 2 nm was used in the three directions (δx = δy = δz = 2 nm). The period p is taken equal to 300 nm and the oblique incidence is θ = 20° for a thickness h = 0.1p. The structure is supposed suspended in vacuum. Fig. 5 shows the transmission of the incident wave through the two apertures for two orthogonal incident (p and s) polarization states. One can see that at the wavelength λ = 1.607p, 93% of the incident light is transmitted. The calculation of the rotation at the working wavelength with Eq. (3) gives the following result: Ø ≈ 0.4085 radians (23.4°).

Conclusions
We have presented a design of an extrinsic chiral nanostructure consisting of an array of rectangular apertures engraved in a perfectly conducting metallic film of thickness h = 0.1p. This subwavelength structure is made of a non-chiral material and nevertheless exhibits an artificial chirality induced through the structuring of the material and the nature of the incidence (oblique). The FDTD calculations showed an optical activity of 23.4° under oblique incidence (θ = 20°). An extraordinary transmission (93%) through the metal nanostructure is obtained by exciting the cavity modes. With this work, we feel that we have contributed to the design of artificial chiral nanostructures and have shown the possibility of an extrinsic chirality from a non-chiral metallic structure. This opens the way to designing a new type of chiral structure. Such a study can be extended to the design of artificial chiral structures operating in the terahertz or microwave domains.

Fig. 2 Spectrum representing the rotation as a function of the wavelength for different values of the thickness h and for an angle of incidence θ = 20°.
Fig. 4 Transmission spectrum as a function of the wavelength for the p and s polarizations.
Fig. 5 Transmission spectra as a function of the wavelength for the p and s polarizations.
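For reference, the dimensionless results above translate into physical units for the stated period p = 300 nm as follows (simple arithmetic on the quoted values, no new data):

```python
# Convert the dimensionless FDTD results into physical units for p = 300 nm.
import math

p_nm = 300.0
working_wavelength_nm = 1.607 * p_nm          # lambda = 1.607 p -> ~482 nm
rotation_rad = 0.4085
rotation_deg = math.degrees(rotation_rad)     # -> ~23.4 degrees

print(f"lambda = {working_wavelength_nm:.1f} nm, rotation = {rotation_deg:.1f} deg")
```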
Research on the Relationship between Solid Physics and Quantum Mechanics Based on Computer

Solid-state physics draws on quantum mechanics to study the microstructure and macroscopic properties of crystalline materials, and the combination of the two can promote the further improvement and development of the structure and properties of solid materials. Against the background of computer applications, the development of quantum mechanics is inseparable from the effective support of solid-state physics. It can be seen that the relationship between solid-state physics and quantum mechanics is one of mutual promotion and correlation. Based on this, this paper first studies the key points of quantum mechanics and their relationship with solid-state physics, and then analyses the concrete relationship between quantum mechanics and solid-state physics based on the computer.

Introduction
With the continuous iteration of computer technology, its in-depth application in the field of quantum physics has greatly promoted research progress in particle mechanics, and research results in solid-state physics are also constantly emerging. As the discipline that studies the structure and composition of solids, solid-state physics focuses on the analysis of the laws and phenomena of the motion and interaction of particles in solids [1]. The analysis and research of solid-state physics helps to clarify the properties and uses of solids, thus promoting the development of many basic disciplines, such as metals, semiconductors and materials science. In addition, some modern technologies based on the research results of solid-state physics have also achieved many successes. These modern technologies are shown in Figure 1 below.

Figure 1. Technologies based on the research results of solid-state physics.

While solid-state physics promotes the continuous progress of many disciplines and fields, its research is also subject to many challenges, such as the connection between traditional solid-state physics and modern frontier physics, and the effective connection with the discipline of quantum mechanics [2]. These aspects need further research and development with the help of modern computer technology, so as to promote the continuous innovation and progress of the corresponding disciplines and fields. On the one hand, quantum mechanics has achieved remarkable application and development with the help of modern computer technology; typical applications of quantum mechanics in physics include, but are not limited to, thermoelectric materials, atomic clocks for the precise timing of satellite navigation systems, and quantum communication and encryption, all of which are closely related to people's daily life. On the other hand, the development of quantum mechanics is also inseparable from solid mechanics, which further lays the foundation for its development. As a new application of computer and quantum science, the quantum computer has great advantages compared with the traditional computer: with its powerful parallel processing ability, a quantum computer can handle multiple tasks. The development of quantum computers is inseparable from the effective support of solid-state physics. It can be seen that the relationship between solid-state physics and quantum mechanics is one of mutual promotion and correlation. Therefore, it is of great practical value to study the relationship between solid-state physics and quantum mechanics based on the computer.
Key points of quantum mechanics

At present, the main points of quantum mechanics include the wave function, wave interference, symmetry and homogeneity, and half-integer-spin particles. At the level of the wave function, a quantum-mechanical system is represented by a wave function, from which the possible values of any observable quantity are calculated; probabilities assigned to volumes of space yield the momentum distribution and a "fuzzy" probabilistic picture. This is one of the core contents of quantum mechanics.

Blackbody radiation

In thermal equilibrium, the energy density of cavity radiation follows a distribution curve over the radiation wavelength whose shape and position depend only on the absolute temperature T of the blackbody, not on the shape or material of the blackbody [3]; the relationship between energy density and wavelength is shown in Figure 2. The radiation emitted by a small hole in such a cavity is called blackbody radiation. To account for the observed spectrum, the vibration energy of an oscillator in the cavity cannot vary continuously in proportion to the square of its amplitude; instead it is proportional to the oscillator frequency and can only take discrete values:

ε_n = nhν, n = 1, 2, 3, …  (1)

To explain the experimental law of the interaction between the radiation field and the cavity wall material, it must be assumed that the energy exchanged between the cavity electromagnetic field and the wall material is intermittent, transferred portion by portion as hν, 2hν, 3hν, …; that is, the energy corresponding to every frequency is quantized.

Photoelectric effect

Photoelectrons are emitted only when the frequency of the incident light exceeds a certain value ν₀; if the light frequency is below this value, no electrons are produced no matter how strong the light intensity or how long the irradiation time [4]. This frequency ν₀ is called the critical (threshold) frequency. The energy of the emitted electron is related only to the frequency of the light, not to the light intensity; the intensity determines only the number of electrons. According to the classical electromagnetic theory of light, by contrast, the energy of light depends only on its intensity, not on its frequency. The light-quantum concept resolves this: each photon carries energy

E = hν,  (2)

and conservation of energy in the emission process gives Einstein's photoelectric equation

(1/2)mv² = hν − W₀,  (3)

where W₀ is the work function of the metal. It can be seen from Formula (3) that the energy of the photoelectron is related only to the frequency ν of the light, while the light intensity determines only the number of photons and hence the number of photoelectrons. The photoelectric effect is thereby effectively explained. A photon has not only a definite energy but also a definite momentum; since it travels at the speed of light c, its momentum is

p = E/c = hν/c = h/λ.  (4)

Compton scattering problem

The Compton effect refers to the scattering of X-rays by the electrons of light elements, in which a new, longer wavelength appears in the scattered radiation. Classical electrodynamics cannot explain the emergence of this new wavelength [5]; the process of X-ray scattering by electrons must instead be regarded as a collision between a photon and an electron, after which the effect is easily understood. Applying conservation of energy and momentum to the collision gives

hν + m₀c² = hν′ + mc²,
(hν/c) n = (hν′/c) n′ + mv,

where n and n′ are unit vectors along the incident and scattered photon directions, from which the wavelength shift Δλ = (h/m₀c)(1 − cos θ) follows.
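To make these relations concrete, the short Python sketch below (added here for illustration, not part of the original paper) evaluates the photoelectric equation (3) and the Compton shift; the work function, wavelength and scattering angle used are example assumptions.

```python
# Illustrative evaluation of the photoelectric equation and the Compton shift.
# The work function, wavelength and angle below are example assumptions.
import math

h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron rest mass, kg
eV = 1.602176634e-19    # joules per electronvolt

def photoelectron_energy_eV(wavelength_m, work_function_eV):
    """Kinetic energy of a photoelectron from (1/2)mv^2 = h*nu - W0, in eV."""
    photon_energy_eV = h * c / wavelength_m / eV
    return max(photon_energy_eV - work_function_eV, 0.0)  # zero below threshold

def compton_shift_m(theta_rad):
    """Wavelength shift (h / m_e c)(1 - cos theta), in metres."""
    return h / (m_e * c) * (1.0 - math.cos(theta_rad))

print(photoelectron_energy_eV(200e-9, 4.5))  # 200 nm UV on a 4.5 eV work-function metal
print(compton_shift_m(math.pi / 2))          # about 2.43e-12 m at 90 degrees
```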
Wave function and Schrödinger equation

If a particle moves in a force field that changes with time and position, its momentum and energy are no longer constant, and its state cannot be described by a plane wave; a more complex wave is required. For a free particle of momentum p and energy E, the wave function takes the plane-wave form

Ψ(r, t) = A exp[(i/ħ)(p · r − Et)].

The wave-particle duality observed in the electron diffraction experiment shows this directly: when the intensity of the incident electron beam is small, the particle nature of the electrons is displayed and the diffraction pattern emerges only after a long exposure; when the intensity is large, the diffraction pattern appears quickly (Figure 3). The quantum state of a microscopic particle is fully described by its wave function: once the wave function is determined, the average value of any mechanical quantity, the possible values of its measurement and the corresponding probability distribution are completely fixed. The evolution of the wave function follows the Schrödinger equation.

As an important branch of physics, solid state physics is closely related to quantum mechanics. Physics, as a basic discipline of natural science, reveals the most fundamental laws of nature, and quantum mechanics is one of the foundational theories of modern natural science and technology. Relating the basic properties of materials to the microstructure of solids rests on this fundamental research.

The connection between quantum mechanics and solid state physics

The foundations of quantum mechanics include the wave function and the Schrödinger equation, the stationary-state Schrödinger equation and one-dimensional stationary problems, mechanical quantities and the central force-field problem, the hydrogen atom, and approximate solutions of the Schrödinger equation. Solid state physics covers crystal structure, lattice vibrations, crystal binding and the electronic theory of solids.

The current situation of solid state physics and quantum mechanics

With the rapid iteration and development of computer technology, solid state physics today comprises a basic-theory part and a specialized part. The former mainly includes crystal structure and binding, lattice vibrations and thermodynamic properties, defects, band theory and the free-electron theory, while the latter covers the latest frontiers of semiconductors, superconductors, amorphous solids and solid-state magnetism [6]. Much of this material overlaps with materials science. However, the links back to the underlying quantum mechanics are often weak, which leads to gaps in the understanding of solid state physics; the shared content, especially crystal structure and binding, should therefore be strengthened. In addition, current presentations of solid state physics often pursue theoretical ideas while giving too little attention to the mathematical derivations, which limits facility with quantum-mechanical methods. Strengthening the connection between the two subjects requires combining them further with frontier applications in condensed matter physics and materials science, so that their correlation and practical application become closer.
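As a concrete illustration of the computer-based connection described above (a minimal sketch added here, not taken from the paper), the one-dimensional stationary Schrödinger equation −(ħ²/2m)ψ″ + V(x)ψ = Eψ can be discretized with finite differences and solved as a tridiagonal eigenproblem; the harmonic potential and the units (ħ = m = 1) are example choices.

```python
# Minimal finite-difference solver for the 1D stationary Schrodinger equation.
# Units with hbar = m = 1; the harmonic potential is an example choice.
import numpy as np
from scipy.linalg import eigh_tridiagonal

n, L = 2000, 20.0                 # grid points and box size
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

V = 0.5 * x**2                    # example potential: harmonic oscillator

# -(1/2) d^2/dx^2 + V discretized with central differences gives a
# tridiagonal Hamiltonian: 1/dx^2 + V on the diagonal, -1/(2 dx^2) off it.
diag = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)

energies, states = eigh_tridiagonal(diag, off, select="i", select_range=(0, 3))
print(energies)  # expect roughly 0.5, 1.5, 2.5, 3.5 (E_n = n + 1/2)
```

The same discretize-and-diagonalize pattern underlies band-structure and lattice-vibration calculations in solid state physics, which is the computational link the paper emphasizes.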
Connection between solid state physics and quantum mechanics based on computer

As the application of computer technology in solid state physics and quantum mechanics deepens, the two fields have become ever more closely linked in materials science and engineering. In practical applications of condensed matter physics, in superconducting materials, semiconductors and related engineering fields, the degree of correlation and research integration between the two plays an important role in advancing those fields and disciplines. Combining quantum mechanics with solid state physics not only reveals the laws of physics more clearly but also promotes the development of modern natural science and technology. In addition, solid state physics is the basis on which quantum mechanics studies the microstructure and macroscopic properties of crystalline materials, and their combination promotes further improvement of the structure and properties of solid materials.

Conclusion

In summary, the typical applications of quantum mechanics in physics include, but are not limited to, thermoelectric materials, the atomic clocks used for precise timing in satellite navigation systems, and quantum communication and encryption, all closely related to daily life. The development of quantum mechanics is in turn inseparable from solid state physics as a support, which lays the foundation for its further development. It can be seen that solid state physics is closely related to quantum mechanics. Through the analysis of the key contents of quantum mechanics and solid state physics and of the relationship between them, this paper points out that, against the background of computer technology, the two disciplines promote each other and should be developed in closer combination.
Antimicrobial Susceptibility of Vibrio vulnificus and Vibrio parahaemolyticus Recovered from Recreational and Commercial Areas of Chesapeake Bay and Maryland Coastal Bays

Vibrio vulnificus and V. parahaemolyticus in the estuarine-marine environment are of human health significance and may be increasing in pathogenicity and abundance. Vibrio illness originating from dermal contact with Vibrio-laden waters or through ingestion of seafood originating from such waters can cause deleterious health effects, particularly if the strains involved are resistant to clinically important antibiotics. The purpose of this study was to evaluate antimicrobial susceptibility among these pathogens. Surface-water samples were collected from three sites of recreational and commercial importance from July to September 2009. Samples were plated onto species-specific media and resulting V. vulnificus and V. parahaemolyticus strains were confirmed using polymerase chain reaction assays and tested for antimicrobial susceptibility using the Sensititre® microbroth dilution system. Descriptive statistics, Friedman two-way Analysis of Variance (ANOVA) and Kruskal-Wallis one-way ANOVA were used to analyze the data. Vibrio vulnificus (n = 120) and V. parahaemolyticus (n = 77) were isolated from all sampling sites. Most isolates were susceptible to antibiotics recommended for treating Vibrio infections, although the majority of isolates expressed intermediate resistance to chloramphenicol (78% of V. vulnificus, 96% of V. parahaemolyticus). Vibrio parahaemolyticus also demonstrated resistance to penicillin (68%). Sampling location or month did not significantly impact V. parahaemolyticus resistance patterns, but V. vulnificus isolates from St. Martin's River had lower overall intermediate resistance than that of the other two sampling sites during the month of July (p = 0.0166). Antibiotics recommended to treat adult Vibrio infections were effective in suppressing bacterial growth, while some antibiotics recommended for pediatric treatment were not effective against some of the recovered isolates. To our knowledge, these are the first antimicrobial susceptibility data of V. vulnificus and V. parahaemolyticus recovered from the Chesapeake Bay. These data can serve as a baseline against which future studies can be compared to evaluate whether susceptibilities change over time.

Introduction

Bacterial antimicrobial resistance is a critical public health issue of increasing importance for those who recreate and work in coastal regions. Pathogenic bacteria and antimicrobial resistance genes are often released with wastewater discharges into aquatic environments [1]. Naturally occurring bacteria produce antibiotics in the environment for signaling and regulatory purposes in microbial communities [2]. Bacteria protect themselves from the toxicity of these antibiotics by acquiring and expressing antibiotic resistance genes [3]. As a result, naturally occurring aquatic bacteria are capable of serving as reservoirs of resistance genes, and those genes, coupled with the introduction and accumulation of antimicrobial agents, detergents, disinfectants, and residues from industrial processes, may play an important role in the evolution and spread of antibiotic resistance in aquatic environments [1]. Vibrio bacteria in the estuarine-marine environment are of particular concern for human health and may be increasing in pathogenicity and abundance [4]. Cases of vibriosis are rising in the United States, with Vibrio vulnificus and V.
parahaemolyticus being two of the three most commonly reported sources of Vibrio infection [5]. V. parahaemolyticus is implicated as the primary source of escalation in vibriosis incidence [5], and highly pathogenic serotypes of this species are emerging on a global scale, including on the Atlantic coasts of the United States and Spain [6]. It is estimated that only 1 in 142 cases of V. parahaemolyticus illness is detected [7]. Calculations based upon probable incidence of vibriosis have estimated that V. vulnificus and V. parahaemolyticus are the first and third most costly marine-borne pathogens, costing $233 and $20 million, respectively [8]. Antimicrobial susceptibility patterns among Vibrio spp. inhabiting estuarine-marine environments may have implications for recreational and commercial users of these environments, and for those who consume Vibrio-contaminated seafood. Previous studies exploring antimicrobial susceptibility of Vibrio vulnificus and V. parahaemolyticus have been conducted in South Carolina, the United States Gulf region and Italy [9,10,11,12]. However, to our knowledge, no similar studies have been completed in the Chesapeake Bay, the largest estuary in the U.S., which lies in a watershed where 17 million people work, live and play. The work of our group and others has demonstrated that concentrations of V. vulnificus and V. parahaemolyticus in the Chesapeake Bay are high enough to result in possible illnesses among exposed recreationists, particularly among those who are immunocompromised [13,14,15,16,17,18]. Moreover, current models predict that total tissue loading of shellfish and finfish with V. vulnificus and V. parahaemolyticus is associated not only with surface water concentrations but also with the risk of illness for those consuming contaminated seafood products [19,20,21]. Given these data, along with the knowledge that environmental conditions may be increasingly more favorable for Vibrio growth [22], it is not surprising that rates of Vibrio infections are increasing in Maryland and other U.S. states [23]. In this context, it is critical to gain a better understanding of the antimicrobial susceptibility patterns of V. vulnificus and V. parahaemolyticus originating from estuarine-marine environments. This study evaluated antimicrobial susceptibility patterns of V. vulnificus and V. parahaemolyticus recovered from the Chesapeake Bay and Maryland Coastal Bays. Our findings provide the first antimicrobial susceptibility data among Vibrio bacteria isolated from this region. These data will be helpful in short- and long-term predictions of human health risks associated with exposures to Vibrio populations in the Chesapeake Bay area.

Sampling sites

Three sampling sites were selected based on their importance for human use in the Chesapeake Bay and Maryland Coastal Bays region. Two sites, Sandy Point State Park and St. Martin's River, were characterized by frequent recreational use, and one site, the Pocomoke Sound, was characterized by heavy commercial fishing use (Figure 1) [24]. The Pocomoke Sound is a major embayment of the Chesapeake Bay's Eastern Shore. It is influenced by agricultural practices, including high-density concentrated poultry feeding operations, and is a popular destination for commercial and recreational fishing. No specific permissions were required for each sampling location, as they are public access waterways, and no endangered or protected species were involved in sampling activities.
Sample collection

Sampling dates were chosen to coincide with times of high recreational and/or commercial use. Surface water samples (n = 9) were collected during Summer 2009, once a month, at each site, for three consecutive months (July, August, September), within two hours of high tide and on approximately the same date each month. Water samples were collected just below the surface in sterile wide-mouth polypropylene 1 L environmental sampling bottles (Nalgene Thermo Scientific, Waltham, MA). Bottles were rinsed three times with surface water and then dipped below the surface for a final 1 L collection volume. Samples collected for Vibrio culture were kept in insulated coolers, while water samples for enterococci culture were stored in an insulated container on ice (4°C) upon collection, returned to the laboratory within four hours and processed immediately upon arrival.

Physical and chemical water quality measurements

Water-column depth and surface-water salinity, temperature, dissolved oxygen, conductivity, and pH were measured on every sampling date and at each location with a YSI 556 Multi-probe system (YSI Incorporated, Yellow Springs, OH) in accordance with the manufacturer's instructions.

Fecal indicator measurements

Fecal indicator measurements were conducted following the standard methods as described for enterococci in Standard Methods for the Examination of Water and Wastewater [25]. Briefly, surface-water samples were filtered in triplicate onto sterile 0.45 µm pore size, 47 mm diameter, nitrocellulose Fisherbrand water-testing membrane filters (Fisher Scientific, Pittsburgh, PA), and plated onto Difco mEnterococcus agar (BD, Franklin Lakes, NJ). According to the manufacturer's instructions, plates were incubated for 48 hours at 35°C. All light to dark red colonies were recorded as presumptive enterococci.

Vibrio isolation

Surface water samples (100 mL) were spread plated in triplicate onto Chromagar Vibrio media (DRG International, Mountainside, NJ) and incubated for 24 hours at 37°C. After incubation, each plate was observed for characteristically colored bacterial colonies associated with V. vulnificus (turquoise) or V. parahaemolyticus (mauve). As V. vulnificus and V. cholerae both appear as turquoise colonies on Chromagar Vibrio media, all turquoise colonies were replated onto cellobiose-colistin (CC) agar (FDA 2004) to confirm V. vulnificus species. The CC agar cultures were incubated for 24 hours at 37°C and yellow-colored colonies were considered presumptive V. vulnificus. Tryptic soy broth (TSB), supplemented with 5% sodium chloride, was then inoculated with individual presumptive colonies of V. vulnificus or V. parahaemolyticus, incubated at 37°C for 24 hours and stored with 30% glycerol at −80°C.

Vibrio species confirmation

Vibrio DNA template was obtained by producing crude cell lysates by boiling 1 mL aliquots of TSB cultures in 2 mL microcentrifuge tubes at 100°C for 10 minutes. A Bio-Rad CFX96 Touch Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA) was used to confirm the species of isolates with primers designed to detect Vibrio vulnificus [26] or V. parahaemolyticus [27]. Following initial confirmation, samples testing positive for either species were subjected to further testing for virulence genes (V. vulnificus: virulence correlated gene clinical variant (vcgC) [28]; V. parahaemolyticus: thermostable direct hemolysin (tdh) and thermostable related hemolysin (trh) genes [27]) using real-time PCR.
Real-time PCR was performed using 1X PCR Buffer (Qiagen, Valencia, CA), 0.2 mM dNTP solution (Qiagen), 1X Q solution (Qiagen), 2.25 U TopTaq DNA polymerase (Qiagen), 75 nM internal control primers (each), 150 nM internal control probe, 2 µL internal control DNA, target primer and probe concentrations as detailed in Table 1, and 3 µL DNA template per reaction, with the exception of the Vv vcgC assay, where 5 µL of DNA template was used and the internal control components were absent. DNase/RNase-free water was added in a quantity sufficient for a 25 µL total reaction volume. Two-stage qPCR cycling parameters are presented in Table 1. A linear synthetic exogenous DNA internal control, including a primer set, probe and internal control DNA, was incorporated simultaneously into each assay (excluding the assays for the V. vulnificus vcgC target) to test for the presence and influence of inhibitors (Nordstrom et al., 2007). The following positive controls were used in each qPCR: Vibrio parahaemolyticus USFDA TX2103 and Vibrio vulnificus ATCC 27562. A randomly chosen subset of isolates was taxonomically identified with 16S rRNA gene sequences. DNA extracted from cultures was PCR-amplified with bacteria-specific primers 27f (5′-AGAGTTTGATCCTGGCTCAG-3′) and 907r (5′-CCGTCAATTCCTTTRAGTTT-3′) using the following conditions: 94°C for 2 min, followed by 25 cycles of 55°C for 30 s, 72°C for 30 s, and 94°C for 2 min, followed by 72°C for 5 min. The PCR products were sequenced bi-directionally using the same primers on an ABI 3730 XL Genetic Analyzer in the BioAnalytical Services Laboratory at the University of Maryland Center for Environmental Science. Paired reads for each organism were analyzed and assembled with Phred and Phrap [29,30], manually edited with Consed [31], and aligned and analyzed with the ARB sequence alignment program [32]. DNA sequences were deposited.

Clinical isolates

Clinical isolates of V. parahaemolyticus (n = 8) were graciously provided by the State of Maryland's Department of Health and Mental Hygiene for comparison purposes with our environmental isolates. Sample type and source of infection are presented in Table 2.

Antimicrobial susceptibility testing

Antimicrobial susceptibility testing was performed using the Sensititre® microbroth dilution system (Trek Diagnostic Systems, Westlake, Ohio) in accordance with the manufacturer's instructions on all PCR-confirmed V. vulnificus (n = 120; 3 vcgC+) and V. parahaemolyticus (n = 77; 1 tdh+, 1 trh+). Cultures were grown overnight on tryptic soy agar (TSA) + 2.5% NaCl plates at 37°C. Vibrio cultures were transferred to sterile demineralized 2.5% saline solution to achieve a 0.5 McFarland standard. Escherichia coli ATCC 25922 and E. coli ATCC 35218 were used as quality control strains. MICs were recorded as the lowest concentration of an antimicrobial that completely inhibited bacterial growth [33]. Resistance breakpoints published by the Clinical and Laboratory Standards Institute were used [33]. Breakpoints not available from CLSI (streptomycin, apramycin, penicillin) were derived from ranges used in similar studies [9,10,34,35]. Multidrug resistance (MDR) was defined as resistance to two or more antibiotics.

Statistical analyses

Descriptive statistics were used to compare the percentage of isolates demonstrating intermediate resistance or resistance to tested antibiotics at each sampling site and sampled month, as well as the average number of antibiotics that V. vulnificus and V. parahaemolyticus isolates were resistant to at each sampling location and during each month. Nonparametric Friedman two-way Analysis of Variance (ANOVA) was used to determine effects related to sampling site and month sampled. For samples for which month influenced percent resistance, stratified Kruskal-Wallis one-way ANOVA and pairwise post-hoc tests were conducted for each month separately to evaluate differences in the occurrence of antimicrobial susceptibility between strains that carried or did not carry virulence genes. All statistical analyses were performed using StataIC 12 (StataCorp LP, College Station, TX) and p-values of ≤0.05 were defined as statistically significant.
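As a minimal sketch of how the nonparametric tests named above can be run (added here for illustration; the study used Stata, and the numbers below are invented placeholders, not the study's data), SciPy provides both tests directly:

```python
# Sketch of the study's nonparametric tests in SciPy; all values are
# placeholder examples, not data from this paper.
from scipy.stats import friedmanchisquare, kruskal

# Percent resistance per sampling site, one value per month (Jul, Aug, Sep).
sandy_point = [12.0, 15.0, 11.0]
st_martins  = [22.0, 14.0, 12.0]
pocomoke    = [13.0, 16.0, 10.0]

# Friedman two-way ANOVA by ranks: site effect with month as the block.
# (The chi-squared p-value is only well approximated for larger block counts.)
stat, p = friedmanchisquare(sandy_point, st_martins, pocomoke)
print(f"Friedman: chi2={stat:.3f}, p={p:.4f}")

# Stratified follow-up within one month (e.g., July), comparing per-isolate
# resistance counts across the three sites with Kruskal-Wallis.
july_sandy, july_stm, july_poc = [0, 1, 0, 2], [1, 2, 2, 3], [0, 1, 1, 0]
stat, p = kruskal(july_sandy, july_stm, july_poc)
print(f"Kruskal-Wallis (July): H={stat:.3f}, p={p:.4f}")
```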
Physical, chemical and bacterial water quality

Water temperature, pH, and dissolved oxygen (DO) were uniform across the three sampling locations (Table 3). Average salinity (± standard deviation) in St. Martin's River (24.5 ppt (±1.07)) was approximately double that of the Pocomoke Sound (10.5 ppt (±0.54)) and Sandy Point State Park (9.4 ppt (±0.72)) sampling sites. Water depth at the Pocomoke Sound was approximately double that of Sandy Point State Park and three- to four-fold that of St. Martin's River. Enterococci counts (colony forming units (CFU) per 100 ml) were uniformly low at Sandy Point during each sampling time point and below the single-sample regulatory closure level of 104 CFU per 100 ml [36]. On one sampling occasion in St. Martin's River (August), enterococci counts exceeded closure levels (Table 3). Presumptive Vibrio colonies isolated during this study indicated that V. vulnificus and V. parahaemolyticus were present in all tested water samples (Table 3). One hundred twenty V. vulnificus and 77 V. parahaemolyticus isolates were purified, confirmed via PCR and tested for antimicrobial susceptibility.

Vibrio species and virulence identification

Sequence analysis (16S rRNA) of a selected subset of tested Vibrio isolates confirmed all isolates (Figure 2), except for two isolates with sequences similar to Photobacterium damselae. Virulence testing of all isolates identified three V. vulnificus isolates positive for vcgC, one V. parahaemolyticus isolate positive for tdh, and one V. parahaemolyticus isolate positive for trh.

Prevalence of antimicrobial resistance in V. vulnificus

All tested V. vulnificus isolates (n = 120) were susceptible to 14 of the 26 antibiotics tested, including the following drug classes that are recommended by the Centers for Disease Control and Prevention (CDC) for the treatment of V. vulnificus infections: tetracyclines, quinolones, and folate pathway inhibitors (Table 4, Figure 3). With regard to CDC-recommended antimicrobial agents, 2% of the tested isolates exhibited intermediate resistance against ceftazidime, a third-generation cephalosporin. Within the aminoglycoside class of antibiotics, isolates exhibited resistance to apramycin (1%) and streptomycin (4%). Intermediate resistance was expressed against amikacin (1%), apramycin (5%) and streptomycin (8%). Gentamicin was the only tested aminoglycoside to which all V. vulnificus isolates were completely susceptible.

Prevalence of antimicrobial resistance in V. parahaemolyticus

All tested V. parahaemolyticus isolates were susceptible to 11 of the 26 tested antibiotics and four (carbapenems, tetracyclines, quinolones and folate pathway inhibitors) of the eight tested antimicrobial classes (Table 4).
Conversely, 96% of isolates were characterized by intermediate resistance to chloramphenicol, followed by ampicillin (25%), cephalothin (17%), penicillin (16%) and cefuroxime sodium (14%). A high percentage of resistance was observed against some of the penicillins (penicillin (68%); ampicillin (53%)), while a low percentage of resistance was seen against piperacillin (4%) and streptomycin (4%).

Antimicrobial resistance in tdh/trh+ V. parahaemolyticus

One V. parahaemolyticus isolate was tdh+ and one isolate was trh+. The trh+ V. parahaemolyticus isolate was resistant to ampicillin and penicillin and expressed intermediate resistance to chloramphenicol. The tdh+ V. parahaemolyticus isolate was resistant to ampicillin, ampicillin-sulbactam, penicillin, piperacillin-tazobactam, and amoxicillin-clavulanic acid and expressed intermediate resistance to chloramphenicol.

Impact of sampling site and month on antimicrobial resistance

Friedman two-way ANOVA: The month when sampling occurred significantly influenced rates of antibiotic resistance and intermediate resistance among V. parahaemolyticus (p<0.0001, p<0.0001, respectively), as well as resistance and intermediate resistance among V. vulnificus (p = 0.0008, p = 0.0098, respectively). After adjusting for the repeated measures over time (month), sampling site also significantly influenced resistance and intermediate resistance among V. vulnificus (p = 0.0321, p = 0.0029, respectively), but not among V. parahaemolyticus (p = 0.6133, p = 0.7660, respectively).

Kruskal-Wallis one-way ANOVA: As there was a significant month effect in the Friedman two-way ANOVA for both V. vulnificus and V. parahaemolyticus isolates expressing antibiotic resistance and intermediate resistance, stratified Kruskal-Wallis one-way ANOVA and pairwise post-hoc tests were conducted on the sampling site differences for each month separately. Results showed no significant difference between sampling sites by month for V. vulnificus or V. parahaemolyticus expressing resistance.

Clinical V. parahaemolyticus

Clinical isolates tested displayed comparable resistance profiles to environmental isolates tested (Table 5).

Overall resistance profiles

The percentage of isolate resistance, defined as resistance to any one antibiotic (AR) or resistance to two or more classes of antibiotics, is depicted in Table 6. Resistance profiles were comparable for isolates with and without detected virulence genes (Table 6A), isolates from varying sampling locations (Table 6B), and isolates recovered in different sampling months (Table 6C) for both V. parahaemolyticus and V. vulnificus.

Treatability of Chesapeake Bay related Vibrio illness

Vibrio vulnificus and V. parahaemolyticus are the causative agents of wound infections, primary septicemia, and gastroenteritis related to seafood and seawater exposure [37]. While antibiotic treatment is not typically necessary for gastroenteritis, it is required for wound infection and primary septicemia caused by both Vibrio species analyzed in this study. Most isolates tested in this study were susceptible to the antimicrobial agents recommended by the CDC for clinical treatment. Treatment recommendations for Vibrio infections include tetracyclines (doxycycline, tetracycline), fluoroquinolones (ciprofloxacin, levofloxacin), third-generation cephalosporins (cefotaxime, ceftazidime, ceftriaxone), aminoglycosides (amikacin, apramycin, gentamicin, streptomycin) and folate pathway inhibitors (trimethoprim-sulfamethoxazole) [38,39].
The CDC recommends a treatment course of doxycycline (100 mg PO/IV twice a day for 7-14 days) and a third-generation cephalosporin (e.g., ceftazidime 1-2 g IV/IM every eight hours), although they state that single-agent regimens employing a fluoroquinolone have been reported to be at least as effective in an animal model as combination drug regimens with doxycycline and a cephalosporin [39]. All tested V. vulnificus isolates were susceptible to third- and fourth-generation cephalosporins, although two V. parahaemolyticus isolates (3%) demonstrated intermediate resistance to cefotaxime, a third-generation cephalosporin, and two isolates demonstrated a degree of resistance to cefepime, a fourth-generation cephalosporin. While the percentage of isolates expressing intermediate resistance and resistance to the newer generation cephalosporins was relatively low, these antibiotics are considered to be some of the best defenses against the severe infections that these organisms can elicit, so even a small percentage of resistant isolates could be cause for concern [39]. Due to the contraindication of doxycycline and fluoroquinolones in children, a combination of trimethoprim-sulfamethoxazole and an aminoglycoside antibiotic is recommended [39]. Given that three of the four tested aminoglycosides (amikacin, apramycin, streptomycin) were associated with intermediate resistance or resistance (e.g., streptomycin intermediate resistance and resistance in V. vulnificus: 17%, 7%, respectively; V. parahaemolyticus: 8%, 4%, respectively) in a subset of isolates, this may be a resistance pattern of concern. Conversely, for the aminoglycoside gentamicin, all tested isolates were fully susceptible. Based on these data, physicians in the Bay region may consider focusing on gentamicin as the aminoglycoside of choice in multi-drug treatment regimens for Vibrio infections contracted by children recreating in the Chesapeake Bay.

Comparison to other studies of V. vulnificus and V. parahaemolyticus antimicrobial susceptibility

The percent resistance among Vibrios in this study was comparable to a similar study conducted on Vibrios isolated from Gulf Coast oysters in Louisiana [11]. Han et al. (2007) also found higher levels of resistance among V. parahaemolyticus compared to V. vulnificus isolates. In addition, ampicillin was the only tested antimicrobial in the Gulf Coast study to which a large percentage of V. parahaemolyticus isolates demonstrated intermediate resistance or resistance (~81% of all tested isolates). This trend was seen as early as the 1970s in a study that tested resistance of V. parahaemolyticus to ampicillin and β-lactamase inhibitors [40], where over 90% of isolates were found to be resistant to ampicillin. In contrast to the present study, Han et al. (2007) found no resistance in either Vibrio species to chloramphenicol, cefotaxime, or ceftazidime, while we observed intermediate resistance against these three antimicrobial agents among a subset of V. vulnificus (78%, 0%, 2%, respectively) and V. parahaemolyticus (96%, 3%, 0%, respectively). Our findings are also in partial agreement with two large studies of V. vulnificus and V. parahaemolyticus isolates originating from the Georgia and South Carolina coastline of the United States [10]. While our Chesapeake Bay isolates did not show the same high prevalence of antimicrobial resistance, the antimicrobial agents to which isolates displayed resistance were similar (i.e., amoxicillin, apramycin, penicillin and streptomycin for V.
parahaemolyticus). V. vulnificus isolates demonstrated similar resistance profiles, particularly with regard to percent intermediate resistance and resistance to the penicillin class and cefoxitin. Baker-Austin et al. (2009) reported higher percent intermediate resistance and resistance among V. vulnificus against apramycin and streptomycin compared to that of the isolates reported in our study. In addition, key antimicrobials to which V. parahaemolyticus isolates from Georgia/South Carolina displayed susceptibility were also found to be susceptible in our study (i.e., ceftriaxone, ciprofloxacin, imipenem, ofloxacin, meropenem, tetracycline), except in the case of …

(Table 4. Antimicrobial resistance patterns among environmental Vibrio isolates.)

… vulnificus isolate to be completely susceptible to all antimicrobials tested, while the present study found 15 (12.5%) isolates to be susceptible to all tested antimicrobials. A recent study of antimicrobial susceptibility in toxigenic and non-toxigenic V. parahaemolyticus isolates from shellfish and clinical samples in Italy [12] produced interesting comparisons to our findings. Similar to other studies, no intermediate resistance or resistance to chloramphenicol was found in Italian V. parahaemolyticus samples, whereas our study found high levels of intermediate resistance to this antibiotic. The Italian study found isolates to be 100% (n = 170) resistant to ampicillin, while our study detected 53% (n = 40) resistance and 25% (n = 25) intermediate resistance. Resistance to cefotaxime was found in approximately 20% (n = 21) of Italian samples, compared to 0% resistance and 4% (n = 3) intermediate resistance in this study. In contrast to our study, which detected full susceptibility to ciprofloxacin, the Italian study found resistance (9%, n = 10), particularly in clinical samples. Comparable susceptibility patterns are reported in these studies for trimethoprim-sulfamethoxazole, doxycycline and tetracycline, as all V. parahaemolyticus tested in this study were fully susceptible to these three antibiotics, while Italian isolates displayed intermediate resistance for trimethoprim-sulfamethoxazole (4%, n = 4) and tetracycline (11%, n = 12).

Sampling sites and influences of pollution

Each sampling site included in this study has a history of water pollution. Sandy Point State Park has historically been a site of low bacteriological water quality and is adjacent to the Magothy River, a site where there have been numerous wastewater treatment overflows. The Pocomoke River is located adjacent to many agricultural operations, including poultry concentrated animal feeding operations (CAFOs), which may increase the introduction of antimicrobial residues into the waterway due to runoff of fecal matter contaminated with antimicrobials used in poultry production [41]. Finally, St. Martin's River is adjacent to many homes on septic systems, notorious for leakage [42]. While each of the sampling sites has a history of contamination that may increase the incidence of antimicrobial residues and associated changes in resident bacteria in the estuarine environment, this study only detected a small difference in levels of antibiotic resistance between sites. Specifically, in the month of July, V. vulnificus recovered from St. Martin's River expressed higher percentages of intermediate resistance compared to isolates recovered from Sandy Point. However, it should be noted that this study was limited by the inability to culture V. vulnificus and V.
parahaemolyticus from areas presumed to be devoid of contamination by human or animal sewage or industrial pollution. Due to this limitation, resistance levels detected at each of the three studied sites could not be compared to those of a local "pristine" site, and this likely reduced our ability to differentiate pollution-related resistance from naturally occurring resistance among tested isolates.

Antimicrobial susceptibility as compared to enterococci concentrations

In this study, we also tested for enterococci as an indicator of fecal contamination in order to specifically evaluate whether areas that were characterized by higher levels of possible fecal contamination were also marked by higher levels of antibiotic resistance. We observed a range of enterococci concentrations over the course of the study, although most sampling sites were within the range of acceptable water quality for recreation on each sampling date. Interestingly, concentrations of enterococci were not correlated with percentages of antibiotic resistance in the studied environments. During the one instance that the geometric mean of enterococci was higher than regulation limits, there was no discernible difference in levels of resistance among isolates originating from that site (St. Martin's River, August). This is counter to previous observations where percent antimicrobial resistance was elevated at sites contaminated with higher levels of enterococci that may have originated from fecal waste of humans [43] and animals [44].

Conclusions

This study represents the first investigation of antimicrobial susceptibility of Vibrio species recovered from the Chesapeake Bay and provides a baseline against which future studies can be compared to determine whether susceptibilities change over time. Isolates tested in this study displayed high intermediate resistance to chloramphenicol when compared to similar studies. Isolates' intermediate resistance and resistance to some aminoglycosides should be noted because these antibiotics are used to treat pediatric Vibrio illnesses originating from the Chesapeake waters or seafood. Low-level intermediate resistance and resistance to third- and fourth-generation cephalosporins may also limit treatment effectiveness and should be monitored. As most of the antimicrobial agents recommended for treatment of Vibrio illnesses by the CDC were fully effective against V. vulnificus and V. parahaemolyticus isolated from the Chesapeake Bay, treating infections contracted from the Bay, at least in adults, is not likely to be problematic. Based on our data, treatment of pediatric illnesses may benefit from the use of trimethoprim-sulfamethoxazole and the aminoglycoside gentamicin, which was the only aminoglycoside that was 100% effective against Vibrios recovered in this study.
β‐Lactam allergy testing and delabeling—Experiences and lessons from Singapore

Abstract

Background: β‐Lactam allergy is over‐reported and this leads to greater healthcare costs. Allergy testing has inherent risks, yet patients who test negative may continue avoiding β‐lactams.

Objective: To evaluate the safety and diagnostic value of β‐lactam allergy testing locally and the usage of antibiotics following negative testing.

Methods: We performed a retrospective medical record review and follow‐up survey of patients who underwent β‐lactam testing between 2010 and 2016 at the National Skin Centre, Singapore.

Results: We reviewed the records of 166 patients, with a total of 173 β‐lactam allergy labels. Eighty (46.2%) labels were to penicillin, 75 (43.1%) to amoxicillin/amoxicillin‐clavulanic acid, 11 (6.4%) to cephalexin, and 5 (2.9%) to others. Skin tests were performed in 142 patients and drug provocation tests (DPTs) in 141 patients. Eleven (6.6%) patients defaulted DPTs after skin testing. Out of 166 patients, 22 (13.3%) were proven allergic by either skin tests (16) or DPTs (6). Patients who tested positive had nonsevere reactions. Out of 155 patients who were conclusively evaluated, 133 (85.8%) were not allergic. Of these patients, 30 (22.6%) used the tested β‐lactam subsequently, with one reporting a mild reaction. Fifty‐one (38.3%) patients were uncontactable or uncertain if they had consumed a β‐lactam since testing negative. Fifty‐two (39.1%) patients had no re‐exposure (35 had no indication, 17 were fearful of reactions).

Conclusion: Drug allergy testing was safe and removed inappropriate labels.

Clinical Implication: Allergy testing is efficacious, but fears of subsequent rechallenge should be addressed to maximize the effectiveness of allergy delabeling.

| INTRODUCTION

β-Lactams are the most commonly used antibiotics in the world. 1 Penicillin allergy is reported in 10% to 20% of patients in clinical practice [2][3][4] but it has been shown that most of these patients do not have true allergies and are able to tolerate penicillins after thorough allergy evaluation. 5 Patients who are labeled penicillin-allergic may be prescribed less effective, more expensive or more toxic drugs, leading to increased healthcare costs and antimicrobial-resistant infections. 6 Certain populations, such as patients with malignancies, human immunodeficiency virus (HIV) infection and recurrent sinusitis or urinary tract infections, are more likely to require multiple courses of antibiotics and benefit from appropriate allergy labeling. 7 The importance of penicillin allergy delabeling has been recognized by antibiotic stewardship programs. 8,9 However, allergy testing is time-consuming and has inherent risks, as even skin tests may trigger anaphylaxis. 10 Furthermore, despite efforts to remove allergy labels, patients who test negative may not subsequently receive the antibiotics tested due to accidental relabeling 11 or patients' and physicians' perceptions, 12 which may result in the persistence of incorrect allergy labels. In this study, our primary aim was to determine the clinical value and safety of β-lactam allergy evaluation performed in a dermatology outpatient clinic. Our secondary aims were to evaluate patient usage of antibiotics following negative testing and to identify factors for nonusage.
| METHODS

We performed a 7-year retrospective medical record review of patients over the age of 16 years who underwent skin tests and/or drug provocation tests (DPTs) to any β-lactam antibiotics at the drug eruption clinic in the National Skin Centre, Singapore, from 1 January 2010 to 31 December 2016. This study was performed as an audit on safety and quality of patient care. Careful history taking and examination were performed in all patients. We corroborated patients' drug histories with electronic medical records when available and by contacting physicians involved in the care of the patients when necessary. We determined the types of cutaneous and systemic reactions, time to onset of reaction after drug consumption and comorbidities. Drug hypersensitivity reactions were considered "immediate" if onset of symptoms occurred within 1 to 6 hours of the last dose of drug, and "delayed" if they occurred after 6 hours. Patients with reactions strongly suggestive of anaphylaxis were not evaluated further in our clinic due to safety reasons and the centre's policy; these cases were referred to a general hospital with emergency or intensive care facilities. In patients with a history suggestive of delayed reactions, patch tests (PTs) with crushed commercial tablets in 30% white soft paraffin were performed as per guidelines 13 and readings done on days 2 and 4 according to International Contact Dermatitis Research Group criteria. In patients with an initial reaction of uncertain nature or suggestive of immediate hypersensitivity, skin prick tests (SPT) followed by intradermal tests (IDT) were performed in accordance with previous recommendations 14 with penicillin G, ampicillin, amoxicillin-clavulanic acid and the DAP Penicillin Test Kit (Diater; Madrid, Spain), which consisted of benzylpenicilloyl poly-L-lysine (PPL) and minor determinant mixture (MDM). PPL and MDM were replaced on 1 December 2011 by benzylpenicilloyl octa-L-lysine and the minor determinant (sodium benzylpenilloate), respectively. SPT/IDT with delayed reading at 24 to 48 hours were performed in some patients with an uncertain or delayed-type reaction. In patients labeled allergic to a cephalosporin, SPT/IDT to the above and the labeled cephalosporin (if available in intravenous form) was also done. If the cephalosporin did not exist in intravenous form (eg, cephalexin), only SPT to the pulverized commercial tablet was performed. Direct DPTs were performed in some cases if patients declined skin tests and their reactions were considered to be of low risk (ie, without signs of angioedema, anaphylaxis, pustulosis, mucositis, blisters, erosions, or painful skin lesions). Completion of DPT was necessary to conclude drug allergy evaluation. DPTs to the labeled β-lactam were performed in general accordance with guidelines of the European Network for Drug Allergy. 15 DPT was performed without blinding, either as a single therapeutic dose challenge or as a graded challenge given as one-quarter, one-half and then the full single therapeutic dose, with 60- to 75-minute intervals between doses. These dose-escalation protocols, which differ from guidelines, were used as we only evaluated patients with low-risk reactions; smaller starting doses such as 1% and 10%, which are more appropriate for patients with anaphylaxis, were hence not employed. Patients were observed in clinic for 120 to 150 minutes after the final dose. Extended DPT with normal therapeutic doses was performed if the drug was strongly suspected in the initial delayed-type reaction.
The duration of the extended challenge was not standardized and could be up to the number of days from initiation of the antibiotic to the index reaction. We considered a DPT to be positive only if objective signs were elicited within a reasonable time frame. SPT/IDT-positive patients did not proceed to DPT but were given the option of evaluation for selective β-lactam hypersensitivity if SPT/IDT was positive to amino-penicillins. Conclusion of drug allergy evaluation included counseling of patients regarding antibiotic tolerance, modification of antibiotic allergy labels in patients' records in the nation-wide electronic allergy notification system and a letter to patients' managing physicians about their changed allergy status. Figure 1 shows how the patients in our study were evaluated.

Figure 1. Flowchart of patients undergoing skin tests and oral provocation tests and outcomes. (a) 120 SPT/IDT, 15 SPT/IDT with delayed reading, eight PT, five SPT/IDT and PT. (b) Six IDT amoxicillin-clavulanic acid only+ (including one on delayed reading); four IDT amoxicillin-clavulanic acid+ and ampicillin+ (including one on delayed reading); one IDT amoxicillin-clavulanic acid+ and penicillin G+; one IDT amoxicillin-clavulanic acid+, penicillin G+ and ampicillin+; two IDT ampicillin+ (including one on delayed reading); one IDT penicillin G+; one IDT penicillin G+ and MDM+. (c) 137 graded challenges, five single-dose challenges; 19 had extended DPTs (two up to 7 days' duration). (d) Includes the two patients with positive IDT ampicillin (DPT penicillin V negative). The number of tests exceeds the number of patients as some patients had tests to multiple β-lactams. IDT, intradermal test; MDM, minor determinant mixture; SPT, skin prick test.

We performed a follow-up evaluation of β-lactam antibiotic use in patients who were proven nonallergic. This was performed by telephone call and/or through electronic medical records if available. If β-lactams had been used, we attempted to determine if any adverse reaction had occurred. If β-lactams had not been used, we asked for reasons for nonuse, for example, patients' or physicians' concerns or absence of indication.

| Baseline demographics

Table 1 summarizes the characteristics of the 166 patients included. The median age of our patients was 42 (range, 14-76) years. There were 96 females (57.8%) and 70 males (42.2%). The ethnic distribution was as follows: 141 Chinese (84.9%), 10 Malay (6.0%), 5 Indian (3.0%), 4 Caucasian (2.4%), 2 Eurasian (1.2%) patients, and 4 (2.4%) others. There were 44 patients (26.5%) who had a history of recurrent infections (eg, urinary tract infections or sinusitis) or conditions which predisposed them to future infection (eg, bronchiectasis, diabetes mellitus, HIV infection, malignancies, valvular heart disease, or eczema).

Eleven patients who had negative skin tests did not return for DPT and their evaluations were considered inconclusive. Out of the 115 patients with negative skin tests who proceeded to DPT, 110 patients tested negative. The negative predictive value for skin testing was 95.6%. There were 141 patients who underwent 142 DPTs. This group comprised 24 patients who proceeded to 25 DPTs directly, including one patient who underwent two DPTs (Table 2). Nineteen patients (13.4%) underwent extended DPTs lasting 2 to 7 days without reaction. In summary, we proved that 133 patients were not allergic to β-lactams, which constituted 85.8% of the 155 patients who were conclusively evaluated.
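As a quick arithmetic check (added here, not in the original), the quoted negative predictive value follows directly from the counts above: of 115 skin-test-negative patients who proceeded to DPT, 110 also tested negative on DPT, so

$$\mathrm{NPV} = \frac{TN}{TN + FN} = \frac{110}{115} \approx 0.956,$$

that is, about 95.6%.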
Twenty-two (13.3%) patients had confirmed allergy to β-lactams: 16 had immediate hypersensitivity (13 SPT/IDT+, three DPT+) and six had delayed hypersensitivity (three SPT/IDT with delayed reading+, three DPT+). Two patients with delayed hypersensitivity were tolerant of β-lactams other than amino-penicillins. Eleven (6.6%) patients did not complete their evaluation. The negative predictive value of skin testing was 95.9%. Reactivity of skin tests did not seem to be related to the time elapsed since the initial reaction. Out of 16 positive skin tests, 10 (seven immediate reactions and three IDT with delayed reading) were performed within 12 months of the initial reaction, while six (all immediate reactions) were performed after 3 years had passed.

| Safety

None of the reactions were life-threatening or severe enough to require admission or subcutaneous adrenaline. Three positive reactions to DPT occurred after or at the end of the period of clinic observation. One patient who reported mild urticaria as the initial reaction declined skin tests and underwent amoxicillin DPT directly. He developed mild urticaria within 2 hours of starting DPT at a cumulative dose of 750 mg amoxicillin. Rescue treatment with cetirizine 10 mg was given with good response. Two patients labeled allergic to cephalexin, who had negative SPT to cephalexin and negative SPT/IDT to penicillins, developed generalized urticaria within 2.5 and 5 hours respectively after starting graded DPT. The first patient reacted after a cumulative dose of 375 mg, while the second patient reacted after a cumulative dose of 875 mg. Both patients remained hemodynamically stable. One patient required intramuscular diphenhydramine and systemic steroids and further observation in the emergency department. One patient who reported a vague initial reaction with possible lip swelling after taking amoxicillin-clavulanic acid underwent evaluation as for an immediate reaction with SPT/IDT and subsequent DPT, but was eventually diagnosed with amoxicillin-clavulanic acid-induced FDE after reproduction of the rash with DPT.

| β-Lactam use after allergy evaluation

Out of 135 patients who had β-lactam allergy label modification, 45 (33.3%) were not contactable and had no record of re-exposure according to available records in public medical institutions. Thirty (22.2%) patients had taken a β-lactam antibiotic in the post-testing period, with only one developing a reportedly "mild" reaction. Fifty-two patients had not used a β-lactam: 35 (25.9%) patients had no indication, while 17 (12.6%) avoided β-lactams due to concerns of allergy (two on the part of the attending physician, 12 of the patient, and three of both the patient and physician).

| DISCUSSION

Inappropriate allergy labels result in increased healthcare costs and adverse events. 16,17 Our results show that more than 85% of penicillin allergy labels in our patient group were incorrect. This finding is consistent with those of other reports 2,5 and supports the need for allergy evaluation services in our healthcare system. Skin testing in our population had a high NPV of 95.9%, which is consistent with the reported literature 18,19 and reassuring for patients who are afraid to proceed to direct DPT. Fourteen (87.5%) out of 16 positive skin reactions were to aminopenicillins. MDM testing was positive in only one patient, who also had a positive reaction to penicillin G. None of our patients tested positive to PPL.
These results may demonstrate the decreasing importance of PPL and MDM testing, reflecting the dominant use of aminopenicillins and cephalosporins in current clinical practice. Bourke et al. similarly reported that the use of PPL and MDM did not improve the NPV of skin testing, 20 and further studies are needed to evaluate this. In our cohort, these findings may be due to patient selection, as the usefulness of minor determinant testing has been shown in more severe anaphylaxis cases. 21 Twelve positive skin test reactions were to aminopenicillins only, but testing of tolerance of other β-lactams was done in only two of our patients. Improved understanding of side-chain allergy should reassure patients and allergists in the further evaluation of β-lactams with different side chains if required. Skin testing for cephalosporin allergy is not as well standardized as for penicillins and may pose specific difficulties. In our two cephalexin-allergic patients, the lack of an intravenous form of cephalexin for IDT and the direct course to DPT after SPT resulted in the most severe reaction in our study population. Skin testing with amino-penicillins with similar side chains did not seem to be useful in our evaluation of cephalexin allergy. We recommend strict adherence to guidelines recommending lower starting doses of 10% (1%, if required) of the full therapeutic dose in such situations. 15,22 We observed that positive reactions to DPT may occur several hours into the test procedure. This demonstrates the need for long observation periods, which may also be reassuring for patients. Clear instructions need to be given to patients after completion of DPT doses, and the allergist should verify nonreaction several days after DPT before removal of allergy labels. We experienced no safety issues in our population as we excluded high-risk cases. However, the lack of a reliable history of the initial reaction remains a problem, as seen from four out of six positive DPTs. FDE, in particular, may be mistaken for angioedema if the perioral area is involved and post-inflammatory hyperpigmentation is not obvious, resulting in the wrong choice of skin test for evaluation, as skin testing for FDE is only reactive in lesional skin. The limited clinical utility of patients' history in predicting skin test reactivity has been previously reported. 23,24 Eleven patients who underwent skin tests did not complete their evaluation by following up with DPT. This is a wasteful consumption of resources, and patients need to be counseled that while skin testing has its value, DPT remains the conclusive step. It has been suggested that direct DPT should be considered in carefully selected patients in appropriate clinical settings. [25][26][27] Many patients did not take β-lactams despite negative tests, often due to continued perceived intolerance. This lowers the effectiveness of allergy label modification, which has also been reported in other studies. 11,20 There is a need for patient and clinician education as well as systemic measures (eg, electronic allergy record modification) to facilitate β-lactam use in proven-tolerant patients. 25 Improvements in electronic allergy record systems should also keep pace with developments in the understanding of side-chain allergy and allow documentation of tolerated β-lactams. Our study is limited in some ways. We did not specifically collect data for in vitro tests in this study, but we did not have positive results for IgE to penicilloyl G, penicilloyl V, amoxicillin, or ampicillin during this period of time.
The retrospective nature of the study may result in incomplete data, but all patients were reviewed and had careful documentation performed by dermatologists (YKH and YLL) with expertise in drug hypersensitivity testing. Our follow-up survey of β-lactam use after testing was limited, as not all patients were contactable, but we were nevertheless able to recognize that patients and physicians feared re-exposure to β-lactams despite negative testing, a finding similar to other studies. 11,20 Our small sample size and retrospective study design limited the analysis of factors which may have predicted positive test reactions.

CONCLUSION

We confirmed that β-lactam allergy evaluation and delabeling are safe in carefully selected cases and that the proportion of patients who have a true allergic reaction is low. There remains a need for faster diagnostic methods to evaluate β-lactam allergy. Skin tests are of value in moderate-risk cases or those with uncertain reaction histories, but direct DPT may be considered in low-risk cases. Starting doses in DPT should be low to minimize severe reactions, especially when the initial reaction was unknown or severe. Allergy label removal must be accompanied by patient education and adequate documentation to facilitate future use. More studies are needed to understand factors from patients' and physicians' perspectives which may impede allergy delabeling efforts.
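The 95.9% negative predictive value cited above follows directly from the share of skin-test-negative patients whose negative result was confirmed by provocation testing. Below is a minimal Python sketch of that arithmetic; the counts are hypothetical placeholders chosen only to reproduce the reported figure, since the underlying 2x2 table is not given here.

def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    # NPV = true negatives / all negative test results, where a "false
    # negative" is a negative skin test followed by a positive DPT.
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: 141 skin-test-negative patients tolerating DPT and
# 6 reacting would give roughly the reported 95.9%.
print(f"NPV = {negative_predictive_value(141, 6):.1%}")  # NPV = 95.9%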
2020-06-09T13:02:43.785Z
2020-06-07T00:00:00.000
{ "year": 2020, "sha1": "394350544284e6ec8cb6a1538e1601c892f7ce52", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/iid3.318", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "57292f889f7cfb86dbeb77bf111c9caa5c1d8f3f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236864939
pes2o/s2orc
v3-fos-license
The Effect of Endorsers' Source Credibility on Emotion Towards YouTube's Advertisement This study examines the effect of endorsers' source credibility on emotion towards YouTube advertisements. We analyze the impact of social media influencer and celebrity credibility on respondents' emotional responses, namely pleasure and arousal. The data were collected through a Google Form survey with items related to source credibility and S-O-R theory. Three hundred and eighty-five people joined the survey; the sample size was determined using the Lemeshow formula with a 5% margin of error and a purposive sampling technique. The study used multivariate regression analysis and an independent-samples t-test. Findings showed a significant effect of both social media influencer and celebrity credibility on emotional pleasure and arousal towards the advertisement. It was also found that the social media influencer's expertise influenced pleasure and arousal more strongly than attractiveness and trustworthiness. On the other hand, the celebrity's trustworthiness had a stronger impact on emotional pleasure and arousal than attractiveness and expertise. We suggest that future research also analyze purchase intention, because some previous studies have stated that emotional response can predict purchase intention.

INTRODUCTION

At this time, the internet is familiar to everyone, because almost every level of society knows and uses it. According to research by We Are Social (2019), a British media company that works with Hootsuite, Indonesian people spend three hours and 23 minutes a day accessing social media. Indonesia's population is 265.4 million, and 130 million Indonesians (49 percent) use social media. According to Kompas Tekno, citing We Are Social, in 2019 YouTube occupied the first position with a share of 43 percent; Facebook, Instagram, and Twitter trailed in second to fourth place. Social media is not only used for sharing daily activities but also for earning income. In Indonesia, many companies have used celebrities as endorsers to advertise their products (Ramadani, 2013). Advertising is also used as a promotional tool for innovation and market competition (Farida, 2018). Therefore, companies need to pay attention to consumers' perceptions of endorsers' source credibility when advertising their products (Ramadani, 2013). Sales promotion is based on individual factors, such as consumers who behave affectively; pleasure refers to the level at which the individual feels good, full of excitement, happy, or satisfied with the product (Situmorang, 2019). In the marketing domain, the PAD model (Pleasure, Arousal, and Dominance) has been used to assess the emotions associated with advertisements (Holbrook and Batra, 1987). Pleasure can be seen from how much individuals like or dislike the product in the advertisement (Situmorang, 2019). Donovan, Rossiter, Marcoolyn and Nesdale (1994) chose to omit the dominance portion of the PAD in their model, as did Bakker, Voordt, Vink, and Boon (2014). On the other side of the issue, Yani-de-Soriano and Foxall (2006) made convincing arguments for the continued inclusion of the dominance component. Endorsers' source credibility is a major determinant of advertising effectiveness (Ohanian, 1990) because endorsers have a beneficial impact on product brands (Spry, Pappu and Cornwell, 2011).
The credibility of celebrities such as actors, actresses, entertainers, and athletes is usually used for promotional activities (Tanjung and Hudrasyah, 2016). However, there are also several potential risks to using celebrity endorsement. Celebrity endorsement can create high-risk, 'no gain' situations, such as scandals in a celebrity's past (Roozen and Claeys, 2010) and problematic celebrity behaviour (Till and Shimp, 1998). As a result, the trend of using non-celebrities in advertising is growing, due to the negative effects of celebrity endorsement, which can damage the brand image (Saeed, Naseer, Haider, and Naz, 2014). Social media influencers are individuals who are not well-known on television. Usually, they are chosen by companies based on the demographics of existing target markets (Rodriguez, 2008). They are ordinary people who have high social status on social media platforms such as Facebook, Instagram, YouTube and Twitter (Kaplan and Haenlein, 2010), offering a cost advantage and the possibility of a better fit between the product and the endorser (Erdogan, 1999). Previous research found that the credibility of social media influencers has a more significant effect than the credibility of celebrities (Tanjung and Hudrasyah, 2016; Schouten, Janssen and Verspaget 2019). Although this makes for an experimentally valid comparison between endorser types, this is not how influencers on social media normally engage with a product. Usually, the endorsed product is part of a larger message and is integrated into a social media post, such as a vlog or an Instagram post (Kapitan and Silvera 2015). Because previous research only included experience goods, this study compares the effects of influencer versus celebrity endorsements on other types of products, as in the study of Schouten, Janssen and Verspaget (2019), who compared endorsements by traditional celebrities with endorsements by social media influencers. However, in reality, this distinction is not always so clear-cut. Numerous cases are known of successful social media influencers transitioning into more 'traditional' celebrities, pursuing careers as talk show presenters or fashion models and making their way to the general public and mass media. On the other hand, many traditional celebrities have become popular influencers on social media. This raises the question of which type of influencer makes the most successful endorser, and to what extent the popularity of the endorser is an important variable in explaining endorser effectiveness. In our studies, we used well-known influencers with a large follower base, so-called 'micro-celebrities', but influencers who are relatively less popular may be even more effective endorsers. Based on this background, this study examines the differences in credibility between the two types of endorsers with respect to emotional pleasure and arousal, under the title "The Effect of Endorsers' Source Credibility on Emotion Toward YouTube's Advertisement". Mehrabian and Russell (1974) introduced the idea of using three emotional dimensions, namely pleasure, arousal, and dominance (PAD), to describe perceptions of physical environments. PAD comprises three independent emotional dimensions that describe people's state of feeling. They conceived pleasure as a continuum ranging from extreme pain or unhappiness to extreme happiness. They used adjectives such as happy-unhappy, pleased-annoyed, and satisfied-unsatisfied to define a person's level of pleasure.
Arousal was conceived as mental activity describing the state of feeling along a single dimension ranging from sleep to frantic excitement, linked to adjective pairs such as stimulated-relaxed, excited-calm and wide awake-sleepy. Dominance was related to feelings of control and the extent to which an individual feels restricted in his behaviour (Bakker, Voordt, Vink, and Boon, 2014; Samudro, 2017; Farida, 2018). To define the degree of dominance, Mehrabian and Russell used a continuum ranging from dominance to submissiveness, with adjectives such as controlling, influential and autonomous.

Emotional Responses

In marketing, the PAD model has been used to assess the emotions associated with television ads (Holbrook and Batra, 1987), atmospherics in retail (Donovan, Rossiter, Marcoolyn, and Nesdale, 1994; Turley and Milliman, 2000), online contexts (Chang et al., 2014; Hsieh et al., 2014), and various consumption experiences (Havlena and Holbrook, 1986). Hawkins et al. (2000) define emotions as strong, relatively uncontrolled feelings that affect our behaviour. Even though the PAD model was originally configured with three components, pleasure and arousal, rather than dominance, have been used to a greater extent by researchers (Bakker, Voordt, Vink, and Boon, 2014). Donovan and Rossiter (1994) chose to omit the dominance portion of the PAD in their model, as did Bakker, Voordt, Vink, and Boon (2014). On the other side of the issue, Yani-de-Soriano and Foxall (2006) made convincing arguments for the continued inclusion of the dominance component.

Emotional Pleasure

Pleasure is a subjective measure of how much individuals like or dislike an environment (Situmorang, 2019). Pleasure refers to the level at which the individual feels good, full of excitement, or happy in a given situation. According to Mehrabian and Russell (1974) and Bakker, Voordt, Vink, and Boon (2014), pleasure is measured by evaluating verbal reactions to the environment on bipolar items such as happy-unhappy, happy-upset, satisfied-dissatisfied, happy-sad, hopeful-hopeless, relaxed-saturated. This study used measurements from previous research by Farida (2018). She used six semantic differentials that had been translated into Indonesian, namely tidak gembira-gembira, kesal-senang, tidak puas-puas, sedih-senang hati, hilang harap-penuh harapan, jenuh-santai, using 5-point Likert scales.

Emotional Arousal

Arousal refers to the extent to which someone feels alert, excited or active (Samudro, 2017). Mehrabian (1974) defines arousal (passion) as a combination of mental alertness and physical activity, operationalized with terms such as passionate, excited, bored, sleepy, restless and high concentration. Arousal is measured by evaluating verbal reactions to the environment such as passionate-not excited, excited-calm, full of madness-lethargic, restless-bored, awake-sleepy, moved-not moved. This study also used measurements from previous research by Farida (2018). She used six semantic differentials that had been translated into Indonesian, namely tidak bergairah-bergairah, tenang-bersemangat, lesu-penuh kegilaan, jenuh-gelisah, mengantuk-tidak mengantuk, tidak tergugah-tergugah, using 5-point Likert scales.

Source Credibility

Credibility is one of the most important criteria for assessing the quality of information (Bae and Taesik, 2011). Credibility can be defined as "the attitude towards the source of communication carried out at a certain time by the recipient" (McCroskey, 1966).
According to Hovland, Janis, and Kelley (1953), the Source Credibility Model fundamentally states that the effectiveness of a message depends on the perceived level of expertise and trustworthiness of the endorser or source. Therefore, the two fundamental dimensions of source credibility are expertise and trustworthiness. Besides these two dimensions, the attractiveness of the source is also accepted as a dimension of credibility (Ohanian, 1990). Source familiarity, likability and similarity were not used in this research. Hence, there are three dimensions of source credibility, described as follows:

Attractiveness

Evaluations across research studies indicate that attractiveness is not a one-dimensional construct, and several descriptors have been used to categorize it. It has been described in terms of facial and physical attractiveness (Baker and Churchill 1977; Patzer 1983). This measurement was adopted from previous research by Farida (2018), with items translated into Indonesian, such as menarik-tidak menarik, berkelas-tidak berkelas, tampan-jelek, elegan-tidak elegan, seksi-tidak seksi (Goldsmith, Lafferty, Newell, 2000; Ohanian, 1990).

Trustworthiness

Trustworthiness is the degree of assurance the audience places in the spokesperson and the message, and the intensity of identification with them. Some studies support the impact of trustworthiness on attitude change (Miller and Baseheart, 1969). To measure the trustworthiness aspect of celebrity credibility, this study adopts items from previous research by Farida (2018), translated into Indonesian, such as bertanggung jawab-tidak bertanggung jawab, jujur-tidak jujur, dapat diandalkan-tidak dapat diandalkan, dapat dipercaya-tidak dapat dipercaya, tulus-tidak tulus (Goldsmith, Lafferty, Newell, 2000; Ohanian, 1990).

Credibility of Endorsers: Social Media Influencer vs. Celebrity

Some studies have found endorsers to be effective and positively influential in advertising (Menon, Boone, and Rogers, 2001; Pornpitakpan, 2004; Pringle and Binet, 2005; Roy, 2006). Considerable research has examined the impact of endorser credibility on advertising effectiveness. A credible endorser can serve as an essential antecedent in the evaluation of commercials and products. A single variable measuring celebrity credibility is often constructed by combining the three credibility subscales. People are more interested in getting recommendations from credible communicators because they are in accordance with community values and attitudes (Ahmed, 2012). Celebrities are well-known personalities in the community, either because of their credibility or their attractiveness. Attributes such as attractiveness, a luxurious lifestyle or expertise are just some examples of general characteristics that usually distinguish celebrities from the general public. At the same time, social media influencers have grown into important marketing tools for companies to advertise products (Jaakonmäki, Müller, & Brocke, 2017). Social media influencers are people who have high social status on social media platforms such as Facebook, Instagram, YouTube, and Twitter. A social media influencer usually posts personal information about their daily lives (Kaplan and Haenlein, 2010). The attractiveness of a celebrity is more influential than that of a non-celebrity social media influencer.
Still, social media influencers are considered more trustworthy in delivering statements in advertisements than celebrity endorsers (Tanjung and Hudrasyah, 2016). Schouten, Janssen and Verspaget (2019) show that the public is more familiar with and believes social media influencers. Beyond that, social media influencers are considered more knowledgeable, and their expertise is more effective than that of celebrity endorsers.

Stimulus-Organism-Response (S-O-R) Theory

Stimulus-Organism-Response (S-O-R) theory has long been used to understand consumer behaviour (Hoyer and MacInnis, 1997). According to Wilbur Schramm (1971), S-O-R is the basis of the hypodermic syringe theory, the classical theory regarding the process of mass media influence. Hovland et al. (1953) state that the behaviour change process is essentially the same as the learning process. It illustrates behaviour change at the individual level as a learning process in which a stimulus given to the organism can be accepted or rejected. If the stimulus is accepted, it is effective in producing an individual response; a rejected stimulus is ineffective in affecting the individual response, and the process stops there (Anggraini, Mustofa, and Sadewo, 2014). The purpose of this study is to develop a conceptual framework by extending the organism component of the PAD theory of Mehrabian and Russell (1974) for assessing environmental perception, experience, and psychological responses. The conceptual framework illuminates how endorser credibility affects viewers emotionally (Othman, Musa, Muda and Mohamed, 2016).

Source Credibility Theory

The source credibility theory as propounded by Hovland, Janis, and Kelley (1953) states that people, or receivers, are more likely to be persuaded when the source presents itself as credible. Furthermore, Hovland and Weiss (1951) later studied the influence of sources in persuasion. Message source credibility refers to how much the message receiver believes in the sender. It is an attitude towards the message source (Gunther, 1992) that affects the receiver's level of belief about what the source claims (West, 1994). This credibility is also an important factor affecting persuasion effectiveness (Hovland and Weiss, 1951). If the source has credibility, the receivers will believe the message. Thus, credibility is how much the message reflects reality after the receivers' evaluation (Ling and Liu, 2008). Message source credibility is a higher-order construct consisting of three sub-dimensions: trustworthiness, expertise, and attractiveness. Trustworthiness refers to the degree of confidence and acceptance receivers have towards the message sender. Expertise refers to the professional knowledge the sender has about the product. Attractiveness refers to the extent to which the sender attracts receivers to consume products or services (Ohanian, 1990).

Figure 2. Theoretical framework: endorser source credibility (stimulus) influencing emotional pleasure and arousal (response).
Research Hypotheses
H1: There is an effect of the social media influencer's credibility on emotional pleasure.
H2: There is an effect of the social media influencer's credibility on emotional arousal.
H3: There is an effect of the celebrity's credibility on emotional pleasure.
H4: There is an effect of the celebrity's credibility on emotional arousal.

METHODS

The study used multivariate regression analysis to examine the effect of the credibility of the social media influencer (Suhay Salim) and the celebrity endorser (Syahrini) on the audience's emotional pleasure and arousal. An independent-samples t-test was used to find out whether there is a difference between the influence of the social media influencer and the celebrity in promoting LAKMÉ make-up products on the pleasure and arousal emotional responses of the YouTube channel's viewers. The population is women between 17 and 30 years old. This research applied non-probability sampling, specifically a purposive sampling technique, because the population size was unknown. The researchers used the Lemeshow formula with a 5% margin of error and obtained 385 respondents. Respondents had to know LAKMÉ make-up products and be familiar with both endorsers, namely Suhay Salim as a social media influencer and Syahrini as a celebrity.

RESULT

MANOVA, or multivariate ANOVA, is a statistical test used to measure the effect of independent variables with a categorical scale on several dependent variables simultaneously. The result of the multivariate ANOVA test in the table below shows that the P-value of the four types of tests is < 0.05 (significant at the 95% confidence level). It can be concluded that there is a significant influence of the social media influencer's credibility on emotional pleasure and arousal. Source: IBM SPSS Version 20. The homogeneity test examines whether the variances of two or more distributions are equal; it is carried out to see whether the data in the independent and dependent variables are homogeneous. Table 2 shows that all the dependent variables have different variances (Sig. < 0.05), so the post hoc test used is Games-Howell. Source: IBM SPSS Version 20. The table below shows the effect of one independent variable on each of the dependent variables. Table 3 shows significance values < 0.05. It can be concluded that the credibility of the social media influencer has a significant influence on the emotional pleasure and arousal responses. It can therefore be said that: 1. The credibility of the social media influencer affects emotional pleasure with a P-value of 0.000, which means H0 is rejected, or H1 is accepted. 2. The credibility of the social media influencer affects emotional arousal with a P-value of 0.000, which means H0 is rejected, or H2 is accepted. 3. The celebrity's credibility affects emotional pleasure with a P-value of 0.000, which means H0 is rejected, or H3 is accepted. 4. The celebrity's credibility affects emotional arousal with a P-value of 0.000, which means H0 is rejected, or H4 is accepted. Source: IBM SPSS Version 20. Based on the results shown in Table 7, the expertise dimension of the social media influencer's credibility is higher than the other two dimensions, with a mean value of 22.69. Trustworthiness in the celebrity's credibility has a higher value, at 21.57, than attractiveness at 20.27 and expertise at 20.66.
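As an illustration of the sample-size and comparison steps just described, here is a minimal Python sketch. It assumes the common form of the Lemeshow formula for an unknown population with maximum variability (p = 0.5), and the score arrays are synthetic stand-ins, not the study's data.

import math
from scipy import stats

def lemeshow_n(z: float = 1.96, p: float = 0.5, d: float = 0.05) -> int:
    # n = z^2 * p * (1 - p) / d^2, rounded up to the next whole respondent
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(lemeshow_n())  # 385, matching the study's target sample

# Independent-samples t-test comparing the two endorser conditions on a
# dependent variable such as a summed pleasure score (synthetic data).
influencer_scores = [22, 24, 21, 23, 25, 22, 24]
celebrity_scores = [20, 21, 22, 19, 21, 20, 22]
t_stat, p_value = stats.ttest_ind(influencer_scores, celebrity_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")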
DISCUSSION

After conducting the research with the methods described, it can be concluded, through the lens of Stimulus-Organism-Response (S-O-R) theory, that respondents can feel emotional pleasure and arousal after watching the advertisements presented by the social media influencer and the celebrity. This can be seen from the P-values of 0.000 < 0.05 for the effects of endorser credibility on the emotional responses, namely pleasure and arousal. The results of the independent-samples t-test show that the expertise dimension of the social media influencer's credibility has more effect on respondents' emotions than trustworthiness and attractiveness, as seen from its mean value of 22.69. Meanwhile, the celebrity's trustworthiness has a higher value, at 21.57, than the other two dimensions, namely attractiveness at 20.27 and expertise at 20.66.

CONCLUSION

This research concludes that there is a significant effect of both the social media influencer's and the celebrity's credibility on emotional pleasure and arousal. The homogeneity test shows that all the dependent variables have different variances, with Sig. < 0.05. After conducting the research with the existing methods, the authors conclude that the credibility of the social media influencer and that of the celebrity show hardly any significant difference in their effects on emotional pleasure and arousal. This study used the S-O-R approach to find out whether the credibility of the social media influencer and the celebrity, as a stimulus, can elicit an emotional response in the audience. The result proved that the stimulus is quite effective in influencing respondents' emotional pleasure and arousal. The expertise of the social media influencer was found to have a higher significance level than trustworthiness and attractiveness. This result contradicts the previous research carried out by Tanjung and Hudrasyah (2016), who concluded that the trustworthiness of non-celebrity endorsers, such as social media influencers, has a more significant impact, while attractiveness has no significant effect. In addition, the trustworthiness of the celebrity was perceived to have a higher significant impact on emotional pleasure and arousal. This finding is consistent with the previous research of Tanjung and Hudrasyah (2016), which shows that the trustworthiness of a celebrity is more significant than the other two dimensions.
2021-08-04T00:04:09.797Z
2020-12-30T00:00:00.000
{ "year": 2020, "sha1": "cf5d0f8dd299d561d399078e1edbb6abfa5bf6df", "oa_license": "CCBYSA", "oa_url": "https://ejournals.umn.ac.id/index.php/FIKOM/article/download/1459/1062", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "a40166df2535f18147b88d042d4faa5285b42493", "s2fieldsofstudy": [ "Business", "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
202823930
pes2o/s2orc
v3-fos-license
Actions speak louder: Young female patients with acute ischemic stroke in the emergency department The aim was to explore the diagnostic cascade of young females with acute ischemic stroke (AIS) in the emergency department (ED) setting. A retrospective case series study was conducted between 2016 and 2018 in the ED of a tertiary hospital (N=10). We collected socio-demographic data, clinical risk factors and co-morbidities, ED characteristics and medical examination-related data. Ten females presenting with AIS were identified. The results show that each case had a distinct set of characteristics: the cases shared no common medical background or clear-cut risk factors, and each presented differently on clinical examination. All these factors, with the possible added effect of age and sex bias, serve as possible hindrances to the correct and efficient diagnosis of stroke in young females. In conclusion, the clinical presentation of young females with AIS is misleading. The initial examination in the ED setting may be the determining point of impact on outcome severity in young females.

Introduction

Stroke in young females is an under-researched group with an increasing number of occurrences. 1-3 Few statistical and epidemiological studies have been published on this subject. 1,4,5 Yet none of these studies have shed light on emergency department (ED) management and the influence of critical time lags on the diagnosis of stroke in young females. A large-scale meta-analysis confirms that females generally experience poorer functional outcomes and lower quality of life after stroke in comparison with men. 2 In the ED, the probability of misdiagnosis of acute ischemic stroke (AIS) is high, and up to 30% of suspected stroke presentations have a different diagnosis. 6-8 In addition, females have an increased probability of a stroke mimic, delays in acute imaging, and a lower likelihood of receiving treatment, due to unusual presentation of symptoms and problems with timely recognition. 1 Data on females show that each year approximately 55,000 more females than males experience stroke, that females have a higher lifetime risk of stroke in comparison with males, and that the frequency of poor outcomes is higher in females. 4,9,10 Despite preexisting knowledge regarding female risk factors and presentation in stroke, there are no prior studies examining ED management and the critical time lags influencing the diagnosis of AIS in young females. The purpose of this case series is to explore the diagnosis and treatment cascade in young females presenting with AIS to the ED.

Case Reports

During the study, 10 young female patients suspected of and diagnosed with AIS were included. The patients' clinical characteristics are summarized in Table 1. The average age was 41.5 years (range, 28-55); eight were born in Israel, seven are of Jewish nationality, nine are married, and eight have between one and seven children. Clinical risk factors such as diabetes mellitus, hypertension, atrial fibrillation (AF), prior stroke, valvular disease and psychiatric disorders were present in some of the cases, without a clear mutual relation among the patients (cases 1, 2, 3, 5, 7, 8, 9 and 10). The mode of arrival to the ED was relatively uniform: nine females arrived by emergency medical services (EMS) and only one arrived independently (case 2). All patients except two (cases 4 and 10) were highly prioritized according to the Canadian triage and acuity scale, with a level of 1.
Seven patients were primarily examined by a neurologist (cases 1, 2, 3, 5, 6, 7 and 8), whereas the other three were examined by an internist, gynecologist, or psychiatrist, followed by a neurological assessment (cases 4, 9 and 10). The National Institutes of Health Stroke Scale (NIHSS) score determined by the neurologists was high in all cases (13.9±11.05), although the range was heterogeneous (1-42). The time delay from stroke onset to ED arrival was 148±84.54 minutes, whereas the ED-to-computed tomography (CT) time was 98±196.95 minutes. In regard to treatment type, two patients received intravenous tissue plasminogen activator (IV tPA) (cases 5 and 9), six patients underwent mechanical embolus removal in cerebral ischemia (MERCI) (cases 1, 3, 4, 7, 8 and 10), while the remaining two received combined treatment (cases 2 and 6). Medical examinations during hospitalization showed new findings in several females. Transesophageal echocardiography (TEE) demonstrated mild mitral regurgitation in two cases (cases 2 and 5) and patent foramen ovale (PFO) in three cases (cases 4, 6 and 9). In addition, elevated levels of thyroid-stimulating hormone (cases 3 and 8) as well as dyslipidemia (cases 3 and 5) were found. In regard to hospitalization outcomes, the majority of patients did not suffer any complications, whereas aspiration pneumonia and cerebral hemorrhage were seen in the remainder (cases 3, 6 and 9). Length of stay was 43.5 (±76.59) days. Most females were discharged to rehabilitation (cases 1, 3, 5, 6, 7, 8 and 10), while the remaining were discharged home, with an average modified Rankin scale (mRS) score of 3 (Table 2).

Case #6

A 44-year-old female, mother of five, arrived at the ED by an EMS team. The team reported that the patient was found on the ground with a decreased level of consciousness after performing physical exercise one hour prior to the team's arrival. On admission, the clinical presentation included combined neurological symptoms with an NIHSS of 10. She had a history of osteoarthritis without medication use. A fast neurological evaluation was performed and the patient was sent to CT, which demonstrated complete obstruction of the right M1, as well as the impression of an A1 segmental filling problem on the same side, without distal anterior cerebral artery filling. Computed tomography angiography showed close to complete obstruction. In light of these findings, tPA was administered, followed by MERCI under general anesthesia due to massive vomiting. The time from CT to groin puncture was 64 minutes. During MERCI, four unsuccessful attempts were made to clear the occlusion. During TEE examination, a PFO was found. In addition, two decompression craniotomies were performed in light of repeated herniation. Several complications were documented: aspiration pneumonia, urinary tract infection and nosocomial infections. Following 271 days of hospitalization, the patient was discharged for rehabilitation.

Case #9

A 39-year-old female, with a history of migraine and no history of medication use, arrived at the ED of a secondary hospital in the evening with a pounding headache and a nonspecific complaint of weakness. Immediate neurological evaluation was performed, followed by a CT examination and administration of tPA. The patient claimed to improve after treatment and, upon her request, was discharged home. A few hours later, she felt unwell and was transported to a tertiary hospital by an EMS team. A new neurological evaluation was performed by a neurologist, who concluded that a mental component was present.
Due to the patient's behavior, a referral for psychiatric evaluation was made, which showed no findings. A CT examination was performed, with signs of occlusion, and she was admitted for observation. During the night of admission the patient's status deteriorated, presenting with irritability and lateralized complaints. On the following morning an MRI was conducted, showing a massive hemispheric infarct. Due to these findings, the patient was not eligible for MERCI. An extensive evaluation was made during hospitalization, including TEE, which demonstrated a large PFO. All other examinations appeared normal and the patient was discharged home with mRS 1.

Case #10

A 28-year-old female presented with a history of epilepsy, bipolar disorder, discopathy, heavy smoking and long-standing use of cannabis. Her medication history included lithium, oxazepam, carbamazepine and an intrauterine device. The patient was brought to the urgent area by an EMS team presenting in a postictal phase and remained unconscious for several hours; due to the patient's history, a psychiatric evaluation was requested. After additional hours without improvement in consciousness, followed by neurological deterioration, the patient was admitted to the resuscitation bay and the fast-track protocol was activated. Findings showed extensive basilar infarction, prompting patient transport for MERCI (CT-to-groin time was 131 minutes), where the clot was successfully extracted. Yet extensive brainstem damage remained.

Discussion

Stroke-related risk factors and their impact on young adults have been demonstrated in several publications. 11,12 Yet, to the best of our knowledge, the challenge of diagnosing AIS in young females has not been addressed in relation to clinical presentation and patient outcome in the ED. Detecting and diagnosing AIS is a major challenge for medicine in light of the importance of fast treatment. 6,11 In the cases we reviewed, we found that the challenge is even greater when it comes to young females. It is accepted that delays in the presentation, evaluation, diagnosis and treatment of females with AIS may contribute to the association between female sex and more severe stroke. 13 One important finding is that the vast majority of females arrived by EMS, which means that the patients and their families perceived the event as an urgent and life-threatening situation. As such, these females were classified as high priority at triage and were received and treated in the resuscitation bay. Yet the time to CT was longer than recommended (≤25 min). 14 In addition, cases 4, 9 and 10 underwent primary evaluation by either a gynecologist, psychiatrist or internist physician, followed by a neurological examination. Paradoxically, then, the diagnostic stage is delayed despite a rapid prehospital and triage stage. When viewing the process as a whole, it appears that the preliminary diagnostic stages are the point of bottleneck formation. We offer two possible explanations for this paradox. First, clinical features are heterogeneous among ED patients, and females occasionally present with atypical symptoms. 15 The wide range of NIHSS scores in our cases can be explained by the current literature. In fact, limited evidence suggests that females with AIS present different symptoms than men, as is known for acute coronary syndrome. 16-18 Many of the factors that could cause worse stroke-related outcomes in females result from delays in arrival or delays in acute treatment after stroke.
13,19 Other studies have reported impaired levels of consciousness among females more frequently than among men, which could be a contributing factor to increased stroke severity. 16 Second, potential age- and sex-related biases, in line with the risk factors described in the current literature, may lead to a gap in patient-personnel communication. Some studies of gender differences in the management of AIS have shown that females are investigated less thoroughly. 19 A study conducted in 2016 showed that sex-based stroke treatment disparity was noted among different ethnicities; its findings nevertheless emphasize that adherence to a stroke performance program not only improves care but can also resolve disparity. 18 During medical examination, PFO was found in three cases (4, 6, and 9). Several publications have shown a high prevalence of PFO in populations of young AIS patients. 10 While closure of PFO has been found to be strongly associated with a lower risk of recurrent stroke, there is little evidence of it being a direct cause. 18 Stroke-related cardiac risk factors, such as AF, valvular malformation and ischemic heart disease, were present in four of the cases we examined (1, 5, 7 and 8). Sex differences in the clinical features and management of AF suggest that female subjects are more symptomatic relative to men. 12,13 In regard to stroke, sex differences are, in part, explained by differences in pre-stroke characteristics and clinical presentation. Additionally, delayed diagnosis might be explained by the presence of psychiatric history and/or psychiatric evaluation, as can be seen in cases 5, 8 and 10. This is in accordance with the presence of psychiatric causes in the differential diagnosis of acute neurological deficits. 16 However, viewing the cases we presented, there are no consistent patterns of hospitalization examination or treatment. While the stroke protocol was indeed activated in all female patients, the time lags leading to activation showed remarkable variation.

Conclusions

Young female AIS patients are associated with an atypical presentation that leads to delayed diagnosis and fatal outcomes in the ED. Two factors may explain this delay: first, female gender, and second, presentation at an early age. These factors are associated in the medical literature with delayed diagnosis and treatment. As presented, several females were labeled as having a psychiatric event, leading to time-consuming consultations and thus delayed diagnosis. To the best of our knowledge, no other study has examined the complexity of early diagnosis of AIS in young female patients in the ED. Thus, further research with larger sample sizes is needed in order to explore the diagnosis and treatment variables affecting young females with suspected AIS. Stroke researchers must consider the translational relevance of such sex differences, as many are unaware of the potential confounding factors of sex differences. A greater understanding of the mechanisms underlying sex differences in stroke and responsiveness to neuroprotection will lead to more appropriate treatment strategies for patients of both sexes. Additionally, recognition of potential gender differences in stroke symptoms through education of EMS and ED teams, aimed at both the public and health care professionals, could result in decreased out-of-hospital and in-hospital delays, thus increasing access to acute stroke therapy in women.
Increasing ED staff awareness and performing medical simulations of such case studies may lead to a decrease in delayed diagnoses and misdiagnoses.
2019-09-17T01:08:15.950Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "41ee92156f14ffb687422cd958ad067967fdea9a", "oa_license": "CCBYNC", "oa_url": "https://pagepressjournals.org/index.php/ecj/article/download/8300/8288", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "5a87b036cd4361d0187e9eed52fb60a46e8aaf9b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270621625
pes2o/s2orc
v3-fos-license
Invasive Mammary Paget's Disease Without Underlying Malignancy: A Case Report Background: Mammary Paget's disease (MPD) with skin invasion is a rare condition that is often associated with an underlying malignancy. However, invasive MPD without malignancy is even rarer and often misdiagnosed. Case presentation: This report presents the case of a 56-year-old woman who presented with a progressively enlarging scaly erythematous plaque on her left nipple for 6 months. Dermoscopy and histopathological examination confirmed the diagnosis of invasive MPD, but radiological examination did not reveal any malignancy. Conclusion: Invasive MPD without malignancy is a rare but important entity to recognize. Awareness of this condition can help prevent overdiagnosis and unnecessary treatment.

Introduction

Mammary Paget's disease (MPD) is a rare but clinically significant neoplastic entity, characterized by the proliferation of malignant cells in the epidermis of the nipple and areola. This condition was first described by Sir James Paget in 1874 and has since been the focus of extensive research and discussion in the fields of dermatology and oncology. MPD presents unique diagnostic and therapeutic challenges, especially when it presents in the absence of underlying malignancy. MPD is a rare condition, with prevalence varying depending on the population studied and the diagnostic method used. In general, MPD is estimated to occur in 0.7% to 4.3% of all breast cancer cases. However, the prevalence of invasive MPD, in which malignant cells invade the dermis, is lower, ranging from 4% to 7.8% of all MPD cases. MPD is classically classified into two main types. MPD with underlying malignancy is the most common type, in which the proliferation of Paget cells in the epidermis is associated with underlying ductal carcinoma in situ (DCIS) or invasive ductal carcinoma (IDC) in the breast tissue. MPD without underlying malignancy is a rarer type, in which there is no evidence of an underlying malignancy at the time of diagnosis. These types often cause diagnostic confusion and require different approaches to management. 1,2 MPD usually appears as an eczematous lesion on the nipple or areola. These lesions can vary in appearance, ranging from scaly erythematous plaques to crusted ulcers. The most common symptoms are itching, burning, pain, and nipple discharge. The diagnosis of MPD requires a combination of clinical examination, dermoscopy, histopathology, and radiology. Clinical examination and dermoscopy can provide initial clues, but definitive diagnosis requires skin biopsy and histopathological examination. Paget's cells have distinctive characteristics, namely large cells with pale cytoplasm and large, hyperchromatic nuclei. Immunohistochemical stains, such as CK7, CEA, and HER2, can help confirm the diagnosis and differentiate Paget's cells from other malignant cells. Radiological examinations, such as mammography and breast ultrasound, are very important to rule out the possibility of underlying malignancy in cases of MPD with underlying malignancy. In cases of MPD without underlying malignancy, radiological examination can help confirm the absence of malignancy and guide treatment decisions.

Case Presentation

Physical examination revealed an eczematous-like plaque with a thin, scaly surface on the left retracted nipple. Histopathological examination using hematoxylin-eosin staining (A) revealed an epidermis containing a proliferation of large cells (Paget cells) with abundant cytoplasm and large, pleomorphic, vesicular nuclei with coarse chromatin; mitoses can be found, and these cells appear to infiltrate the underlying dermis (Figure 2). Histopathological examination using periodic acid-Schiff staining (B) revealed a group of tumor cells with positive reactions (Figure 2). No abnormalities were seen on the chest radiograph and bilateral mammary ultrasound examination. Only retraction of the left mammary papilla was seen on mammography.

Figure 1. Erythematous scaly plaque on the left nipple.

Figure 2. Histopathological examination with hematoxylin-eosin staining (A) and periodic acid-Schiff staining (B) revealed epidermis and dermis containing Paget cells (yellow arrow) and a group of tumor cells with positive reactions (white arrow).

Discussion

Invasive mammary Paget's disease (MPD) without underlying malignancy is a rare but clinically important subtype of MPD. This subtype presents unique diagnostic and therapeutic challenges due to the absence of obvious malignancy at the time of diagnosis. A comprehensive understanding of this entity is essential to ensure accurate diagnosis, appropriate treatment, and optimal long-term monitoring. Despite extensive research, the pathogenesis of invasive MPD without malignancy remains incompletely understood. Several theories have been proposed to explain the origin and development of Paget's cells in this condition. 9,10 The epidermal transformation theory proposes that Paget's cells, which are characteristic of MPD, originate from the malignant transformation of normal epidermal cells located in the nipple or areola. In this scenario, initially healthy skin cells undergo a series of genetic and molecular changes that drive them toward malignancy. One of the main pieces of evidence supporting this theory is the presence of specific genetic mutations in Paget's cells that are not found in the underlying breast carcinoma cells. Several genes are frequently mutated in MPD. PIK3CA encodes a protein involved in the PI3K/AKT/mTOR signaling pathway, which plays an important role in cell growth, proliferation, and survival; mutations in PIK3CA can activate this pathway constitutively, leading to uncontrolled cell growth. TP53 encodes the p53 protein, known as the "guardian of the genome" because of its role in preventing the growth of damaged or mutated cells; mutations in TP53 can disrupt p53 function, allowing damaged cells to survive and reproduce. ERBB2 encodes human epidermal growth factor receptor 2 (HER2), which is involved in cell growth and development; overexpression of HER2 can cause uncontrolled cell growth. Mutations in these genes, along with other genetic changes, can disrupt the normal regulation of the cell cycle, apoptosis (programmed cell death), and cell differentiation, ultimately leading to the malignant transformation of epidermal cells into Paget cells. 11,12 Although genetic mutations play an important role in epidermal transformation, other factors may also contribute to the development of invasive MPD without malignancy. These factors can be environmental or related to an individual's health condition. Exposure to environmental carcinogens, such as ultraviolet radiation, industrial chemicals, or air pollutants, can damage the DNA of skin cells and increase the risk of genetic mutations that can trigger malignant transformation. Chronic inflammation of the nipple or areola, which can be caused by various factors such as infection, trauma, or autoimmune disease, can create a microenvironment conducive to malignant transformation. Chronic inflammation can trigger the production of free radicals and pro-inflammatory cytokines, which can damage the DNA of skin cells and disrupt the normal regulation of cell growth. Some studies suggest that hormonal changes, such as those that occur during menopause, may increase the risk of MPD. This may be because certain hormones, such as estrogen, can stimulate the growth of epidermal cells and increase their susceptibility to malignant transformation. 13,14 In addition to the presence of specific genetic mutations in Paget's cells, several other lines of evidence support the theory of epidermal transformation in the pathogenesis of invasive MPD without malignancy. In some cases of invasive MPD without malignancy, no evidence of underlying breast carcinoma is found even after several years of follow-up. This suggests that Paget's cells can originate from the malignant transformation of epidermal cells in the absence of a primary carcinoma in breast tissue. Gene expression studies have shown that Paget cells in invasive MPD without malignancy have a different gene expression profile than breast carcinoma cells. This suggests that Paget's cells may have a different cellular origin than breast carcinoma. Some cases of invasive MPD without malignancy have been successfully treated with topical therapies, such as corticosteroids or calcineurin inhibitors, which target epidermal cells. This suggests that Paget's cells may be more responsive to therapy aimed at skin cells than to therapy aimed at carcinoma cells. 15 The Toker Cell Migration Theory is one of the main hypotheses explaining the origin of Paget's cells in MPD, including invasive MPD without underlying malignancy. This theory proposes that Paget's cells originate from undetected carcinoma in situ (CIS) within the mammary ducts, which then migrate to the nipple epidermis or areola. The cell migration process is a complex phenomenon involving interactions between cells and the surrounding microenvironment. In the context of MPD, CIS cells residing within the mammary ducts are thought to undergo a series of molecular and cellular changes that enable them to exit the ducts, penetrate the basement membrane, and migrate through the stroma toward the epidermis. E-cadherin is a cell adhesion protein that plays an important role in maintaining the integrity of epithelial tissue. Loss of E-cadherin expression in CIS cells may result in loss of adhesion of these cells to each other, making it easier for them to detach from the mammary ducts and migrate. Matrix metalloproteinases (MMPs) are proteolytic enzymes capable of degrading extracellular matrix components, such as collagen and laminin. Increased expression of MMPs in CIS cells may help them penetrate the basement membrane and migrate through the stroma. Growth factors, such as epidermal growth factor (EGF) and transforming growth factor-beta (TGF-β), can stimulate cell migration by activating intracellular signaling pathways that regulate cell motility. Chemotaxis is the movement of cells in response to concentration gradients of certain chemicals. CIS cells may be attracted to the epidermis by chemotactic factors released by epidermal cells or inflammatory cells. Several case reports have documented cases of invasive MPD without malignancy that later developed into invasive carcinoma over time. This suggests that Paget's cells in invasive MPD without malignancy may originate from carcinoma in situ that was not detected at the time of initial diagnosis. Histopathological examination of some cases of invasive MPD without malignancy has demonstrated the presence of Paget's cells along the mammary ducts, supporting the idea that these cells originate from carcinoma in situ within the ducts. Genetic analysis of Paget's cells and breast carcinoma cells has shown genetic similarities between the two, indicating that Paget's cells may originate from carcinoma cells that have migrated. The Toker Cell Migration Theory has several important clinical implications. First, this theory highlights the importance of careful histopathological examination in all cases of MPD, including invasive MPD without malignancy, to look for evidence of carcinoma in situ within the mammary ducts. Second, this theory suggests that patients with invasive MPD without malignancy may have a higher risk of developing invasive carcinoma later in life, thus requiring close long-term monitoring. Although the Toker Cell Migration Theory is supported by quite strong evidence, there are still many unanswered questions regarding the molecular mechanisms underlying Paget cell migration. Further research is needed to identify the specific factors that trigger and regulate Paget's cell migration, as well as to develop new therapeutic strategies that can inhibit this migration process. The Toker Cell Migration Theory provides a plausible explanation for the origin of Paget cells in invasive MPD without malignancy. This theory is supported by clinical, histopathological, and genetic evidence indicating that Paget's cells may originate from undetected carcinoma in situ within the mammary ducts. A better
understanding of Paget's cell migration mechanisms may help improve early diagnosis, treatment, and monitoring of invasive MPD without malignancy, as well as the development of preventive strategies to reduce the risk of future invasive carcinoma. 16,17 The pluripotent stem cell theory offers an interesting perspective for understanding the origin of Paget cells in invasive MPD without malignancy. This theory proposes that Paget's cells originate from pluripotent stem cells located in the nipple or areola. Pluripotent stem cells are unique cells that have the ability to self-renew and differentiate into various cell types in the body. In the context of MPD, this theory states that pluripotent stem cells in the nipple or areola, for some reason, undergo abnormal differentiation and turn into Paget cells. These Paget cells then proliferate and invade the surrounding tissue, causing the lesions characteristic of MPD. Studies have shown that Paget's cells express certain stem cell markers, such as OCT4, SOX2, and NANOG. These markers are usually found on embryonic stem cells and certain adult stem cells, and their expression on Paget's cells suggests that these cells may be derived from stem cells. Paget's cells show the ability to differentiate into various cell types, including glandular cells and squamous cells. This ability is characteristic of pluripotent stem cells and supports the idea that Paget's cells originate from stem cells. MPD lesions often show high cellular heterogeneity, with the presence of multiple cell types, including Paget's cells; this is explained by the ability of Paget's cells to differentiate into various cell types, which is a characteristic feature of pluripotent stem cells. Paget's cells are often resistant to conventional therapies, such as chemotherapy and radiation therapy. This resistance may be due to the stem cell properties of Paget cells, which are known to have the ability to avoid cell death and repair DNA damage. Although the evidence supporting the pluripotent stem cell theory is increasingly strong, the mechanisms underlying the abnormal differentiation of stem cells into Paget cells are still not fully understood. 18,19
Mutations in certain genes that regulate stem cell differentiation can cause stem cells to differentiate abnormally into Paget cells. These mutations can occur spontaneously or be triggered by environmental factors, such as exposure to radiation or chemicals. Disruption of cellular signaling pathways important for stem cell differentiation can also lead to abnormal differentiation. These signaling pathways involve molecules such as Wnt, Notch, and Hedgehog, which regulate cell proliferation, differentiation, and migration. The microenvironment surrounding stem cells, including neighboring cells, the extracellular matrix, and growth factors, can influence stem cell differentiation. Changes in the microenvironment, such as chronic inflammation or exposure to certain hormones, can trigger abnormal differentiation of stem cells into Paget cells. A better understanding of the role of pluripotent stem cells in the pathogenesis of invasive MPD without malignancy may pave the way for the development of new prevention and treatment strategies. For example, therapies targeting stem cells or the signaling pathways involved in abnormal differentiation could be a promising approach to treating invasive MPD without malignancy. In addition, the identification of stem cell biomarkers in Paget cells can help in early diagnosis and monitoring of disease progression. These biomarkers can be proteins, nucleic acids, or metabolites found in blood, urine, or skin tissue. Further research is needed to uncover the molecular mechanisms underlying the abnormal differentiation of stem cells into Paget cells and to develop therapies that target the stem cells or signaling pathways involved. This research may involve studies of animal models of MPD, Paget cell cultures, and analysis of patient tissue. The pluripotent stem cell theory offers an interesting and promising perspective for understanding the pathogenesis of invasive MPD without malignancy. The evidence supporting this theory is growing, but further research is needed to confirm the role of pluripotent stem cells and uncover the molecular mechanisms underlying abnormal differentiation. A better understanding of the role of stem cells in invasive MPD without malignancy may pave the way for the development of new and more effective prevention and treatment strategies.
2024-06-21T15:18:26.319Z
2024-06-19T00:00:00.000
{ "year": 2024, "sha1": "5bd98ac8a07f20e33879e23575d82abd4f5bfe76", "oa_license": "CCBYNCSA", "oa_url": "https://www.bioscmed.com/index.php/bsm/article/download/1073/1227", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "71a0b0a495e63dbcf7be8b5e2dfce884859c1f81", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
2182090
pes2o/s2orc
v3-fos-license
Loneliness and the rate of motor decline in old age: the Rush Memory and Aging Project, a community-based cohort study Background Being alone, as measured by less frequent social interactions, has been reported to be associated with a more rapid rate of motor decline in older persons. We tested the hypothesis that feeling alone is associated with the rate of motor decline in community-dwelling older persons. Methods At baseline, loneliness was assessed with a 5-item scale in 985 persons without dementia participating in the Rush Memory and Aging Project, a longitudinal community-based cohort study. Annual detailed assessments of 9 measures of muscle strength and 9 motor performances were summarized in a composite measure of global motor function. Results Linear mixed-effects models, which controlled for age, sex and education, showed that the level of loneliness at baseline was associated with the rate of motor decline (estimate, -0.016; S.E., 0.006; p = 0.005). For each 1-point higher level of loneliness at baseline, motor decline was 40% more rapid; this effect was similar to the rate of motor decline observed in an average participant 4 years older at baseline. Furthermore, this amount of motor decline per year was associated with about a 50% increased risk of death. When terms for both feeling alone (loneliness) and being alone were considered together in a single model, both were relatively independent predictors of motor decline. The association between loneliness and motor decline persisted even after controlling for depressive symptoms, cognition, physical and cognitive activities, chronic conditions, as well as baseline disability or a history of stroke or Parkinson's disease. Conclusions Among community-dwelling older persons, both feeling alone and being alone are associated with more rapid motor decline, underscoring the importance of psychosocial factors for motor decline in old age.

Background

Loss of motor function is a common consequence of aging and is associated with adverse health consequences [1-5]. The specific motor abilities impaired in old age vary and encompass a wide spectrum, including loss of muscle strength and bulk, balance, dexterity and reduced gait speed, which can occur even in the absence of overt diseases [6-8]. By 2030, 20% of Americans, roughly 72 million people, will be 65 years of age or older [9], and among those aged 80 years or older, the fastest-growing segment, 40% or more will have some loss of motor abilities [10]. Identifying risk factors for age-related motor decline is an essential first step for the rational development of therapeutic interventions to reduce the growing burden of motor impairment in our rapidly aging population. Although risk factors for common diseases known to cause motor dysfunction, such as stroke, are recognized, few risk factors for idiopathic motor decline in old age have been identified. While the benefits of physical activity on motor function are well known [11-14], there is increasing recognition of the importance of lifestyle and psychosocial factors for healthy aging in older persons [15,16]. Increased social engagement, as measured by the frequency of late-life social activities, is associated with longevity and a decreased risk of dementia in older individuals, while being alone is associated with disability and a more rapid rate of motor decline [15-18].
Recent studies suggest that not only being alone, but also self-perceived isolation, i.e., loneliness, has a detrimental effect on a wide range of physical functions including sleep, immune responses, level of physical activity, cognition and risk of Alzheimer's disease [19][20][21][22][23]. These reports suggest that not only being alone, but also loneliness, might be related to motor decline in old age. Loneliness could serve as a marker for other processes, such as inflammation or cardiovascular diseases, which contribute to motor decline. Alternatively, loneliness may be a causal risk factor for motor decline. For example, since loneliness is associated with poor self-regulation, it may lead to behavioral changes such as decreased exercise or changes in eating habits, which could in turn cause motor decline [24]. Furthermore, in addition to functional and structural links between social and motor behavior, social activity, like physical activity, may contribute to improved motor function by increasing neuronal plasticity and protecting against tissue damage [25]. Despite these reports, little is currently known about whether simply feeling lonely or disconnected from others and dissatisfied with social interactions is associated with motor decline in old age [23,24,26]. To test the hypothesis that feeling alone is associated with the rate of motor decline in old age, we used data from 985 older participants in the Rush Memory and Aging Project who underwent annual detailed examinations for up to 12 years [27]. At enrollment, participants underwent assessment of loneliness with a modified version of the de Jong-Gierveld Loneliness Scale. They also underwent baseline and annual detailed exams which included assessment of motor strength and performances [18,23]. We used linear mixed-effects models to test the hypothesis that a higher level of loneliness at study entry was associated with a more rapid rate of motor decline during the course of the study. In further analyses, we examined whether terms for both feeling alone and being alone (based on the frequency of participation in social activities and size of social network) showed separate associations with the rate of motor decline when considered together in a single model. Finally, we examined whether the association of loneliness and motor decline was confounded when controlling for depressive symptoms, cognition, other leisure activities and chronic conditions. Participants Participants were recruited from about 40 retirement facilities and subsidized housing facilities, as well as from church groups and social service agencies in northeastern Illinois. All participants signed an informed consent agreeing to annual clinical evaluation. In addition, all participants signed an anatomical gift act donating their entire brain and spinal cord, as well as selected nerves and muscles, to Rush investigators at the time of death. The study was in accordance with the latest version of the Declaration of Helsinki and was approved by the Institutional Review Board of Rush University Medical Center [27]. At the time of these analyses, 1,201 participants had enrolled and completed a baseline evaluation. Eligibility for these analyses required 1) the absence of clinical dementia at the baseline evaluation; 2) a valid assessment of loneliness at baseline; and 3) a baseline motor evaluation and at least one follow-up evaluation in order to assess change in motor function. 
We excluded 71 persons who met criteria for dementia at baseline and 86 persons who had completed a baseline evaluation but died before their first follow-up examination or had not been in the study long enough for a follow-up evaluation. Of the 1,044 participants eligible for these analyses, 59 had missing data (5.7%). This left 985 persons for these analyses, with a mean follow-up of 5.0 years (SD, 2.44; range, 0.4-12 years). Clinical Diagnoses Clinical diagnoses were made using a multi-step process, as previously described [27]. Cognitive function testing included 19 performance tests which were summarized into a composite measure of global cognition [27]. Participants were then evaluated in person by an experienced physician who used published criteria to diagnose dementia [28], stroke [29], or Parkinson's disease [30]. Assessment of Loneliness We assessed loneliness using a modified version of the de Jong-Gierveld Loneliness Scale [23]. The 5 items included: a) "I experience a general sense of emptiness," b) "I miss having people around," c) "I feel like I don't have enough friends," d) "I often feel abandoned," and e) "I miss having a really good friend." Item scores were averaged to yield a total score that could range from 1 to 5, with higher values indicating a higher level of loneliness. Assessment of Motor Function Grip and pinch strength were measured bilaterally using Jamar hydraulic dynamometers (Lafayette Instruments, Lafayette, IN). Hand-held dynamometry (Lafayette Manual Muscle Test System, Model 01163, Lafayette, IN) was used to assess muscle strength in arm abduction, arm flexion, arm extension, hip flexion, knee extension, plantar flexion, and ankle dorsiflexion bilaterally. Time and number of steps to walk 8 feet and to turn 360° were measured. Time to stand on each leg and then on toes for 10 seconds was recorded. We counted the number of steps off line when walking an 8-foot line in a heel-to-toe manner. We also measured the number of pegs that could be placed (Purdue Pegboard) in 30 seconds and the rate of index finger tapping for 10 seconds (Western Psychological Services, Los Angeles, CA) bilaterally. A composite measure of global motor function was constructed by converting the raw score from each of the 18 motor measures to a z score, using the mean and standard deviation from all participants at baseline, and averaging the z scores of all of the motor tests together [18]. Assessment of Other Covariates Two measures of social engagement were used as indicators of social isolation, i.e., being alone. We used a previously established composite measure of late-life social activity in these analyses [23,31]. Frequency of participation in social activity was based on 6 items about activities involving social interaction. Each activity was rated on a 5-point scale, with a higher number indicating a higher frequency of participation: 1 indicating participation in the activity once a year or less; 2, several times a year; 3, several times a month; 4, several times a week; and 5, every day or almost every day. Responses on each item were averaged to yield the composite measure used in these analyses [23]. The second measure, social network size, quantified the number of children, family, and friends each person had and how often they interacted with them per month [32]. Sex was recorded at the baseline interview. Age in years was computed from the self-reported date of birth and the date of the baseline clinical examination, at which the strength measures were first collected. 
Education (reported highest grade or years of education) was obtained at the time of the baseline cognitive testing. Weight and height were measured and recorded at each visit by a trained technician blinded to previously collected data. Body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared. Physical activity was assessed using questions adapted from the 1985 National Health Interview Survey [18]. Minutes spent engaged in each activity were summed and expressed as hours of activity/week. Frequency of participation in cognitively stimulating activities was quantified with a scale wherein people rated how often they had participated in each of 7 cognitive activities (e.g., reading a newspaper) over the past year [33]. Disability was assessed at baseline with the 6-item Katz scale [34]. Depressive symptoms over the prior week were assessed with a 10-item version of the Center for Epidemiologic Studies Depression (CES-D) scale [35]. The sum of the number of vascular risk factors (i.e., the sum of hypertension, diabetes mellitus, and smoking) and the sum of vascular diseases (i.e., myocardial infarction, congestive heart failure, and claudication) were used in these analyses [36]. Statistical Analyses We examined the bivariate associations of loneliness and global motor function with age, sex, education and other covariates. We used mixed-effects models [37] to assess the relation of loneliness with the baseline level of global motor function and its annual rate of change. The core model included a term for time in years since baseline, a term for loneliness at baseline (centered at its mean), and a term for its interaction with time since baseline. The term for time indicates the average annual rate of change in global motor scores for a typical participant with a median loneliness score; the term for loneliness indicates the average difference in motor function at baseline associated with a 1-point change in the level of loneliness score from the median; and the interaction of loneliness with time indicates the effect of a 1-point change in the level of loneliness score on the annual rate of change in global motor scores. To control for the effect of demographic variables, these and all subsequent models included terms for age, sex, and education and their interactions with time. In subsequent models, we added terms to determine if the association of loneliness and global motor scores might vary by age, sex, and education. Next, we examined whether measures of social isolation or depression accounted for the association of loneliness with global motor scores. Then we examined whether several covariates that might affect motor function altered the association of loneliness and motor decline. To determine the clinical significance of the amount of change in global motor function, we constructed Cox proportional hazards models examining adverse health consequences of change in motor function and estimated the hazard ratios associated with a given unit of change. These models controlled for age, sex, education, and baseline global motor function. For these analyses we used ordinary least squares regression to estimate the annual rate of change in global motor function for each person. Models were examined graphically and analytically, and assumptions were judged to be adequately met. The a priori level of statistical significance was 0.05. Programming was done in SAS version 9.1.3 (SAS Institute Inc, Cary, NC) [38]. 
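Since the modeling pipeline above was implemented in SAS, which not every reader has at hand, the following is a minimal sketch in Python of the same three steps: building the composite motor score, fitting the loneliness-by-time mixed-effects model, and relating per-person motor slopes to death. The data file and all column names (id, time, died, and so on) are hypothetical, and the sketch illustrates the approach rather than reproducing the authors' code.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

# Hypothetical long-format table: one row per person-visit, with columns
# id, time (years since baseline), raw motor measures m1..m18, baseline
# loneliness, age, sex, educ, and (at baseline) followup and died.
df = pd.read_csv("map_visits.csv")
MOTOR = [f"m{i}" for i in range(1, 19)]

# Composite global motor score: z-score each of the 18 measures against
# the baseline mean/SD of all participants, then average the z scores.
base = df[df["time"] == 0]
for m in MOTOR:
    df[m + "_z"] = (df[m] - base[m].mean()) / base[m].std()
df["motor"] = df[[m + "_z" for m in MOTOR]].mean(axis=1)

# Core model: random intercept and slope per person; the loneliness:time
# coefficient estimates the effect of a 1-point higher baseline
# loneliness score on the annual rate of motor decline.
fit = smf.mixedlm("motor ~ time * (loneliness + age + sex + educ)",
                  df, groups=df["id"], re_formula="~time").fit()
print(fit.summary())

# Clinical significance: per-person OLS slope of motor over time, then a
# Cox proportional hazards model for death (covariates numeric here).
slopes = df.groupby("id").apply(
    lambda g: sm.OLS(g["motor"], sm.add_constant(g["time"]))
                .fit().params["time"]).rename("motor_slope")
surv = df[df["time"] == 0].set_index("id").join(slopes)
cph = CoxPHFitter()
cph.fit(surv[["followup", "died", "motor_slope", "age", "sex", "educ"]],
        duration_col="followup", event_col="died")
cph.print_summary()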
Descriptives of Loneliness The characteristics of the cohort at baseline are included in Table 1. Baseline loneliness scores were approximately normally distributed (mean, 2.26; SD, 0.65; interquartile range, 0.60). Scores ranged from 1.0 to 4.6, with higher values indicating more loneliness. Loneliness did not vary by sex (t[983] = -1.29, p = 0.199). Participants who reported higher levels of loneliness at baseline were older, less educated, reported less frequent participation in social, physical, and cognitive activities, reported more disability, had lower cognitive function, and were more likely to have vascular diseases (Table 2). We used a linear mixed-effects model controlled for age, sex, and education to test the hypothesis that the baseline loneliness score is associated with the rate of motor decline. On average, global motor function declined by about -0.04 unit/year (Time, Table 3, Model A). Baseline loneliness was associated with the global motor score at baseline (Loneliness, Table 3) as well as the annual rate of change in global motor score (Loneliness*Time, Table 3, Model A). Comparing the rate of motor decline in two participants with different loneliness scores at baseline shows that the person with a 1-point higher loneliness score would exhibit a 40% more rapid annual rate of motor decline. This can be computed by dividing the estimate for the interaction term of loneliness and rate of motor decline (Loneliness*Time, Table 3, Model A) by the estimate for the term for the annual rate of motor decline (Time, Table 3, Model A): -0.016 / -0.040 = 0.40. Figure 1, based on this model, compares the rate of motor decline in two participants with high and low baseline loneliness scores. The rate of motor decline for the lonely person (90th percentile; score, 3.2) was about 80% more rapid than that of a person who was not lonely (10th percentile; score, 1.4). Since the term for age in the core model was also related to the rate of global motor decline, we could compare the amount of motor decline associated with increased age with the amount of motor decline associated with loneliness. For each additional year of age, the global motor score declined an additional 0.004 standard units (Age*Time, Table 3, Model A). In contrast, each 1-point increase in baseline loneliness was associated with an additional 0.016 standard unit decline in global motor function (Loneliness*Time, Table 3, Model A). [Table note: Social Activity, self-reported frequency of participation in 6 activities involving social interaction (higher score = more frequent participation); Social Network Size, self-reported number of children, family, and friends and frequency of monthly interaction; Physical Activity, self-reported participation in 5 physical activities (hours/week); Cognitive Activity, self-reported frequency of participation in 7 cognitive activities; Katz Disability, 6-item measure of basic activities of daily living (higher score = greater disability); Global Cognition, composite measure based on cognitive test performances (higher score = higher cognition); Depressive Symptoms, modified 10-item CES-D scale (higher score = greater depressive symptomatology); BMI, weight in kilograms divided by height in meters squared; Vascular Risk Factors, sum of self-reported smoking, diabetes, and hypertension; Vascular Diseases, sum of self-reported myocardial infarction, congestive heart failure, claudication and stroke.] 
Thus, a 1-point higher loneliness score was equivalent to an average participant being about 4 years older at baseline (0.016 / 0.004 = 4). Additional analyses showed that the association of loneliness with motor decline (Loneliness*Time) did not vary by age, sex or education (results not shown). Loneliness, Social Isolation and Change in Motor Function Indicators of social isolation such as the frequency of social activity have been associated with disability, mortality and motor decline, as previously reported [18]. Therefore, we repeated the core model adding terms for social isolation (i.e., late-life social activity and social network size) as well as their interactions with the annual rate of motor decline (Time). In this analysis, both loneliness and social isolation as measured by the frequency of social activities were relatively independently associated with the rate of motor decline (Table 3, Model B). Social network size was not related to motor function or its rate of decline in this same model (Table 3, Model B). Loneliness, Other Covariates and Change in Motor Function Because feeling lonely can be a symptom of depression and lonely persons are prone to experience depressive symptoms, we conducted additional analyses in an effort to disentangle these related constructs. In these analyses, we excluded 1 item about loneliness (i.e., "I felt lonely") from the 10-item CES-D scale (9-item CES-D scale: mean, 1.15; SD, 1.57). Controlling for the 9-item CES-D score in the core model did not reduce the association of loneliness with motor decline (Loneliness*Time: Estimate, -0.018; S.E., 0.006; p = 0.002). Including a term for global cognition in the core model reduced the association of loneliness with motor decline by about 18%, but the association remained significant (Loneliness*Time: Estimate, -0.013; S.E., 0.005; p = 0.015). In subsequent analyses, including terms for the frequency of cognitive and physical activities, body composition, vascular risk factors and vascular disease burden in combination with the other terms of Model A (Table 3) described above did not affect the association of loneliness and the rate of motor decline (results not shown). Next, we determined that our results were not due to participants with baseline disability or a history of motor impairment due to neurologic disease. The association between loneliness and the rate of motor decline was unchanged when we controlled for baseline disability using the Katz scale (Loneliness*Time: Estimate, -0.017; S.E., 0.005; p = 0.001) or after excluding participants with a history of stroke or Parkinson's disease (Loneliness*Time: Estimate, -0.015; S.E., 0.006; p = 0.005). 
Clinical Significance of the Change in Motor Function Associated with Loneliness To determine the clinical significance of the increased rate of decline of global motor scores associated with a 1-point higher loneliness score at baseline (Loneliness*Time, Model A, Table 3), we constructed Cox proportional hazards models examining the association of change in motor function with death and subsequently estimated the hazard ratios associated with a 40% increased annual decline (i.e., the amount of change in global motor scores associated with a 1-point higher baseline loneliness score). From these models (data not shown), we calculated that the 40% increased rate of motor decline in a participant with a 1-point higher loneliness score at baseline was associated with about a 50% increased risk of death as compared to a participant with an average loneliness score (Hazard Ratio: 1.21; 95% CI: 1.08, 1.35). Discussion In a cohort of nearly 1000 older persons free of dementia at baseline, we found that a higher level of loneliness (i.e., self-perceived isolation) was associated with a more rapid rate of motor decline in community-dwelling elders. This association persisted even after controlling for social isolation, as measured by frequency of social activities and social network size, as well as a wide range of potential confounding variables including depression, cognition, physical and cognitive activities and chronic conditions. In several sensitivity analyses, this association was unchanged after controlling for baseline disability as well as a history of stroke or Parkinson's disease. Accumulating evidence suggests that social isolation as measured by the frequency of late-life social activities or the size of the social network is related to health outcomes such as longevity and risk of dementia, as well as the rates of cognitive and motor decline [18,39]. However, not only social isolation but also self-perceived isolation, i.e., loneliness, has a detrimental effect on a wide range of functions including sleep, immune responses, level of physical activity, cognition and risk of AD [19][20][21][22][23]. A prior study reported that loneliness is associated with decreased physical activity or exercise, but that report analyzed physical activity levels based on self-report and did not assess levels of other late-life leisure activities [24]. The current study extends prior reports in several important ways. First, we report that loneliness is related to the rate of motor decline derived from objective motor performances tested annually for up to 12 years. Second, we show that when self-perceived isolation and social engagement as measured by late-life social activities are considered together in the same model, both are relatively independent predictors of the rate of change in motor function. Third, the association between loneliness and motor decline persisted even after controlling for a wide range of leisure activities including social, physical and cognitive activities, depressive symptoms and other possible confounding covariates, as well as after controlling for baseline disability or a history of stroke and PD. These results have important translational implications because they suggest that public health interventions designed to maintain motor function in older adults need to consider the possible role of self-perceived isolation as a modifiable risk factor, which might increase the efficacy of other efforts implemented to decrease the burden of age-related motor decline. 
The basis for the association between loneliness and motor decline is uncertain. Human social behavior is generated in the brain through interconnected brain structures which process different elements of sociocognitive and socioaffective information that are eventually integrated and translated into motor action [40]. Loneliness and motor decline may be associated since both depend on the structural and functional integrity of the neural systems underlying the initiation, planning and execution of motor action, and both might be affected by common pathophysiological processes. Moreover, mirror neurons are thought to play important roles not only in generating movement but also in a wide range of activities essential for social interaction, including self-awareness and empathy. Further work is needed to elucidate the role of mirror neurons in human behavior, but this raises an intriguing possibility that mirror neurons might provide a structural causal linkage between self-perceived isolation (i.e., loneliness) and motor actions [41]. Motor function is necessary for social behavior and is thus an integral component of one's social body. Recent work suggests that social pain may function as an aversive signal, like physical pain, signaling the need to take action against factors which can damage or harm one's social body [42]. Thus, loneliness, as an expression of social pain, may be associated with motor decline because it serves as an aversive signal for factors which may impair motor function and the capacity for social behavior. Loneliness may represent a true risk factor which causes motor decline. For example, loneliness is associated with poor self-regulation, which may lead to behavioral changes such as decreased exercise or changes in eating habits, causing motor decline [24]. Alternatively, there may be common pathophysiological processes which affect both loneliness and motor impairment in old age. Loneliness is associated with a wide range of physiologic changes, such as higher levels of cortisol, increased inflammation, immune dysfunction, increased cardiovascular disease and impaired sleep patterns, which may all contribute to both loneliness and motor decline [20, 21, 43-45]. In addition to the functional and structural links between social and motor behavior, it is noteworthy that social activity, like physical activity, may contribute to improved motor function by increasing neuronal plasticity and protecting against ischemic or neurotoxic damage [25]. Animals subjected to social isolation show decreased dendritic arborization in the hippocampus and prefrontal cortex and down-regulation of brain-derived neurotrophic factor, which may be associated with impaired plasticity, degrading the ability to compensate for the accumulation of age-related pathologies [42]. Similar findings can be seen in humans with decreased levels of physical activity, which is also related to motor decline in old age. Finally, recent work suggests that loneliness is associated with alterations in human genome-wide transcriptional activity that might account for increased inflammatory disease in lonely individuals [44,45]. The current cohort study cannot distinguish between the existence of a pathophysiological process affecting both loneliness and motor decline and the possibility that motor decline is caused by loneliness. 
Thus, further work is needed to clarify the neurobiology underlying the association between loneliness and age-related motor decline, as well as the degree to which other psychosocial factors may contribute to motor decline in the elderly. Our study has some limitations. Most importantly, inferences regarding causality must be drawn with great caution from observational studies. While the findings were robust to potential confounding variables and sensitivity analyses, the potential for reverse causality cannot be excluded. Further, it is possible that residual confounding from an unmeasured latent variable is related to both loneliness and motor decline. Other limitations include the selected nature of the cohort, the self-report measures of chronic diseases and leisure activities, and the fact that this study did not assess simultaneous change in both loneliness and motor decline. However, several factors increase confidence in our findings. Perhaps most importantly, the study enjoys high follow-up participation, reducing bias due to attrition. In addition, loneliness was assessed among persons without dementia based on a detailed clinical evaluation, and motor function was evaluated as part of a uniform clinical evaluation that incorporated many widely accepted and reliable strength and motor performance measures; strength testing was done in all four extremities, and motor performances were tested in both the arms and legs. The aggregation of multiple measures of motor function into a composite measure yields a more stable measure of motor function and increases statistical power to identify associations. In addition, a relatively large number of older persons representative of the general population were studied, so that there was adequate statistical power to identify the associations of interest while controlling for several potentially confounding variables. Conclusions In a cohort of nearly 1000 community-dwelling older persons free of dementia at study entry and followed for up to 12 years, we found that simply feeling lonely or dissatisfied with social interactions is associated with a more rapid rate of motor decline. Furthermore, we found that both feeling alone and being alone are associated with a more rapid rate of motor decline. These findings underscore that psychosocial factors may affect the efficacy of interventions designed to maintain motor function in older adults. Moreover, factors such as self-perceived isolation might be modifiable risk factors that can be targeted to increase the efficacy of efforts to meet the growing public health challenge and burden of motor impairment in our rapidly aging population.
2017-06-29T12:54:54.645Z
2010-10-22T00:00:00.000
{ "year": 2010, "sha1": "4ca6b49cabcf2efb171378231bfebfb0591f2be3", "oa_license": "CCBY", "oa_url": "https://bmcgeriatr.biomedcentral.com/track/pdf/10.1186/1471-2318-10-77", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "441796ab92784de01d773c321c4a2ce36b2deab5", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
42800412
pes2o/s2orc
v3-fos-license
Analyzing the Tourism-Energy-Growth Nexus for the Top 10 Most-Visited Countries By using the Emirmahmutoglu-Kose bootstrap Granger non-causality method, this study explores the directions of causality among tourist arrivals, tourism receipts, energy consumption and economic growth for the top 10 most-visited countries (France, the USA, Spain, China, Italy, Turkey, Germany, the United Kingdom, Russia, and Mexico) in the world. This study finds a variety of causal directions between the pairs of analyzed variables for each country and the panel. Since cross-sectional dependence exists across the top countries for the analyzed variables, the bootstrap Granger causality test, which accounts for this issue in the estimation process, presumably produces reliable and accurate outputs. Further results and policy implications are discussed in this empirical study. Introduction A general idea has emerged in the last decade: that tourism flows and energy consumption increase revenue, produce growth, create employment in the tourism and energy sectors and cause a general improvement in economic development (Crouch and Ritchie 1999). Currently, both energy and tourism are related to the sustainability of the economy in the world (Daly 1991; Hall and Page 2014). According to the U.S. Energy Information Administration (EIA), world consumption of marketed energy will increase from 549 quadrillion British thermal units (Btu) in 2012 to 815 quadrillion Btu in 2030 (EIA 2016). Similarly, the United Nations World Tourism Organization (UNWTO) projects that world international tourist arrivals will increase from 1.0 billion in 2012 to 1.8 billion in 2030 (UNWTO 2013). Especially in developing countries, tourism flows and energy consumption can impact positively on the trade balance, employment and limited resources of host countries (Daly 1991; Asif and Muneer 2007; Isik 2012, 2013; Hall and Page 2014; Dogan et al. 2015; Isik 2015; Isik and Shahbaz 2015; Dogan and Seker 2016; Ertugrul et al. 2016; Isik et al. 2017a, 2017b). With the continual increase in tourism and travel activities globally, there are serious allegations that the industry is significantly contributing towards climate change through its impact on CO2 emissions (Sharpley and Telfer 2015). According to Karabuğa et al. (2015), 90% of energy consumption occurs during outgoing and incoming travel to destinations (43% air, 42% land transport, 15% sea and railways). The tourism sector accounts for about 5% of worldwide carbon dioxide (CO2) emissions. These impacts might be reduced if proper courses of action are taken (Dogru et al. 2016). The current literature recommends reducing the consumption of traditional energy resources while increasing the use of renewable or alternative energy for sustainable economic growth and tourism development (Scott and Becken 2010; Jenkins and Nicholls 2010). Renewable energy resources (solar energy, biomass energy, heat pumps, wind power, geothermal energy, etc.) are the energy forms most suitable for a clean-environment concept, as they do not pollute during production and are renewable (Karabuğa et al. 2015). 
Although the developed countries are the main generators of tourists, much more attention has been given to tourism development theory in the context of the less developed world. Nevertheless, the developed countries reap relatively major benefits from tourism, while the less developed countries' contribution to energy consumption due to tourism activities is smaller compared with that of developed countries (Sharpley and Telfer 2015). In particular, the tourism sector has shown exceptional improvement in France, the USA, Spain, China, Italy, Turkey, Germany, the United Kingdom, Russia, and Mexico, which are the top ten most-visited countries in terms of tourist arrivals in 2014. Table 1 shows tourist arrivals, tourism receipts, energy consumption and GDP statistics for the top 10 most-visited countries in the world for the latest available year. [Table 1 note: parentheses denote the ranks of the countries. Sources: World Bank, World Development Indicators (WDI 2016); UNWTO (2013), World Tourism Trends; British Petroleum Energy Outlook (2015).] According to the World Development Indicators (WDI) (2016), the top 10 most-visited countries (France, the USA, Spain, China, Italy, Turkey, Germany, the United Kingdom, Russia, and Mexico) reached 44.936 trillion USD in GDP in 2014 (a share of 58% of total world GDP). These countries' total trade was close to 16.567 trillion USD: exports were 7.828 trillion USD and imports were 8.739 trillion USD. The top 10 most-visited countries' economies reached an annual average growth rate of 2.00% as of 2014. Tourism receipts for the top 10 most-visited countries were 561.2 billion USD in 2014 and 660.8 billion USD in 2013. International tourism receipts for the analyzed countries reached an annual average growth rate of 18% as of 2014. Energy consumption in these countries has been growing over the last decade, and their consumption of 291.24 quadrillion Btu (British thermal units) in 2013 was more than half (56%) of the world total of 524.08 quadrillion Btu (British Petroleum Energy Outlook 2015). The purpose of this study is to investigate the causal relationships between tourism development, energy consumption and real GDP (economic growth). For this purpose, the bootstrap panel Granger causality test is used to examine the causal links between tourism, energy consumption and real GDP in France, the USA, Spain, China, Italy, Turkey, Germany, the United Kingdom, Russia, and Mexico. The top 10 most-visited countries offer a unique setting in which to investigate the causal relationship between tourism development, economic growth, and energy consumption because of their respective shares of the world's tourism development, GDP and energy consumption. This research contributes to the existing literature in several aspects. The investigation of the top countries is of interest to policy makers and governments, as these countries play important roles in the energy and tourism sectors and in the overall world economy, as discussed above. This study uses the recently developed Emirmahmutoglu-Kose bootstrap panel Granger causality test, which accounts for the issue of cross-sectional dependence, since we find that this issue appears in the analyzed data. The reported results are thus reliable and robust, and provide a sound basis for policy implications. 
The rest of this paper is organized as follows. Section 2 presents the main findings of previous studies on this nexus, Section 3 defines the methodology, Section 4 discusses the empirical results and, finally, Section 5 elaborates upon the conclusions and policy recommendations. Literature Review Tourism, energy consumption and their relation with economic growth were not examined together in many studies until recent years. Some early works focused on tourism economics or energy economics alone, and few studies have used these two variables, which affect economic growth, in one equation. The impact of tourism and energy consumption on economic growth has not always been well argued in the economic literature. It is appropriate for our study to begin by investigating the literature that has stressed the tourism-energy-growth link. There are some earlier studies on energy consumption, the tourism sector and economic growth (Oh et al. 2010; Akbostancı et al. 2011; Liu et al. 2011; O'Mahony et al. 2012; Pardo et al. 2012; Pace 2015; Moutinho et al. 2015; Isik et al. 2017a, 2017b). We also discovered different works in the tourism literature that have examined energy and CO2 emissions (Liu et al. 2011; Scott 2011; Wu and Shi 2011; Lee and Brahmasrene 2013; Lee and Kwag 2013; Katircioglu et al. 2014). The current literature focused on the energy-growth relation displays a wide variety of results. The energy-growth connection has been widely examined empirically since the study of Kraft and Kraft (1978). The general results of current works on energy consumption are not uniform, and different econometric techniques (unit root tests, cointegration tests, etc.) have been used to identify the direction of causality between these variables (Shahbaz and Lean 2012). We have also studied the main findings of the economic literature on the impact of energy consumption and tourism flows on economic growth, as shown in Table 2 (Energy, Tourism and Economic Growth Causality). Based on the results, we can classify the research into four acceptable theories: the growth, conservation, feedback and neutrality theories. It is generally agreed that tourism and energy play a robust role in both the income and the expenditure and investment of goods and services within an economy. In light of these findings, the mission of this section is to provide a review of the previous studies on the causal link between tourism, energy and growth. In the related literature, most studies have applied Granger causality tests to investigate the causal relationships of tourism development or energy consumption with economic growth as pairs. For instance, while Aqeel and Butt (2008) apply Hsiao's Granger causality test for Pakistan, Wolde-Rufael (2004) applies the Toda and Yamamoto (1995) causality test, and both find causal relationships running from energy consumption to economic growth. However, Ozturk et al. (2010) apply the panel Granger causality test to 51 low- and middle-income countries and find a causal relationship from economic growth to energy consumption. Furthermore, while Chen et al. (2007) apply panel causality tests for China and find no causal relationship between energy consumption and economic growth, Yuan et al. (2007) apply Granger causality for the same country and find bidirectional causality between these two variables. 
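The four-theory taxonomy just mentioned reduces to a simple decision rule on the two Granger non-causality p-values for any pair of variables. The short Python sketch below makes the mapping explicit, using the energy-growth pair as the example; the labels follow the convention of this literature, and the function is illustrative rather than code from any of the cited studies.

def classify_nexus(p_energy_to_growth, p_growth_to_energy, alpha=0.05):
    # Growth theory: energy Granger-causes growth only.
    # Conservation theory: growth Granger-causes energy only.
    # Feedback theory: causality runs in both directions.
    # Neutrality theory: no causality in either direction.
    e2g = p_energy_to_growth < alpha
    g2e = p_growth_to_energy < alpha
    if e2g and g2e:
        return "feedback"
    if e2g:
        return "growth (energy-led growth)"
    if g2e:
        return "conservation (growth-led energy)"
    return "neutrality"

# Example: p-values of 0.02 and 0.41 support the growth theory.
print(classify_nexus(0.02, 0.41))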
Model and Data Following the studies of Lee and Brahmasrene (2013), Tiwari et al. (2013), Leon et al. (2014), and Tang et al. (2016) that focus on the tourism-energy-growth nexus, this study uses models in which economic growth (Y) is the dependent variable and energy consumption (EGY) and tourism are the independent variables. We used E-Views 8 econometric software for the estimations. To check the robustness of the direction of causality between tourism and economic growth, we employ both tourist arrivals (ARRV) and tourism receipts (RCPT). The models can be written as: Y_it = f(EGY_it, ARRV_it) (1) and Y_it = f(EGY_it, RCPT_it) (2), where t and i denote the time period and country. According to the World Development Indicators (WDI) (2016), the world's 10 most-visited countries are France, the US, Spain, China, Italy, Turkey, Germany, the UK, Russia and Mexico. Regarding the data description, economic growth (Y) is measured by real gross domestic product, constant in 2005 USD; energy consumption (EGY) is expressed in kg of oil equivalent; tourist arrivals (ARRV) equal the number of international inbound tourists; and tourism receipts (RCPT) are measured by expenditures by international inbound tourists, constant in 2005 USD. The annual data for the analyzed variables from 1995-2013 are sourced from the WDI (2016) and are available at http://data.worldbank.org/. It is worth noting that we use the available data with the longest time period. 
Methods and Empirical Results As the main research purpose of this study is to investigate the directions of causality among economic growth, energy consumption and tourism, we should find an appropriate and reliable estimation technique. One of the commonly observed but often ignored issues in the literature is the presence of cross-sectional dependence across countries in panel data. Traditional Granger causality approaches, such as the pair-wise Granger causality test and Granger causality based on a vector error correction mechanism, may produce inconsistent output because they do not take the issue of cross-sectional dependence into account. In the presence of cross-sectional dependence, we should employ a second-generation Granger causality method robust enough to handle this issue. To this end, we analyze whether or not the analyzed variables exhibit cross-sectional dependence by using Pesaran's cross-sectional dependence (CD) test (Pesaran 2004). Results from the CD test are reported in Table 3. We have enough evidence to reject the null hypothesis of cross-sectional independence in favor of the alternative hypothesis of cross-sectional dependence across the top 10 most-visited countries for economic growth, energy consumption, tourist arrivals and tourism receipts at the 1% level of significance. According to the reported results, this study therefore uses the bootstrap methodology for Granger causality testing in cross-sectionally dependent panels developed by Emirmahmutoglu and Kose (2011) rather than the traditional approaches mentioned above. The Emirmahmutoglu-Kose bootstrap causality test builds on the meta-analysis of Fisher (1932) and on the idea of a vector autoregression (VAR) in levels with the lag order augmented by the maximal order of integration, due to Toda and Yamamoto (1995). To test the null hypothesis of Granger non-causality, the authors estimate a level VAR with lag order (k_i) plus the maximum order of integration of the variables (dmax_i) in heterogeneous mixed panels. The only prior information needed is the dmax_i suspected to occur in the system for each country. Following the original study, we applied the Augmented Dickey-Fuller (ADF) unit root test (Dickey and Fuller 1979) to the analyzed time-series data so as to determine the maximum order of integration. Moreover, we also used the Phillips-Perron unit root test for the purpose of robustness. Results from the ADF and Phillips-Perron unit root tests are given in Tables 4 and 5, respectively. Both tests produce virtually the same order of integration for the analyzed variables. Overall, the maximum order of integration (dmax) is determined to be two for each variable for the panel. The next step is to reveal the directions of bootstrap panel Granger causalities for the pairs of economic growth-energy consumption, economic growth-international tourist arrivals, economic growth-international tourism receipts, international tourist arrivals-energy consumption and international tourism receipts-energy consumption. The empirical results of the bootstrap panel Granger causality test for each pair of variables and for the panel are reported in detail in Tables A1-A5 in Appendix A. 
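For readers without E-Views, the building blocks of the procedure just described can be sketched in Python. The CD statistic implements Pesaran's (2004) formula, CD = sqrt(2T/(N(N-1))) multiplied by the sum of pairwise correlations; the lag-augmented Wald test follows the Toda and Yamamoto (1995) idea of leaving the dmax augmentation lags unrestricted; and Fisher's (1932) statistic combines country-level p-values, whose critical values Emirmahmutoglu and Kose (2011) obtain by bootstrap (the resampling loop itself is omitted here). This is an illustrative reconstruction under those definitions, not the authors' estimation routine.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats
from statsmodels.tsa.stattools import adfuller

def pesaran_cd(panel):
    # panel: T x N array (T years, N countries) for one variable. Under
    # the null of cross-sectional independence, CD is asymptotically N(0, 1).
    T, N = panel.shape
    rho = np.corrcoef(panel, rowvar=False)
    cd = np.sqrt(2.0 * T / (N * (N - 1))) * rho[np.triu_indices(N, 1)].sum()
    return cd, 2.0 * stats.norm.sf(abs(cd))

def max_integration_order(series, d_max=2, alpha=0.05):
    # Difference the series until the ADF test rejects a unit root (dmax_i).
    s, d = np.asarray(series, dtype=float), 0
    while d < d_max and adfuller(s)[1] > alpha:
        s, d = np.diff(s), d + 1
    return d

def toda_yamamoto_wald(y, x, k, dmax):
    # Estimate the y-equation of a level VAR with k + dmax lags by OLS and
    # Wald-test only the first k lags of x (H0: x does not Granger-cause y).
    df = pd.DataFrame({"y": y, "x": x})
    for lag in range(1, k + dmax + 1):
        df[f"y_l{lag}"] = df["y"].shift(lag)
        df[f"x_l{lag}"] = df["x"].shift(lag)
    rhs = " + ".join(c for c in df.columns if "_l" in c)
    fit = smf.ols(f"y ~ {rhs}", data=df.dropna()).fit()
    w = fit.wald_test(", ".join(f"x_l{l} = 0" for l in range(1, k + 1)),
                      use_f=False)
    return float(np.squeeze(w.statistic)), float(w.pvalue)

def fisher_panel_stat(country_pvalues):
    # Fisher (1932) combination of the N country p-values; the bootstrap
    # supplies its critical values when cross-sectional dependence is present.
    return -2.0 * np.sum(np.log(country_pvalues))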
In conjunction with the information on the order of integration of the analyzed variables, this study further looks at the direction of Granger causality between economic growth and energy consumption, between economic growth and tourist arrivals, between economic growth and tourism receipts, between tourist arrivals and energy consumption, and between tourism receipts and energy consumption. Results from the bootstrap Granger causality test of Emirmahmutoglu and Kose (2011) for each pair of variables are reported in detail in Appendix A, while Table 6 summarizes the outcomes obtained from the bootstrap causality method. With regard to the causal relationships, we found a one-way causal relationship running from energy consumption to economic growth (energy-led growth hypothesis) in Spain, and running from economic growth to energy consumption (growth-led energy hypothesis) in China, Turkey and Germany. We found a two-way causal relationship between energy consumption and economic growth in Italy, the USA and the panel of the top 10 most-visited countries, and no causal relationship between economic growth and energy consumption in France, Mexico, Russia and the UK. Furthermore, we found unidirectional causality from tourist arrivals to economic growth (tourism-led growth hypothesis) in China, Turkey and the panel, and from economic growth to tourist arrivals (growth-led tourism hypothesis) in Russia and Spain; bidirectional causality between economic growth and tourist arrivals in Germany; and no causality between them in France, Italy, Mexico, the UK and the USA. Evidence of one-way causality running from tourist arrivals to energy consumption (tourism-led energy hypothesis) is detected for Italy, Spain, Turkey, the UK and the USA, while the energy-led tourism hypothesis holds for China and Mexico. Moreover, two-way Granger causality is found for the panel of the analyzed economies, and no causality is valid for France, Germany and Russia. In addition, we found the presence of unidirectional causality from tourism receipts to economic growth in China, Germany, Turkey and the USA; the presence of the growth-led tourism hypothesis for Spain and the UK; the presence of both hypotheses for the panel; and the presence of no Granger causality in France, Italy, Mexico and Russia. Lastly, there is evidence for one-way Granger causality running from tourism receipts to energy consumption (tourism-led energy hypothesis) in Turkey and the USA, and running from energy to receipts (energy-led tourism hypothesis) in China, Mexico, Spain and the UK; evidence of two-way causality between energy and receipts for the panel of the top ten countries; and no causal relationship between them in France, Germany, Italy and Russia. Conclusions and Policy Recommendation Energy and tourism have become among the most important sectors of the economy in the last several decades. As economic growth is the main indicator of the economy, it is of interest for researchers to focus on the relationship between these two most important sectors and economic growth. Thus, this empirical study aims to find the directions of Granger causality among tourism receipts, tourist arrivals, energy consumption and economic growth for the top 10 most-visited countries in the world. The top countries have been responsible for about half of the world's tourism receipts, tourist arrivals, energy consumption and income in recent years. 
By using the Emirmahmutoglu-Kose bootstrap non-causality test, we found that an energy-led growth hypothesis is present in Spain; growth-led energy is present in China, Turkey and Germany; two-way causality is supported in Italy and the USA; and no causal relationship is found between growth and energy in France, Mexico, Russia and the UK. By using the data for tourist arrivals, a tourism-led growth hypothesis is present in China and Turkey; a growth-led tourism hypothesis is found in Russia and Spain; bidirectional causality exists between growth and tourism in Germany; and no causality occurs between the variables in France, Italy, Mexico, the UK and the USA. A tourism-led energy hypothesis is detected in Italy, Spain, Turkey, the UK and the USA; an energy-led tourism hypothesis is found in China and Mexico; and no causality is supported in France, Germany and Russia. By using the data for tourism receipts, a tourism-led growth hypothesis exists in China, Germany, Turkey and the USA; a growth-led tourism hypothesis occurs in Spain and the UK; and no Granger causality is detected in France, Italy, Mexico and Russia. A tourism-led energy hypothesis is present in Turkey and the USA; an energy-led tourism hypothesis is supported in China, Mexico, Spain and the UK; and no causal relationship is found in France, Germany, Italy and Russia. Our general policy implications provide guidance for researchers and governments in building better tourism and energy strategies. From this perspective, empirical analyses like this study play an important role in strengthening the energy-tourism-growth literature. In other words, energy consumption, tourism development and economic growth are strongly interrelated and cause one another. Therefore, the policy makers of these countries should take sustainable energy and tourism policies into account for sustainable economic growth. Similarly, policy makers should also take sustainable economic growth into account when designing sustainable energy and tourism policies for their countries. This study reveals the need for further empirical studies using different methods on the energy consumption-tourism development-economic growth literature. Such forthcoming studies will enable policy makers and researchers to better understand the causal relationships between these variables. Author Contributions: Cem Işık and Eyüp Doğan presented the basic ideas and the introduction, obtained the main results, and conducted the illustration section as well as the whole paper; Serdar Ongan added his contributions to Sections 1 and 5. Conflicts of Interest: The authors declare no conflicts of interest. Table 1. Tourist Arrivals, Tourism Receipt, Energy Consumption and GDP of the Top 10 Most-Visited Countries. Table 2. Energy, Tourism and Economic Growth Causality; * denotes statistical significance at the 1% level. Table A1. Bootstrap Granger causality between energy consumption and economic growth. Table A2. Bootstrap Granger causality between tourist arrivals and economic growth. Table A4. Bootstrap Granger causality between tourism receipts and economic growth. Table A5. Bootstrap Granger causality between tourism receipts and energy consumption. 
Tables A1-A5 report, for each country, the lag order (k_i) and the Wald test statistics with p-values for both directions of causality (e.g., the tourism-led energy hypothesis versus the energy-led tourism hypothesis in Table A5). ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively.
2017-11-19T12:45:05.657Z
2017-10-30T00:00:00.000
{ "year": 2017, "sha1": "1c6fbbef6b4bdbd2aa4bb6e2cfbc376b6c4cb135", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-7099/5/4/40/pdf?version=1509442526", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1c6fbbef6b4bdbd2aa4bb6e2cfbc376b6c4cb135", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
244611120
pes2o/s2orc
v3-fos-license
In vitro anti-proliferative effect of capecitabine (Xeloda) combined with mocetinostat (MGCD0103) in the 4T1 breast cancer cell line by immunoblotting Objective(s): The mouse breast cancer cell line 4T1 can accurately mimic the response to immune receptors and targeted therapeutic agents. Combined therapy has emerged as an important strategy with reduced side effects and maximum therapeutic effect. Mocetinostat (MGCD0103) is a member of the class I histone deacetylase inhibitors (HDACi), and its mechanism of action has not yet been fully defined. Capecitabine (Xeloda) is an antimetabolite and is currently widely utilized to treat a wide range of solid tumors. The aim of this study was to investigate the effects of capecitabine, mocetinostat and their combined application on the 4T1 cell line. Materials and Methods: The effects of combined administration of mocetinostat and capecitabine on 4T1 cells were investigated by cell viability and migration assays, apoptosis analysis, and the Western blotting technique. Results: The drug concentrations giving a half-maximal response (IC50) at 48 hr were determined as 1700 µM for capecitabine and 3.125 µM for mocetinostat, and the combination was applied as 50 µM capecitabine + 1.5 µM mocetinostat. In the capecitabine+mocetinostat combination group, we observed that cell migration decreased and DNA fragmentation increased compared with the control group. The capecitabine+mocetinostat combination induced apoptosis by decreasing Bcl-2, PI3K, Akt and c-myc protein levels while increasing Bax, caspase-3, PTEN, cleaved PARP, caspase-7, caspase-9, p53 and cleaved caspase-9 protein levels in 4T1 cells. Conclusion: Capecitabine and mocetinostat exerted toxic effects on 4T1 cancer cells by inducing apoptosis in a time- and concentration-dependent manner. These results showed that combined therapy at low concentrations was more effective than single-drug treatment at high concentrations. Introduction TNBC (triple-negative breast cancer) is defined by the absence of estrogen, progesterone, and human epidermal growth factor 2 receptors and accounts for 15% to 20% of all breast cancer cases (1). TNBC is a heterogeneous subtype of breast cancer that is beginning to be refined by its molecular characteristics and clinical response to targeted therapeutic approaches (2). The disease also has a more severe profile than hormone receptor-positive tumors, with higher relapse rates and shorter life expectancy. One of the mechanisms preventing cancer formation and development is apoptosis. In tumor cells, apoptosis is inhibited as a result of disruption of the balance of pro-apoptotic and anti-apoptotic proteins, decreased caspase activity, and disrupted death receptor signaling (3). The apoptosis mechanism remains a dominant focus in breast cancer today (4). In addition, understanding the apoptotic mechanisms of the drugs used in cancer treatment provides very important information on how to treat the disease. Chemotherapy is a commonly used treatment for breast cancer (5). When cancer drugs are given together, they are more effective in therapy. Combination therapy aims to use drugs that function through various pathways, reducing the chances of cancer cells developing resistance. When drugs with different effects are combined, each of them can be used at its maximum effectiveness without causing intolerable side effects (6, 7). 
Two or more therapeutic agents can specifically target the cancer-causing cells or signaling pathways, gaining the advantage of reaching multiple targets through the drugs' different mechanisms. Combination therapy is gaining traction as a viable technique for achieving a better long-term prognosis with fewer side effects and maximum therapeutic efficacy (8). This combination therapy is currently being tested in clinical trials as a powerful new cancer treatment technique (9). Epigenetic mechanisms are known to be involved in the initiation and progression of TNBC. As a result, the mechanisms, molecules, and signaling pathways of genes that act in epigenetic regulation during carcinogenesis are gaining attention (10). HDACi have been shown to inhibit tumor development, induce apoptosis, and control cellular functions ranging from metastasis to angiogenesis in cancer cells. Furthermore, HDACi have been shown to cause significantly less cytotoxicity in normal cells (11). Mocetinostat is a class I and IV selective HDACi that has been shown in preclinical studies to have potent and selective antiproliferative effects in a variety of human cancer cells (12). Mocetinostat is well tolerated in clinical trials, with favorable pharmacokinetics and pharmacodynamics and promising antitumor activity in many hematological diseases (13). Another effective strategy for the treatment of solid tumors is to combine mocetinostat with other antitumor agents (14). Capecitabine is an anticancer chemotherapeutic classified as an antimetabolite. Capecitabine was developed based on the observation of high concentrations of the thymidine phosphorylase enzyme in many human tumors; it has low toxicity and is easy to administer. It acts throughout the S phase of the cell cycle by inhibiting DNA synthesis through restricting the availability of thymidylate and inducing apoptosis in cells (15). In this study, mocetinostat and capecitabine were applied in combination to 4T1 breast cancer cells for the first time, and the combined use of capecitabine and mocetinostat was evaluated comparatively. The viability and apoptosis of the cancer cells were characterized at the cellular and molecular levels in order to obtain a more detailed and mechanistic understanding of the toxic effects of combined treatment with mocetinostat and capecitabine on 4T1 cancer cells. Cell viability assays The effects of mocetinostat, capecitabine, and capecitabine+mocetinostat on the viability of 4T1 cells were analyzed using the MTT assay. 4T1 cells were seeded at a density of 6000 cells per well in a 96-well plate and incubated for 24 hr before being treated with vehicle (DMSO at a final concentration of 0.5%), mocetinostat, capecitabine, or capecitabine+mocetinostat. Following a 48-hour incubation period, the MTT test was carried out according to the manufacturer's instructions (Acros Organics, China). The absorbances were read at 570 nm with a microplate reader (ThermoFisher Scientific) and the mean values were calculated based on the data of three independent replicates. Concentration versus % cell viability curves were plotted with the help of the Microsoft Excel program, and viability was calculated using the formula: % cell viability = (mean absorbance of treated cells / mean absorbance of control cells) × 100. Cell morphology analysis 4T1 cells were seeded into 6-well plates at a density of 1 × 10^3 cells per well and incubated for 24 hr in fresh media. 
Cell morphology analysis 4T1 cells were seeded into 6-well plates at a density of 1 × 10³ cells per well and incubated for 24 hr in fresh media. The culture medium was then replaced with freshly prepared culture medium, and all of the drugs were applied at their IC50 concentrations, after which the cells were incubated for 0, 24, 48, and 72 hr under the same conditions. The culture media were collected after incubation, and the cells were rinsed in PBS (pH 7.4) and examined under an inverted microscope (Nikon Eclipse TS100).

Trypan blue dye exclusion assay The trypan blue technique is one measure of cells' metabolic status or their ability to perform complex metabolic tasks. 4T1 cancer cells were seeded at 1 × 10⁵ cells/ml in 6-well plates, and the cells were treated with the drugs for 24, 48, 72, and 96 hr according to the IC50 values obtained for each drug. After trypsinization, 0.5% trypan blue solution was added, and cells were counted with a hemocytometer.

DNA fragmentation analysis The DNA fragmentation assay was performed using the previously described agarose gel electrophoresis method to confirm cell death via apoptosis (17). In brief, 4T1 cells (1 × 10⁶) were seeded in T75 flasks and incubated for 24 hr. The cells were then treated with mocetinostat (3.125 µM), capecitabine (1700 µM), or capecitabine+mocetinostat (50 µM + 1.5 µM) and incubated again for 48 hr. The cells were then washed in PBS. Total DNA was isolated, analyzed by electrophoresis on a 2% gel containing 0.1 μg/ml ethidium bromide, and visualized under a UV illuminator.

Cell wound healing assay The ability of live cells to migrate is essential for normal growth, immune response, and disease processes including cancer metastasis and inflammation. 4T1 cells (1 × 10⁵) were seeded into a 6-well plate and cultured in complete medium. When the cells had reached 75% confluence, the cell layers were scratched using a sterile pipette tip, and incubation continued with mocetinostat, capecitabine, or capecitabine+mocetinostat for 72 hr. The in vitro healing mechanism refers to the movement of cells across the wound surface. An inverted microscope was used to photograph the wound in vitro and measure the rate of closure. The rate of wound healing = [(wound width at 0 hr − wound width at 48 hr)/wound width at 0 hr] × 100% (18).
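As a worked illustration of the closure formula above, the sketch below applies it to the wound widths reported later in the Results; it assumes (which the text does not state) that the treated wells started from roughly the same 0-hr width as the control average.

# Sketch of the wound-closure calculation defined in the formula above;
# widths (in um) are taken from the Results, and the assumption that treated
# wells share the control's 0-hr width is ours, not the paper's.
def wound_healing_rate(width_0h, width_t):
    """Percent closure of the scratch relative to the 0-hr width."""
    return (width_0h - width_t) / width_0h * 100.0

width_0h = 851.7               # control wound width at 0 hr (reported average)
width_48h_control = 456.05     # control width at 48 hr
print(f"Control closure at 48 hr: {wound_healing_rate(width_0h, width_48h_control):.1f}%")
# Drug-treated wounds stayed near (or above) the starting width, e.g.,
# 866.5 um for capecitabine at 48 hr, giving closure near zero or negative.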
SDS-PAGE and Western blot analysis 4T1 cells were collected in 1X PBS and centrifuged for 2 min at 13,200 rpm. After harvesting, cells were extracted with radioimmunoprecipitation assay (RIPA) lysis buffer supplemented with phenylmethylsulfonyl fluoride (PMSF) to protect proteins from degradation. The samples were incubated for 20 min at room temperature before being centrifuged at +4 °C for 20 min at 13,200 rpm. The protein concentration was determined using the Bradford assay, and equal amounts of total protein (50 μg) were separated by SDS-PAGE on 12% polyacrylamide gels. The separated proteins were transferred to polyvinylidene difluoride (PVDF) membranes, and Ponceau Red staining was used to track protein transfer. Membranes were blocked for 1 hr with 5% skim milk in Tris-buffered saline (TBS) containing 0.1% Tween-20 (TBST). Primary antibodies against Bcl-2, Bax, c-myc, caspase-7, caspase-3, caspase-9, Hdac I, Hdac III, PTEN, cleaved PARP, p53, Akt (1:1000), and PI3K were added and incubated overnight at 4 °C. Each membrane was washed five times for 5 min with TBST and then incubated with appropriate secondary antibodies for 2 hr on a shaker at room temperature. Protein signals were detected using enhanced chemiluminescence (ECL). The densitometry of immunoblots was quantified with ImageJ software.

Statistical analysis All data were expressed as mean ± standard deviation (SD). Multiple group comparisons were performed by one-way analysis of variance (ANOVA) in GraphPad Prism 9.1.1. ImageJ was used for density measurement of the monitored bands; each protein band was measured three times, and the mean of the three measurements was used. For the calculation of relative expression levels, each value was divided by the corresponding β-actin value.

Results

Cell morphology analysis Inverted microscope images of 4T1 breast cancer cells are shown in Figure 2. Treatment with capecitabine, mocetinostat, or capecitabine+mocetinostat caused abnormal changes such as condensation of the cell nucleus, reduced cell density and number, reduced cell size, loss of cell extensions, and rounding of the cells. These changes in 4T1 cells reflected the effects of the drugs at different times and concentrations.

Trypan blue dye exclusion assay The morphological characteristics of the drug-treated and control groups at 24, 48, 72, and 96 hr after culture of the 4T1 cell line are shown in Figure 3. Cell proliferation in the drug-treated groups decreased substantially over time. The rates of dead and living cells were calculated from the counts: compared with the control group, capecitabine increased the lethal effect by 24%, 45.2%, 69.1%, and 86.6%; mocetinostat by 28%, 42%, 61.5%, and 77.8%; and capecitabine+mocetinostat by 16%, 34.3%, 51.5%, and 88.2%, at 24, 48, 72, and 96 hr, respectively.

DNA fragmentation analysis Apoptosis is characterized by the fragmentation of DNA. 4T1 cells were treated with 1700 µM capecitabine, 3.125 µM mocetinostat, or 50 µM capecitabine + 1.5 µM mocetinostat (Figure 4). DNA fragmentation, including of high molecular weight DNA, was seen in a time-dependent manner. Intact DNA bands were seen in the control group, which was treated with 0.1% DMSO. The drug-treated cells showed a DNA laddering pattern, supporting an apoptotic mechanism.

Cell wound healing assay In the wound healing experiment, 4T1 breast cancer cell migration was examined at 24-hr intervals over 72 hr following capecitabine, mocetinostat, or capecitabine+mocetinostat treatment, and the wound-width measurements were recorded in triplicate. The migration rate of drug-treated cells was decreased compared with the control groups. The control group's wound width averaged 851.7 μm at 0 hr and was completely closed by the end of the 72nd hour. At 48 hr, wound widths were 456.05 μm for the control, 866.50 μm for capecitabine, 825.17 μm for mocetinostat, and 856.14 μm for capecitabine+mocetinostat. These results indicate decreased cell proliferation and increased wound width with drug treatment in a time-dependent manner in 4T1 cells.
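Before turning to the discussion, the β-actin normalization described under Statistical analysis can be sketched in a few lines; the triplicate ImageJ density readings below are hypothetical placeholders, not measured values.

# Sketch of the densitometric normalization: each band is measured three
# times in ImageJ, the mean is taken, and relative expression is the mean
# band density divided by the matching beta-actin density.
# All numbers are hypothetical.
import numpy as np

band_measurements = {              # three ImageJ density readings per band
    "Bax":        [1520.0, 1498.0, 1533.0],
    "Bcl-2":      [610.0, 644.0, 625.0],
    "beta-actin": [2010.0, 1985.0, 2002.0],
}

means = {name: float(np.mean(v)) for name, v in band_measurements.items()}
actin = means["beta-actin"]
relative = {name: m / actin for name, m in means.items() if name != "beta-actin"}
for name, r in relative.items():
    print(f"{name}: relative expression = {r:.2f}")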
Discussion

In this study, various methods were used to investigate the antitumor effects on breast cancer 4T1 cells of treatment with mocetinostat and capecitabine alone and in combination. Combinations of two or more therapeutic drugs that specifically target the cancer-causing cell and cell signaling pathways make important contributions in determining the different mechanisms of drugs (19,16). Capecitabine may be a favored agent for evaluation in experimental combination regimens due to its tolerability and efficiency as a single agent and its lack of cross-resistance with other chemotherapeutics (20). Cellular functions such as the cell cycle, replication, survival, DNA repair, and differentiation are all regulated by histone deacetylases (HDACs). In hematologic and solid tumors, their expression is often changed (21). A significant number of HDACi are currently in clinical trials as anticancer agents (22). As single agents, HDACi show limited antitumor efficacy in solid tumors, including breast cancer, and are more effective when combined with radiotherapy, chemotherapy, or other treatments (23). HDACi reduce cancer cells through apoptosis in combination with many cancer drugs (24). Mocetinostat is well-tolerated in clinical trials, with favorable pharmacokinetics and pharmacodynamics and promising antitumor efficacy in a variety of diseases (25). The 4T1 cell line has immunogenicity, proliferation, and metastatic qualities that are very similar to stage IV human breast cancer. Despite advancements in cancer detection and therapy, new choices are needed to increase survival and enhance the quality of life of these patients (26). In the present study, we examined for the first time the effects of capecitabine in combination with mocetinostat on the mouse 4T1 cell line. We investigated the cytotoxic and growth inhibitory effects of capecitabine, mocetinostat, and capecitabine+mocetinostat on 4T1 cancer cells. According to our findings, the viability of 4T1 cells exposed to various concentrations of capecitabine, mocetinostat, and capecitabine+mocetinostat was reduced in a concentration- and time-dependent manner. On 4T1 cells, the half-maximal inhibition concentrations (IC50) for 48 hr were capecitabine 1700 µM, mocetinostat 3.125 µM, and capecitabine 50 µM + mocetinostat 1.5 µM (Figure 1). Mocetinostat has also been used in combination with other antitumor agents. Some researchers found that mocetinostat in combination with gemcitabine affects cell growth and induces apoptosis in leiomyosarcoma (LMS) cells (30,31). Mocetinostat treatment at 1 μM to 5 μM for 72 hr induced significant levels of apoptosis in DU-145 and PC-3 cells (32). It has been used in combination with many drugs, such as capecitabine, docetaxel, cyclophosphamide, and epirubicin, and positive results have been obtained in the treatment of breast cancer (33). In a phase 3 study conducted in 2002, the combined use of docetaxel and capecitabine significantly reduced the risk of disease progression and increased the survival rate in patients (34). Consistent with the literature, our in vitro results show that the combined use of mocetinostat and capecitabine is effective against breast cancer cells and induces apoptosis. Our cell viability and proliferation experiments yielded complementary findings: 4T1 cells were incubated with capecitabine, mocetinostat, and their combination for 24, 48, 72, and 96 hr, and the number of viable cells decreased. Capecitabine+mocetinostat caused 16%, 34.3%, 51.5%, and 88.2% increased lethal effect at 24, 48, 72, and 96 hr, respectively, relative to the single uses of capecitabine and mocetinostat. In the current investigation, apoptosis was accompanied by characteristic changes in cell morphology.
Cytotoxicity manifested through changes in cell morphology: cells shrank in volume and lost their cellular extensions. The wound healing area was closed at 72 hr in the control group (P<0.05). This study demonstrated the effect of the combined capecitabine+mocetinostat treatment on cell migration. Previous studies showed that capecitabine and mocetinostat inhibit cell migration (35,36). Exposing the cells to the combination of capecitabine+mocetinostat, on the other hand, led to a substantial increase in wound area (Figure 5). Capecitabine inhibits DNA synthesis in rapidly proliferating cancer cells and mocetinostat affects apoptosis, but the exact mechanism is unknown (37,38). One method to determine apoptosis is genomic DNA fragmentation. Single drugs and their combinations triggered DNA fragmentation in 4T1 cells, resulting in a ladder pattern characteristic of the apoptotic mechanism and demonstrating apoptosis-mediated cell death. Capecitabine, mocetinostat, and capecitabine+mocetinostat were shown to trigger the apoptotic pathway in the 4T1 cell line. In various cancer models, all HDAC inhibitors have been shown to trigger either an extrinsic or intrinsic cell death pathway, or both together. With co-administration of capecitabine and mocetinostat, the expression of Bcl-2, Hdac I, Akt, PI3K, c-myc, and Hdac III, which are active proteins in the apoptotic pathway and cell growth, decreased significantly, while expression of the Bax, caspase-3, PTEN, cleaved PARP, caspase-7, caspase-9, p53, and cleaved caspase-9 proteins significantly increased (Figure 6). It is possible to say that the apoptotic pathway is stimulated by the co-administration of capecitabine and mocetinostat in 4T1 cells through increased expression of the caspase-3 and caspase-7 proteins involved in the intrinsic pathway. When single and combined drug use was evaluated at the protein level, capecitabine and mocetinostat each affected the apoptotic pathway when used alone. With combined drug administration, the PTEN protein level increased while the PI3K protein level decreased, and cell proliferation was adversely affected. Over the combination treatment times, the drugs were more effective on proteins involved in the apoptotic pathway, and caspase activity had already risen following treatment. PI3K/Akt signaling can protect cells from apoptosis, and PI3K/Akt protein levels were decreased compared with the control and single-drug treatments. One of the most effective anticancer strategies is the targeted suppression of histone deacetylases (HDACs). In general, Hdac 1, Hdac 3, and other HDAC proteins show high expression in breast cancer (39). We demonstrated that Hdac I and Hdac III protein levels decreased after combined therapy compared with either single agent (P<0.001); HDAC I and III expression was thus reduced in the 4T1 cell line by the co-administration of mocetinostat and capecitabine for 48 hr. It can be said that HDAC inhibitors induce apoptosis in tumor cells through their regulation of pro-apoptotic and anti-apoptotic proteins. Some researchers have evaluated the levels of similar and different proteins in other cell lines, but such data were not available for the combined effect of these two drugs on the 4T1 breast cancer cell line (40)(41)(42). In this study, we report for the first time that using the HDACi mocetinostat in conjunction with capecitabine might be a viable option.
Mocetinostat in combination with capecitabine revealed increased anti-tumor effects compared with either treatment alone.

Conclusion

In this study, for the first time, we investigated the cytotoxicity of mocetinostat and of the two drugs combined on the 4T1 cell line. The combination of mocetinostat and capecitabine could significantly inhibit the growth of breast cancer in vitro and could trigger apoptosis pathways even at very low concentrations. These findings might have a significant impact on breast cancer treatment, since significantly lower dosages of capecitabine can result in fewer undesirable side effects. As a result, the combination of mocetinostat and capecitabine may be a novel and effective agent against breast cancer.
Small Angle Neutron Scattering Studies of R67 Dihydrofolate Reductase, a Tetrameric Protein with Intrinsically Disordered N-Termini

R67 dihydrofolate reductase (DHFR) is a homotetramer with a single active site pore and no sequence or structural homology with chromosomal DHFRs. The R67 enzyme provides resistance to trimethoprim, an active site-directed inhibitor of Escherichia coli DHFR. Sixteen to twenty N-terminal amino acids are intrinsically disordered in the R67 dimer crystal structure. Chymotrypsin cleavage of 16 N-terminal residues results in an active enzyme with a decreased stability. The space sampled by the disordered N-termini of R67 DHFR was investigated using small angle neutron scattering. From a combined analysis using molecular dynamics and the program SASSIE (http://www.smallangles.net/sassie/SASSIE_HOME.html), the apoenzyme displays a radius of gyration (Rg) of 21.46 ± 0.50 Å. Addition of glycine betaine, an osmolyte, does not result in folding of the termini, as the Rg increases slightly to 22.78 ± 0.87 Å. SASSIE fits of the latter SANS data indicate that the disordered N-termini sample larger regions of space and remain disordered, suggesting they might function as entropic bristles. Pressure perturbation calorimetry also indicated that the volume of R67 DHFR increases upon addition of 10% betaine and decreases at 20% betaine because of dehydration of the protein. Studies of the hydration of full-length R67 DHFR in the presence of the osmolytes betaine and dimethyl sulfoxide find around 1250 water molecules hydrating the protein. Similar studies with truncated R67 DHFR yield around 400 water molecules hydrating the protein in the presence of betaine. The difference of ∼900 waters indicates the N-termini are well-hydrated.

Dihydrofolate reductase (DHFR) catalyzes the reduction of dihydrofolate (DHF) to tetrahydrofolate (THF) using NADPH as a cofactor. THF and its derivatives serve as cellular cofactors for one-carbon transfer reactions involved in the synthesis of nucleotides such as purines and thymidine, amino acids such as methionine and glycine, and various other metabolites. Trimethoprim is a potent inhibitor of Escherichia coli chromosomal DHFR (EcDHFR) and has been widely used as an antibacterial drug. The gene encoding R67 DHFR, carried by an R-plasmid, confers resistance against trimethoprim. Recent clinical isolates of E. coli causing urinary tract infections have the gene encoded in class I integrons flanked by other drug resistance genes.1 There are no antibiotics that target R67 DHFR, though promising leads have recently been discovered.2,3 This type II DHFR (78 amino acids long) is genetically and structurally unrelated to EcDHFR. R67 DHFR is a homotetramer, and each monomer has five antiparallel β-strands that assemble into a dimer with a six-stranded β-barrel at the subunit interface. Using loop−loop interactions, two dimers assemble into a tetrameric "doughnut" with a single active site pore.4 Numerous experiments indicate the 16−20 N-terminal residues of R67 DHFR are disordered and can tolerate various sequences. For example, several disorder predictors indicate the N-terminal sequence is intrinsically disordered.5 Also, the first 17 amino acids of each monomer do not appear in the dimer crystal structure.6 The N-termini can be cleaved after F16 by chymotrypsin treatment, and the truncated protein is almost fully active, although 2.6 kcal/mol less stable.7
The truncated tetrameric protein was crystallized, and the structure was first determined at a resolution of 1.7 Å4 and later at 0.96−1.26 Å.8,9 High thermal factors in the latter structure suggest the stretch of residues 17−21 is also disordered. In addition, electron densities for residues 21−23 were diffuse, indicating high mobility.9 Other type II DHFR variants (e.g., R388 and R751) show different N-terminal sequences, but the same core sequence contributes to the β-barrel structure.5,10,11 This can also be seen from a sequence alignment of the type II DHFR variants12 showing non-identity in the first 21 residues. His tags can also be added to the N-termini.13−15 In addition, a tandem array of four R67 DHFR gene copies encodes a protein in which the C- and N-termini of the first and second monomers are fused, as well as the second and third monomers and the third and fourth monomers. The resulting Quad1 protein, possessing 4 times the molecular mass of the R67 DHFR monomer, is stable as well as functional.16 Asymmetric mutations in the core of R67 DHFR that favor one topology17,18 are used in these Quad constructs. Similarly, the N-terminal sequences from R388 and R751 can be used as the linker domains to give a functional monomeric Quad4 protein.5 These various experiments and constructs indicate the N-termini can be fused without a loss of function. Pelletier and co-workers have made similar dimeric fused constructs of R67 DHFR.19,20

To gain information about the conformational space occupied by the disordered N-terminal sequences in R67 DHFR, we used small angle neutron scattering (SANS) experiments. Because of the inherent contrast between hydrogen and deuterium atoms, SANS data of the hydrogenated protein in D2O buffer allow modeling of the ensemble of conformations sampled by the disordered tails. As disordered sequences often undergo coupled binding and folding, we monitored any potential changes in the conformational sampling of the disordered tails upon formation of a binary complex (R67 DHFR−NADP+) or a ternary complex (R67 DHFR−NADP+−DHF). Also, osmolytes have been shown to exert protein-stabilizing forces via a preferential exclusion mechanism.21,22 To determine whether addition of an osmolyte leads to folding of the termini, we added deuterated betaine and examined any changes in the R67 DHFR shape using SANS. Also of interest, the water associated with the protein surface comprises the hydration layer, which can be differentiated from the bulk solvent. The first hydration shell can contain tightly bound water as well as water that can freely exchange. These differences are due to the varied environments associated with the protein surface, which can display different clefts and bumps as well as different atom types.23 Computational studies have shown that water molecules hydrating disordered chains exhibit properties different from those of water surrounding globular domains, in terms of both the number of waters and the structural order of the water molecules in the hydration layer.24,25 To monitor the preferential hydration of full-length and truncated R67 DHFR, we used hydrogenated osmolytes in a D2O buffer solution in additional SANS experiments. This is analogous to a H2O/D2O contrast variation approach; i.e., the contrast created by addition of a hydrogenated osmolyte allows measurement of the hydration shell associated with R67 DHFR. The contrast created by osmolytes differentiates between the hydration layer and the bulk solvent.
The information obtained from the scattering contrast can be used to obtain the number of water molecules in the hydration layer that are responsible for exclusion of the added osmolyte from the protein surface.

METHODS

Protein Expression and Purification. Full-length R67 DHFR (MIRSSNEVSN PVAGNFVFPS NATFGMGDRV RKKSGAAWQG QIVGWYCTNL TPEGYAVESE AHPGSVQIYP VAALERIN) was expressed and purified as described by Reece et al.7 Briefly, cell lysates were subjected to ammonium sulfate precipitation and ion-exchange column chromatography to purify the protein to homogeneity. Purified samples were dialyzed against distilled, deionized H2O and lyophilized. Chymotrypsin-truncated R67 DHFR was obtained as described by Reece et al.,7 starting from full-length His-tagged R67 DHFR. The His-tagged construct has the synthetic R67 DHFR gene7 cloned into the pRSETB vector from Invitrogen.26 Purification was performed with a nickel-nitrilotriacetic acid (Ni-NTA) column (Qiagen), followed by elution from a DEAE fractogel column. The resulting protein was incubated with immobilized chymotrypsin (Sigma-Aldrich) in 10 mM Tris/1 mM EDTA, pH 8.0 buffer overnight at 4 °C and later at room temperature for ≤24 h. Chymotrypsin cleaves after F16 in the R67 DHFR sequence (or after F47 in the His-tagged sequence). The progress of the reaction was monitored by sodium dodecyl sulfate electrophoresis (see Figure S1). Immobilized chymotrypsin was removed, and the truncated tetramer was separated from peptide fragments by gel filtration at pH 8 using G75 Sephadex. A Ni-NTA column further separated the cleaved N-terminus from the tetrameric core of the protein. The purified truncated R67 DHFR was dialyzed against water using a 7 kDa cutoff membrane and then lyophilized. Protein concentrations were determined by measuring the absorbance of the solution at 280 nm using an extinction coefficient determined with a bicinchoninic acid (Pierce) assay.

Small Angle Neutron Scattering (SANS). The sample of lyophilized, full-length, apo R67 DHFR was reconstituted in 20 mM deuterated Tris buffer in D2O (pD 7.5). Experiments were also performed to study any changes in the ordering of the N-termini upon binding of NADP+ to apo R67 DHFR (binary complex formation) and upon binding of dihydrofolate (DHF) to the R67 DHFR−NADP+ complex (ternary complex formation) under saturating ligand concentrations (3 mM NADP+ for binary samples and 3 mM NADP+ and 2 mM DHF for ternary samples). To study the effect of betaine on the disordered N-termini of R67 DHFR, the change in the overall shape and compaction of the apoprotein in the presence of 20% deuterated betaine was explored. Additionally, samples of full-length apo R67 DHFR with no osmolyte and with the osmolytes betaine and dimethyl sulfoxide (DMSO) were prepared to investigate protein hydration. The osmolytes were hydrogenated to create a contrast with the deuterated buffer conditions, allowing measurement of changes in the preferential hydration of apo R67 DHFR.27 The concentrations of osmolytes ranged from 2.5 to 20% (w/v) for betaine and from 2.5 to 17.5% (v/v) for DMSO. The protein concentrations ranged from 4.5 to 7.5 mg/mL. Similar sample sets using the truncated R67 DHFR protein with 0 to 20% (w/v) betaine were prepared. The concentration of the truncated protein was 2.4−2.6 mg/mL. Buffer controls were run to detect the background scattering. All samples were prepared, centrifuged, and loaded into banjo-shaped quartz cuvettes (Hellma USA, Plainville, NY) with a path length of 2 mm.
Experiments were performed on the EQ-SANS instrument at the Spallation Neutron Source at Oak Ridge National Laboratory. In 60 Hz operation mode, a 4 m sample−detector distance with a 2.5−6.1 Å wavelength band was used. Neutron exposure times were approximately 1 h, and the scattered neutrons were detected on a 1 m × 1 m two-dimensional detector at 25 °C. The data collected for all experiments were reduced using MANTID Plot,28 and the total two-dimensional scattering was corrected for the scattering from the empty quartz cell. The scattering was then normalized by the incident beam flux and radially averaged to obtain the absolute scale intensity, I(q), versus scattering angle, q. The background scattering for the respective buffers was subtracted from the total scattering. Guinier analysis with a linear plot of ln I(q) versus q² for low-q data gave a slope of −Rg²/3, and the intercept on the Y-axis gave the I(0) value. Estimates of Rg and the zero-angle scattering intensity I(0) were obtained using eq 1:27

I(q) = I(0) exp(−q²Rg²/3)   (1)

where I(q) is the scattering intensity at small angles (q). The data were also analyzed using GNOM in the ATSAS package.29 GNOM reads the scattering profile and evaluates the particle distance distribution function, P(R), in a defined range of distances and yields the apparent radius of gyration (Rg) and zero-angle scattering intensity I(0). Data for each sample were fit using Guinier analysis and GNOM (see Table S1). The Rg values and zero-angle scattering intensities, I(0), of R67 DHFR in the presence of varying concentrations of osmolytes (betaine and DMSO) were normalized by the protein concentration of each sample. To obtain information about the preferential hydration of R67 DHFR and the effect of osmolytes on hydration, the change in I(0) with an increasing concentration of osmolytes obtained from the GNOM analysis was fit to eq 2 from ref 27:

[Is(0)/I(0)]^(1/2) = 1 + fv [(ρw − ρs)/(ρp − ρw)] (1 + Vw/Vp)   (2)

where Is(0) and I(0) are the zero-angle scattering intensities in the presence and absence of an osmolyte, respectively, fv, or fractional volume, is the concentration of osmolyte added (w/v for betaine and v/v for DMSO), ρw, ρs, and ρp are the scattering-length densities of water, solute (=osmolyte), and protein, respectively, and Vp and Vw are the volumes of the protein and protein-associated water, respectively. The scattering-length densities of the full-length (3.248 × 10¹⁰ cm⁻²) and truncated (3.228 × 10¹⁰ cm⁻²) proteins, betaine (0.817 × 10¹⁰ cm⁻²), and DMSO (−0.051 × 10¹⁰ cm⁻²) and the protein volumes were calculated using the online tool MULCh.30 The volume of protein-associated water gives the number of water molecules in the hydration layer of R67 DHFR upon addition of the osmolyte. Note that eq 2 assumes Vp and Vw remain constant across the range of osmolyte concentrations used. In other words, the measured Vw is an average across the betaine range used for the preferential hydration SANS experiments.

Analysis Using MD and SASSIE. Our next step was to analyze the SANS data using models generated via MD and SASSIE (see http://www.smallangles.net/sassie/SASSIE_HOME.html).31 The latter creates atomistic models of the protein using Monte Carlo simulations, calculates theoretical scattering data for these models using the SasCalc or Xtal2SAS tools, and compares the theoretical data to the experimental data. The experimental SANS data were interpolated into SASSIE in a defined q range using the data interpolation module.
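A minimal Python sketch of the Guinier analysis described above follows; it fits ln I(q) versus q² at low q and reads Rg from the slope and I(0) from the intercept. The profile is synthetic (generated from eq 1 with an Rg near the reported ∼21.9 Å), not measured data.

# Sketch of Guinier analysis: slope of ln I(q) vs q^2 gives -Rg^2/3,
# intercept gives ln I(0). The synthetic profile assumes eq 1 exactly.
import numpy as np

rg_true, i0_true = 21.9, 1.0
q = np.linspace(0.008, 0.06, 40)                   # scattering vector, 1/A
iq = i0_true * np.exp(-(q * rg_true) ** 2 / 3.0)   # eq 1 (Guinier)

# Restrict the fit to the usual Guinier validity range, q*Rg < 1.3
mask = q * rg_true < 1.3
slope, intercept = np.polyfit(q[mask] ** 2, np.log(iq[mask]), 1)

rg_fit = np.sqrt(-3.0 * slope)
i0_fit = np.exp(intercept)
print(f"Rg = {rg_fit:.2f} A, I(0) = {i0_fit:.3f}")

With real, noisy data the validity mask must use the fitted Rg iteratively, which is what dedicated tools such as GNOM and the ATSAS suite handle automatically.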
As both MD and SASSIE require a model of the full-length protein sequence to generate structures for fitting, sixteen residues (MIRSSNEVSNPVAGNF-) were added to each of the four N-termini of the truncated R67 DHFR structure (PDB entry 2RH2)8 using Modeller (version 9.15).32 A total of 100 models were generated, and the 10 structures with the lowest discrete optimized potential energy score were used further. The selected models were minimized under vacuum for 10000 steps. The minimized structures were then solvated in a SPC/E water box33 and equilibrated using the protocol described by Ramanathan et al.34 The protein and ligand parameters were generated using the AMBER force field ff14SB. Extensive MD simulations were performed for the apoenzyme, the binary complex with NADPH, and the ternary complex with NADPH and DHF using the AMBER 14 simulation package.35 Initially, 10 models of the apoenzyme were simulated for 100 ns each, and selected conformations from these runs that gave good fits to the SANS data were further studied in four apo simulations of 1 μs each. Therefore, the total aggregate sampling for apo systems was 5 μs. Similarly, nine MD simulations for the binary complex and five for the ternary complex were performed for 1 μs each, providing aggregate 9 and 5 μs of MD sampling for the binary and ternary complexes, respectively. Additionally, two more models were built: the first had two pairs of N-termini interacting with each other on both sides of the pore, and the second moved all four termini to block access to the active site pore. This approach allowed the construction of numerous structures in which the N-termini sample a large area of conformational space. Frames from the MD trajectories were analyzed in SASSIE using the SasCalc module, which generated theoretical SANS profiles that were then compared to the experimental SANS data using the χ² analysis module. Those structures with a low χ² value (<10) were chosen as good fits. Complex Monte Carlo simulations also generated additional conformers for fitting. In this process, the core of the protein remained constant, and only alternate conformations of the 21 N-terminal amino acids were generated. Acceptable frames avoided atom overlaps. In addition, on the basis of the average Rg obtained (for example, ∼21.5 Å for apo R67 DHFR), directed Monte Carlo sampling additionally generated >100000 structures with Rg values limited to a range from 20.5 to 22.5 Å. These structures were subjected to a 500-step minimization using NAMD. Again, the theoretical SANS profiles were calculated using the SasCalc module in SASSIE, followed by a χ² analysis. We chose a strategy of analyzing single structures, as opposed to an ensemble structure method, because our relatively exhaustive analysis in SASSIE using our MD and Monte Carlo structures was found to be adequate to fit our SANS data. Both MD and SASSIE analyses were performed to fit the experimental SANS profiles for the ligand-bound complexes (binary and ternary) as well as for apo R67 DHFR in 20% deuterated betaine. To generate sufficient conformers with low χ² values, the apo, binary, and ternary structures were interconverted by a Python script by adding or removing the coordinates of the NADP+ and DHF ligands in the active site pore of the MD- and Monte Carlo-generated structures. The ligand coordinates were obtained from PDB entries 2RK1 and 2RK2.8
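The χ² screening step just described can be sketched as follows; the profiles and uncertainties are random placeholders standing in for the interpolated experimental curve and per-frame SasCalc outputs, and the simple (N − 1) normalization is our assumption rather than SASSIE's exact definition.

# Sketch of chi-squared screening of model frames against the experimental
# SANS profile; frames with chi^2 < 10 are kept as "good fits".
import numpy as np

def reduced_chi2(i_model, i_exp, sigma):
    """Reduced chi-squared between one model profile and the experiment."""
    return float(np.sum(((i_model - i_exp) / sigma) ** 2) / (len(i_exp) - 1))

def screen_frames(model_profiles, i_exp, sigma, cutoff=10.0):
    """Return indices of frames whose chi^2 falls below the cutoff, plus scores."""
    scores = [reduced_chi2(m, i_exp, sigma) for m in model_profiles]
    return [i for i, s in enumerate(scores) if s < cutoff], scores

# Placeholder example: 3 model frames, 50 q-points
rng = np.random.default_rng(0)
i_exp = np.exp(-np.linspace(0, 2, 50))
sigma = 0.02 * i_exp + 1e-3
models = [i_exp + rng.normal(0, 0.01, 50) * i_exp for _ in range(3)]
good, scores = screen_frames(models, i_exp, sigma)
print(good, [f"{s:.1f}" for s in scores])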
For the analysis of the binary data, initially one NADP+ was positioned in the active site pore as per the 2RK2 crystal structure;8 however, only 92 good fits were obtained. To gain more fits, we considered the possibility that a second cofactor could bind, as the concentration of NADP+ in the SANS sample was high (3 mM) and the active site pore can accommodate two homoligands.36 There are four sets of coordinates for NADP+ in the 2RK2 crystal structure because of the symmetry of the active site. Therefore, we positioned a second set of NADP+ coordinates in a symmetry-related position using the 2RK2 structure and continued with the analysis. A workflow and summary of the various steps in our analyses for the apo, binary, and ternary complexes are provided in Figure S2. Snapshots from MD simulations were also used to compute the number of water molecules present in the first solvation shell of the protein, using AMBER's ptraj module and the watershell command with the default cutoff of 3.4 Å.

Data Mining. The structures that best fit the SANS profiles were data mined to find the most frequent interactions between the N-terminus and all the other residues in the protein. To analyze the structures, the cpptraj program in Amber1435 was used to calculate the distances between the center of mass (COM) of each residue and all the other individual residues in the R67 DHFR tetramer. A Python script was then used to determine the minimum distance between the COMs for each pair of residues for all structures that fit the SANS data. Additionally, the number of times the centers of mass for each pair of residues were within 5 Å was calculated. Heat maps of the inter-residue interactions were created from the matrix of residue pair interactions using Matlab (version r2017a).

Differential Scanning Calorimetry (DSC). Thermal unfolding of full-length and truncated R67 DHFRs was monitored between 25 and 95 °C using a Microcal VP differential scanning microcalorimeter. The concentration of full-length R67 DHFR was 150−160 μM in MTA buffer (100 mM MES, 50 mM Tris, and 50 mM acetic acid) at pH 8. Samples were also prepared in MTA buffer with 20% betaine or 15% DMSO. Scans were repeated two times with scan rates of 1 °C/min. For truncated R67 DHFR, concentrations of 50−100 μM were used with a scan rate of 1 °C/min. The data obtained were analyzed using Origin version 7.0 supplied by the manufacturer, and the melting temperatures were obtained.

Pressure Perturbation Calorimetry (PPC). Effects of betaine on the volume, or hydration, of R67 DHFR were estimated using the change in the thermal expansion coefficients (α) of the protein in the absence and presence of betaine. PPC can be used to determine the αs value in buffer containing osmolytes using the following equation (eq 3):37,38

αs = αo − ΔQ/(T Δp msVs)   (3)

where αs and αo are the thermal expansion coefficients for the solute and solvent, respectively, ΔQ is the heat released or absorbed after each application, or release, of pressure, T is the temperature, ms is the mass of the solute in the solution, Vs is the specific volume of the solute, and Δp is the change in pressure applied above the solution. A VP-DSC instrument from MicroCal (Malvern) outfitted with a PPC appendage was used to calculate the αs for R67 DHFR. Pressure pulses of 60 psi of nitrogen were applied above the sample, using buffer alone as a reference.
The thermal expansion coefficients were determined between 10 and 95 °C in 2.5 °C increments. Samples of 3−5 mg/mL R67 DHFR (these concentrations are equivalent to 90−150 μM for full-length and 99−150 μM for truncated R67 DHFRs) were prepared in 45 mM Na2HPO4, pH 8.0 buffer containing 0, 10, or 20% (w/w) betaine. Control experiments using buffer versus buffer, buffer versus water, and water versus water were used to correct the αs of R67 DHFR for the thermal expansion of the buffer and water components of the sample (which are contained in αo). The raw data from the PPC were manually curated so that they could be integrated using NITPIC.39 Files with the heats obtained from NITPIC were used to analyze the PPC data in the Origin 7.0 software package provided by MicroCal. The mass of the solute in the solution was determined spectroscopically, and the specific volume of 0.716 mL/g for R67 DHFR was previously obtained.40

RESULTS

SANS of Apo R67 DHFR. Representative SANS profiles for full-length and truncated apo R67 DHFR are shown in Figure 1A and Figure S3A, respectively (see the Supporting Information). A dimensionless Kratky plot41 of the full-length and truncated proteins indicates that both are globular (Figure S3C). Analysis using GNOM yields an Rg value of 21.89 ± 0.12 Å for the full-length protein (see Figure 1B) and 17.86 ± 0.14 Å for truncated R67 DHFR (Figure S3B). The latter is comparable to the Rg values of 17.1 and 17.5 Å for the 2RH28 and 2GQV9 crystal structures of truncated R67 DHFR, respectively, calculated using CRYSON.42 The molecular weight of R67 DHFR was calculated from the I(0) and Rg of the SANS profile, using a model that is independent of protein concentration.43 A value of 36470 g/mol matches well with the expected value of 33720 g/mol for full-length R67 DHFR, indicating that the sample is not aggregating under our conditions (Table 1). As described in Methods, we used MD and the NIST program SASSIE to gain information about the space sampled by the N-termini. We generated 19 μs of MD trajectories and ∼307000 structures from nondirected as well as directed Monte Carlo analysis in SASSIE. A large number of structures were used to analyze our SANS data, and the volumes sampled by the N-termini are shown in Figure S4, differentiated by the method by which they were generated (e.g., MD, Monte Carlo, or hand built). The directed Monte Carlo analysis restricted structures to an Rg range of 20.5−22.5 Å, substantially helping us find conformers that fit the SANS data. Figure 2A shows a χ² versus Rg plot for the 117000 frames from the directed Monte Carlo simulations and the apo frames obtained from MD runs for the binary and ternary complexes upon removal of the ligands using a Python script. The χ² versus Rg plot shows a "U" shape, indicating that neither very compact nor very extended states fit the data well; instead, more intermediate structures fit the data. We identified 7936 structures that fit the apoenzyme SANS data with a χ² value of <10. The lowest χ² value obtained was 1.8, and the Rg of the corresponding frame was 22.24 Å. An average Rg value of 21.46 ± 0.50 Å was obtained (see Table 1), which is similar to the Rg value obtained from the GNOM fitting. SASSIE also generates mesh plots that show the space sampled by the 21 N-terminal residues. All ∼461000 structures (307000 from Monte Carlo and 154000 from MD) sample the area shown by the dark gray mesh in Figure 2D.
Best fits identified by SASSIE show a more restricted area explored by the N-termini (see the green mesh in Figure 2D), indicating compaction of the N-termini as compared to full extension. This trend is also identified in a plot of the centers of mass for the N-terminal methionines (see Figure S5 for those structures for which χ² < 10). Here the tendency of the methionines to sample space mostly near the sides of the protein core can be seen. However, other methionine positions also fit the data, indicating other successful sampling positions. The range of 20.84−23.53 Å (see Table 1) for the best fit Rg values generated by SASSIE also indicates sampling of compact and slightly extended conformations of the N-termini. Any asymmetry in the mesh and sampling positions likely arises from nonconvergence of the MD trajectories, even though a total of 19 μs was used; in contrast, the protein termini have millisecond to second sampling times available.

Effect of Ligand Binding on the Disordered Termini in R67 DHFR. SANS data were also collected to monitor whether there were any changes in the disordered N-termini of R67 DHFR upon ligand binding. Data collected for the binary (R67 DHFR−NADP+) and ternary (R67 DHFR−NADP+−DHF) complexes were analyzed using GNOM. A comparison of the pairwise distribution plots for the apo, binary, and ternary complexes is shown in Figure 3A. The Rg values for the apoprotein, NADP+ binary complex, and NADP+−DHF ternary complex are 21.89 ± 0.12, 21.45 ± 0.14, and 21.45 ± 0.18 Å, respectively. To gain deeper insights into the disordered tail conformations, SANS data for the R67 DHFR−NADP+ binary and R67 DHFR−NADP+−DHF ternary complexes were further analyzed using both MD and SASSIE. SASSIE analyses used the same set of ∼307000 frames described above for the apoenzyme, with ligands added by a Python script, together with the 154000 frames from MD simulations. Figure S2 indicates the various steps used to generate best fit conformers. The plots shown in Figures S6 and S7 are for the analyses of the binary and ternary data, respectively. Fitting our data to structures lacking the ligands did not yield any conformers with good χ² values. Therefore, we repeated our analyses with ligands in the protein structures. Our first fits to a single bound NADP+ yielded only 92 conformers with good χ² values, so we docked in another cofactor, as two homoligands can bind in the active site pore.36 This analysis yielded a total of 758 frames with χ² values of <10 for the binary data. The number of frames that fit the binary data is low, suggesting (1) a mixed population of species may be present (i.e., both singly and doubly bound NADP+) and (2) our model of the 2NADP+ complex may only approximate this species. Fitting of the ternary complex data was more successful, yielding 15551 frames with a χ² of <10. The best χ² values for the binary and ternary complexes were 5.2 and 4.8, respectively. The mean Rg value obtained for the 758 good binary structures was 21.56 ± 0.39 Å, which is within the error of the Rg obtained by GNOM analysis. However, the mean Rg value for the 15551 structures for the ternary complex was 20.64 ± 0.27 Å, which differs from the Rg value determined by GNOM analysis. The range of the Rg values in the acceptable SASSIE fits for the ternary complex was 20.14−22.74 Å, which is slightly lower than the range obtained for the apoprotein.
A comparison of density plots (or space sampled) in Figure 3B shows that the N-termini of the apoprotein and ternary complex sample space at the monomer−monomer interfaces at the sides of the protein. In addition, 28% of the good fits for the ternary data also fit the apo data, indicating overlap in the conformations sampled by the N-termini in the apo and ternary complexes. This can also be noted from the Venn diagram shown in Figure 3C. Also, all 758 frames that fit the binary data fit the apo data, suggesting similar conformational sampling of the N-termini under both conditions.

[Figure 3 caption: (A) Pairwise distribution plots for the apoprotein and for the binary and ternary complexes formed by adding 3 mM NADP+ or 3 mM NADP+ with 2 mM DHF, respectively. While the maximal diameter of the protein (Dmax) may appear to vary, there is no single well-defined Dmax for proteins with flexible regions; rather, a small range of Dmax values all appear satisfactory, and within this optimized range the Rg and I(0) values do not change significantly, allowing reliable parameters to be gained from the analysis. (B) Comparison of the mesh plots obtained from SASSIE for the apo (green) and NADP+−DHF ternary (blue) complexes; bound DHF (cyan) and NADPH (magenta) are shown as ball-and-stick models in the center of the active site pore. (C) Venn diagram comparing the overlaps associated with the numbers of apo, binary, and ternary best fits. Figure S6 shows similar figures for the binary complex.]

A COM for the N-terminal methionine residues in the best fit frames of the ternary complex data was again calculated. These values, represented in Figure S7E, depict sampling of a restricted space near the sides of the protein (i.e., the monomer−monomer interface). A comparison of the mesh plots for the apo form and binary complex is additionally shown in Figure S6D, and a COM representation for the 758 frames from the binary analysis is shown in Figure S6F.

Effect of Betaine on the R67 DHFR Structure. As addition of osmolytes can lead to protein folding,21 we added 20% deuterated betaine to R67 DHFR to determine whether osmolytes can impart order to the N-termini. SANS data were analyzed using GNOM; Figure 4A shows the pairwise distribution plot. The Rg was 22.84 ± 0.31 Å, which is larger than the value for R67 DHFR in the absence of betaine (e.g., 21.89 ± 0.12 Å). These results indicate a more swollen state in the presence of betaine. The ratio of I(0), scaled by sample concentration, for R67 DHFR in the presence of betaine to that in its absence was taken to ensure that the protein in 20% deuterated betaine was not aggregating; a ratio near 1 indicated that it was not aggregating under our experimental conditions. SASSIE analysis of this SANS data set was performed using the same set of 461000 conformers that were used for the apoprotein. The χ² versus Rg plot (Figure S8A) also shows a "U" shape, indicating that intermediate rather than very compact or very extended states fit the data well. Of all the structures generated using MD and Monte Carlo sampling, 58277 frames fit the experimental SANS data with χ² values of <10. The lowest χ² value was 3. Figure S8A shows the χ² = 10 cutoff for the good fits. The number of structures that fit the SANS data well has greatly increased (compared to that of apo R67 DHFR in buffer), again suggesting a more inflated structure. An overlay
of the theoretical SANS profiles for the best and worst fits and the corresponding structures for these fits are shown in panels B and C of Figure S8, respectively. The Rg for the best structure is 23.79 Å, while that for the worst is 29.58 Å. The density plot (see Figure S8D) for the good fits (purple mesh) occupies most of the space sampled by our set of 461000 frames (dark gray mesh). Also, the range of Rg values obtained indicates no (or only transient) sampling of fully extended conformations for all four N-termini, which would have resulted in higher Rg values. The highest Rg sampled by the Monte Carlo simulation is 29.49 Å, while our model of R67 DHFR with four fully extended N-termini has an Rg of 36.25 Å. The Rg values for the good fits obtained using SASSIE ranged from 21.04 to 25.94 Å, with an average Rg value of 22.78 ± 0.87 Å. The wider sampling range and higher average Rg are both consistent with the GNOM analysis, indicating that the N-termini sample extensive conformations in the presence of betaine. The COM points for Met1 in the four N-termini (see Figure S8E) indicate the termini sample many positions both near the core of the protein and farther from the surface. From these data and analyses, the disorder in the N-termini becomes more pronounced upon addition of betaine, with the N-termini potentially acting as entropic bristles, sweeping out volume around the protein core.44

Osmolytes Probe Preferential Hydration of R67 DHFR. Hydration is important in the protein structure and function relationship. Three regions with different scattering-length densities are present in this experiment: (1) the protein, (2) the bulk solution of deuterated buffer containing hydrogenated osmolytes, and (3) the hydration shell surrounding the protein. Betaine or DMSO was used to probe the hydration of full-length R67 DHFR, while only betaine was used for our truncated R67 DHFR experiments. While a plot of the Rg values obtained by GNOM analysis does not show any significant trend upon addition of an osmolyte (Figure S9), the zero-angle scattering intensity, I(0), is sensitive to changes in hydration. The absence of a significant change in the Rg value is consistent with the N-termini remaining disordered upon addition of an osmolyte. Note that, earlier, an Rg value of 22.8 ± 0.3 Å was obtained from the SANS data for apo R67 DHFR in 20% deuterated betaine. As deuterated betaine was added to the deuterated buffer, the contrast between the hydration layer and the bulk was masked, and the Rg value represents the overall shape of the protein without any contributions from the hydration layer. With hydrogenated betaine, the Rg depends upon the volume of the hydration layer and the location of the waters in the hydration shell. Even though the hydration layer contrast will increase with the addition of osmolytes, thus causing an apparent decrease in the overall Rg, our deuterated betaine data indicate the intrinsic protein volume increases. Therefore, the expansion of the intrinsic volume and the apparent decrease in size from hydration contrast may compensate for each other, with the net result being a relatively consistent Rg for R67 DHFR in the presence of hydrogenated osmolytes. Decreasing I(0) values for both full-length and truncated R67 DHFR were observed with increasing concentrations of osmolytes. As shown in Figure 5, the data were fit to eq 2. The fits yield the volume of the hydration layer for R67 DHFR in the presence of osmolytes.
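The sketch below illustrates how a fit of the eq 2 form can extract the hydration-water volume Vw, and hence the water count nw = Vw/30 ų used in the next paragraph. The scattering-length densities are the MULCh values quoted above (the 10¹⁰ cm⁻² units cancel in the ratios), but the D2O value for ρw, the ∼4.0 × 10⁴ ų tetramer volume, and the I(0) ratios themselves are our assumptions and placeholders, not the paper's numbers.

# Sketch of extracting V_w (hydration-layer volume) by fitting I_s(0)/I(0)
# versus osmolyte volume fraction to the eq 2 form. Placeholder data.
import numpy as np
from scipy.optimize import curve_fit

RHO_W, RHO_S, RHO_P = 6.38, 0.817, 3.248   # assumed D2O water; betaine; protein
V_P = 4.0e4                                 # approximate tetramer volume, A^3 (assumed)

def i0_ratio(f_v, v_w):
    """I_s(0)/I(0) as a function of osmolyte volume fraction (eq 2, squared)."""
    term = 1.0 + f_v * (RHO_W - RHO_S) / (RHO_P - RHO_W) * (1.0 + v_w / V_P)
    return term ** 2

# Hypothetical normalized I(0) ratios at 0-20% betaine, with 1% noise
f_v = np.array([0.0, 0.025, 0.05, 0.10, 0.15, 0.20])
ratios = i0_ratio(f_v, 1250 * 30.0) * (1 + np.random.default_rng(1).normal(0, 0.01, 6))

(v_w_fit,), _ = curve_fit(i0_ratio, f_v, ratios, p0=[3.0e4])
print(f"V_w = {v_w_fit:.0f} A^3  ->  n_w = {v_w_fit / 30.0:.0f} waters")

The synthetic data are generated for 1250 hydration waters, so the fit recovers a value near the ∼1250 waters reported below for the full-length protein.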
The number of water molecules in the hydration layer is determined by dividing the observed water volume by the volume of a single water molecule (30 ų). The number of osmolyte-excluding water molecules associated with the full-length protein is 1285 ± 214 or 1253 ± 199 using betaine or DMSO, respectively, indicating similar numbers of water molecules. The number of water molecules (nw) excluding betaine from the hydration shell of the truncated protein is 380 ± 100. The difference in the number of hydrating waters (∼900) between full-length and truncated R67 DHFR indicates that the disordered N-terminal tails span a large volume in solution and are extensively hydrated. To compare the experimental hydration values with theoretical numbers, the solvent accessible surface area (ASA) of tetrameric, truncated apo R67 DHFR (2RH2) was calculated to be 11072 Ų using the Molecular Operating Environment program (MOE 2015 version). If we assume the area of a water molecule to be 9 Ų,45 this yields approximately 1230 water molecules hydrating the truncated protein. For another high-resolution crystal structure of truncated R67 DHFR (2GQV),9 the solvent accessible surface area was 11673 Ų; this structure predicts ∼1297 water molecules. To obtain a theoretical nw value associated with the full-length R67 DHFR, we used the 7936 good fits obtained from our SASSIE analysis. The ASA was determined using SurfaceRacer46 for each of the 7936 frames to obtain the nw values. An average of 1800 water molecules was predicted in the hydration layer. Table 2 compares our experimental results with the predicted values from the truncated crystal structures as well as the average value for the 7936 full-length models of R67 DHFR. The predicted ranges of values for waters in the hydration shell are higher than those measured by our SANS data.

Effect of Osmolytes on the Thermal Stability of R67 DHFR. DSC scans were performed to monitor the effects of betaine and DMSO on the thermal stability of R67 DHFR. This is an additional way to determine whether osmolytes are excluded from the protein surface. Previous thermal denaturation studies of R67 DHFR at pH 8 found reversible folding with a melting temperature of 70.95 °C and evidence of an intermediate state.47 Our DSC scans are shown in Figure 6. The data were fit to a three-state model, giving two melting temperatures (TM) that correspond to two events in the thermal unfolding of R67 DHFR. The TM1 and TM2 values in the absence of an osmolyte are 66.8 and 68.7 °C, respectively. Addition of 20% betaine increased the melting temperature of R67 DHFR by 2−3 °C, while 15% DMSO decreased the TM by 7−9 °C (see Table S2). Stabilization of R67 DHFR in the presence of betaine is consistent with preferential exclusion of the osmolyte from the protein surface. DSC was also performed on truncated R67 DHFR. The two TM values were decreased by 5 and 7 °C, respectively, compared to those of the full-length protein, consistent with the disordered N-termini stabilizing the enzyme (see Table S2). Addition of 20% betaine to truncated R67 DHFR resulted in stabilization of both TM values by 4 °C, while addition of 15% DMSO destabilized the protein by 7 °C. It has been reported that DSC of intrinsically disordered proteins or regions does not show an unfolding transition.48,49 If true, then the DSC signals of full-length and truncated R67 DHFR should report on the unfolding of the core "doughnut" structure. Also, addition of solutes should then have similar effects on full-length and truncated R67 DHFRs.
This behavior is seen in Figure 6. As addition of betaine increases the TM values for both R67 DHFR species by similar amounts (Table S2), it stabilizes the protein core, most likely by being excluded; in other words, the protein favors interaction with water.22,50 In contrast, DMSO destabilizes the R67 core structure, as the TM values are lowered to similar degrees for both full-length and truncated protein species. This behavior suggests a preferential interaction mechanism for DMSO with the protein.22,50 Again, DSC appears to report on the core, folded structure.

Pressure Perturbation Calorimetry. Another avenue for exploring the molar volume of R67 DHFR in the presence of betaine uses pressure perturbation calorimetry. From PPC, the thermal expansion coefficient for R67 DHFR was 8.7 × 10⁻⁴ K⁻¹ at 10 °C (Table 3), and it decreased as the temperature increased to 57 °C (Figure 6C). Structure-breaking polar groups on the surface of the protein are most likely responsible for the decrease in αs.38 The denaturation transition of R67 DHFR between 60 and 75 °C caused an increase in αs, which indicates an increase in the volume of R67 DHFR as the protein denatures. Integrating the area underneath this peak in the thermogram yielded a relative change in volume (ΔV/V) of +0.0013. A TM for the denaturation of R67 DHFR of 67.5 °C was calculated from the PPC. This value matches well with the conventional DSC analysis of R67 DHFR at pH 8 (Table S2).47 As a control, PPC was also performed on the truncated form of the protein (see Figure 6D). The αs at 10 °C for truncated R67 DHFR (2.2 × 10⁻³ K⁻¹) was twice that of the full-length protein (Table 3). Another interesting characteristic of the PPC for the truncated R67 DHFR was that no denaturation transition was noted (Figure 6D). As there is a clear transition in the DSC data for truncated DHFR (see Figure 6B), the lack of a denaturation transition in the PPC thermogram is not due to the protein being unfolded. Additionally, we note the truncated enzyme was active, indicating that it was not unfolded. A balance exists between elements that contribute to a negative volume change (i.e., loss of voids in the protein and the electrostriction of water around polar and charged groups that are more exposed upon unfolding) and those that contribute to a positive volume change (a larger thermal expansivity for the unfolded state vs the folded state and changes in the hydrophilic−hydrophobic balance of the exposed groups).38,51 Additional effects may be the loss of clathrate water in the R67 active site pore9 and dissociation of the tetramer to four unfolded monomers. The relative contributions of these effects lead to the observed α value. For a positive ΔV/V (as in full-length R67 DHFR), the positive effects must predominate. For ΔV/V to be zero (as in truncated R67 DHFR), the various effects appear to be balanced. Most monomeric, globular proteins show negative ΔV/V values.51,52 Because the structural differences between the truncated and full-length R67 DHFRs are the four disordered N-termini, they appear to be the key determinant of the positive ΔV/V value seen for full-length R67. PPC thermograms were also recorded in the presence of betaine for both full-length and truncated R67 DHFRs (Figure 6). Addition of 10% (w/w) betaine to the full-length protein caused an increase in the αs value at 10 °C (9.7 × 10⁻⁴ K⁻¹) relative to the protein in the absence of betaine (Table 3).
This increase in α_s suggests there is an increase in the protein volume that is likely due to extension of the collapsed N-termini in the presence of betaine. Further increasing the betaine concentration to 20% (w/w) decreased the α_s at 10 °C to 8.9 × 10⁻⁴ K⁻¹, similar to the value in the absence of betaine. This reduction most likely reflects a decrease in the size of the solvation shell of the full-length protein. Similar effects of various solutes on the α values for RNase38 and SNase53 have been observed previously and ascribed to effects on the hydration shell. The increased volume at 10% betaine correlates with our SANS result of an increased R_g for full-length R67 DHFR in buffer containing 20% deuterated betaine. For truncated R67 DHFR, the α_s value at 10 °C decreases with each increase in betaine concentration, from 1.9 × 10⁻³ K⁻¹ at 10% betaine to 1.5 × 10⁻³ K⁻¹ at 20% betaine. The volume of the truncated protein, including its water shell, decreases as betaine is added, decreasing the concentration of water in the solution.

■ DISCUSSION

The R67 DHFR monomer is 78 amino acids long, and around 16−20 N-terminal residues are disordered; therefore, ∼20−25% of its sequence is unstructured. R67 assumes a compact structure by forming a homotetramer. Chymotrypsin treatment of the folded protein results in a truncated product, which is almost fully active but 2.6 kcal/mol less stable.7 Expression of the truncated protein from a shorter gene sequence does not confer trimethoprim resistance. Thus, the N-termini are essential for protein expression and/or stability but not for catalysis. To understand the conformational space sampled by the N-termini of R67 DHFR, we characterized full-length and truncated R67 DHFR using SANS.

Apoprotein Analysis. The best fits for apo R67 DHFR indicate compaction of two N-termini on one side of the ordered tetramer core, whereas the other two N-termini prefer to remain partially extended (see Figure 2D and Figure S5). In many of these poses, the N-terminal residues interact with each other and/or with residues exposed on the monomer−monomer interface. Intramolecular and intermolecular interactions are both feasible. These interactions lead to compaction of the overall shape and seem likely to be why the N-termini provide 2.6 kcal/mol of stability to R67 DHFR.7 Data mining of the conformers fitting the SANS profile was accomplished using a Python script. Frequent interactions were identified by counting the number of times the center of mass (COM) of each amino acid occurs within 5 Å of the COM of every other residue. Figure S10A shows a heat map of the minimum distance between residues. Figure S10B provides a heat map of the number of these interactions versus the amino acid number (1−78 for the first monomer, 79−156 for the second monomer, 157−234 for the third monomer, and 235−312 for the last monomer). The symmetry of the structure provides an initial understanding of these plots, as monomers nearby in space interact (A and C, or B and D), while distant monomers do not. In Figure S10, intramolecular interactions can be visualized by the points near the diagonal, while intermolecular interactions are indicated by the areas describing interactions between residues 1−78 and 157−234 (for example). Using the symmetry of the core structure, the number of potential interactions was summed, using the rationale that a stabilizing interaction would occur in more than one monomer. Supplemental Excel sheet 1 lists these amino acid pairs.
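As a rough illustration of this counting step, a minimal sketch in Python is given below; the per-frame file names are hypothetical, and the MDAnalysis/SciPy route shown here is only one way to reproduce such a script, whose exact implementation is not given in the text.

import numpy as np
import MDAnalysis as mda
from collections import Counter
from scipy.spatial.distance import cdist

def com_contact_pairs(pdb_path, cutoff=5.0):
    """Return residue pairs (1-based, 1-312 for the tetramer) whose centers of mass lie within `cutoff` angstroms."""
    u = mda.Universe(pdb_path)
    coms = np.array([res.atoms.center_of_mass() for res in u.residues])
    close = cdist(coms, coms) < cutoff
    i, j = np.where(np.triu(close, k=1))   # unique pairs only, self-pairs excluded
    return list(zip(i + 1, j + 1))

# Accumulate pair frequencies over all conformers that fit the SANS profile
counts = Counter()
for frame in range(7936):                   # hypothetical per-frame PDB file names
    counts.update(com_contact_pairs(f"fit_{frame:05d}.pdb"))

frequent = {pair: n for pair, n in counts.items() if n > 1000}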
Three bins were noted: first, pairs that occur more than 1000 times (22 pairs); second, intramolecular pairs that occur in all four monomers (8 pairs); and third, intermolecular pairs that occur in all four monomers (1 pair). Hydrophobic, polar/uncharged, and charged residues are identified and colored in the Excel sheet as described by Eisenberg et al.54 Among the pairs that occur >1000 times, hydrophobic residues occur 50% of the time, while polar/uncharged amino acids occur 40% of the time and charged residues 10% of the time. These pairs mostly describe N-terminal to N-terminal interactions. As the N-terminal sequence contains several hydrophobic side chains (M1, I2, V8, A13, F16, V17, and F18), these amino acids could also potentially form hydrophobic interactions with similar exposed side chains on the folded protein surface. In particular, each of the two symmetry-related W45 residues provides ∼94 Å² of ASA for interaction. Short distances were observed from most of the hydrophobic residues mentioned above to W45 and its symmetry-related W201 residue. Also, cation−π interactions could be occurring transiently, as R3 often occurred near W45, as did M1 (the N-terminal residue). The crystal structure of truncated R67 DHFR shows exact 222 symmetry.4 While this symmetry could also apply to each of the disordered N-termini, it is more likely that they impart asymmetry via their disorder.

Analysis of Binary and Ternary Complexes. Table 2 summarizes the various R_g values obtained from GNOM and SASSIE analysis. No substantial effect of ligand binding was observed on the conformations sampled by the N-termini, as the frames that provided the best fits to the SANS data for both the binary and ternary complexes mostly overlap with those sampled by the apoprotein. GNOM analysis yielded comparable R_g values for the apoprotein and ligand-bound protein samples, and the conformers obtained from our SASSIE analysis placed the disordered tails near the sides of the active site pore. Data mining of the ternary complex conformers that fit the SANS data was also performed. Figure S11A plots the minimum distance between the COMs of residues. A pattern similar to that seen in apo R67 DHFR is observed. Supplementary Excel sheet 2 lists those amino acid pairs whose centers of mass are ≤5 Å apart. Three bins were again considered: 47 pairs occur more than 1000 times, while nine intramolecular and two intermolecular pairs occur in all four monomers. The same types of interactions are observed as in the apo conformers, with hydrophobic residues occurring 56% of the time, polar/uncharged 32% of the time, and charged 13% of the time. One difference is that the N-terminal methionines now interact very frequently with W45 or W201 (symmetry-related residues).

Effects of Osmolytes. The main difference in our data arises when betaine is added, which leads to a more swollen state of R67 DHFR. Osmolytes that are excluded from protein surfaces are known to stabilize the protein via the preferential exclusion mechanism.22 The ability of TMAO to force folding of a modified RNase was attributed to its preferential exclusion from the peptide backbone (also termed the solvophobic effect).21,55 While R67 DHFR was found to be stabilized upon addition of betaine in our DSC studies, no disorder-to-order transition was observed for the disordered tails in our analysis of the SANS data.
On the contrary, SASSIE finds that the addition of betaine results in greater conformational sampling of the disordered tails, from being collapsed near the sides of the protein to being partially extended. In our previous studies of the interaction of betaine with folate and other compounds, we found that betaine can compete with water to form stable interactions.56 Betaine prefers to interact with aromatic surfaces as well as cationic and amide nitrogen atoms, while water prefers to interact with carboxylate, phosphate, amide, and hydroxyl oxygens.56,57 In the case of R67 DHFR, betaine may interact with some residues in the N-termini, hindering the collapsed conformations from being sampled. Another possible explanation for the extensive sampling of the disordered tails upon addition of betaine may be changes in the solvent structure. Studies have characterized the effects of solutes on the structure of bulk water as well as of hydrating water molecules around proteins.37,38 The nature and extent of these alterations depend on the chemical properties of the solutes. Polar and hydrophilic surfaces were found to be water structure breakers, whereas hydrophobic surfaces were described as water structure makers.37 Stabilization of RNase A by 1.5 M sucrose was previously observed, while accompanying pressure perturbation calorimetry studies showed nonlinear effects on α, the apparent coefficient of thermal expansion. Specifically, RNase is less compact in 0.5 M sucrose, as indicated by an increased α, than in the absence of sucrose, while the protein becomes more compact at 1.5 M sucrose, yielding a decreased α.38 The differences in α were attributed to changes in protein hydration.

Hydration Studies. Experiments that have examined protein hydration have used varying techniques. A typical approach calculates the accessible surface area (ASA) and divides the value by 9 Å² to predict the number of solvent waters in the hydration shell. This yields a high value. In contrast, experimental approaches often yield smaller numbers of hydration waters. For lysozyme, ASA calculations predict ∼900 waters of hydration.27 Experimental techniques for studying lysozyme hydration include NMR,58 excess heat capacity,59 dielectric relaxation,60 and X-ray diffraction.61 These experimental approaches yield 121−900 hydration waters, indicating that the value is sensitive to the technique used as well as the experimental conditions employed. A previous SANS study of hydration in lysozyme used different osmolytes.27 With added betaine, triethylene glycol, PEG400, or PEG1000, 84 ± 5, 114 ± 24, 156 ± 8, or 347 ± 11 hydration waters were observed, respectively, along with different water shell thicknesses. The increase in the number of waters (n_w) may be due to osmotic stress effects combined with volume exclusion as the size of the osmolyte increases.45,62,63 Alternatively, fewer waters may be observed if the osmolyte interacts with the protein surface. Both factors likely play a role in observation of an n_w value that is lower than the predicted upper limit. In our SANS studies of R67 DHFR, we used the osmoprotectant betaine, as it is often excluded from the protein surface.64 Our SANS experiments allow three areas of different contrast to be delineated: the protein, the bulk solution containing osmolytes, and the hydration shell that excludes osmolytes. The number of water molecules (n_w) responsible for the exclusion of betaine from the truncated R67 DHFR surface was found to be 380 ± 105.
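For reference, the ASA-based upper-limit estimate used throughout this comparison reduces to a single division; a minimal sketch follows, where the 9 Å² per-water footprint is the value adopted from ref 45 and the ASA inputs are those quoted above for the truncated crystal structures.

AREA_PER_WATER = 9.0  # approximate surface footprint of one water molecule, in square angstroms (ref 45)

def predicted_waters(asa_sq_angstrom):
    """Upper-limit number of first-shell hydration waters from a solvent accessible surface area."""
    return asa_sq_angstrom / AREA_PER_WATER

print(round(predicted_waters(11072)))  # 2RH2 structure -> ~1230 waters
print(round(predicted_waters(11673)))  # 2GQV structure -> ~1297 waters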
The measured value of ∼380 waters is smaller than the predicted 1230−1297 waters from ASA calculations on the crystal structures (2RH28 and 2GQV9). Because of the high resolution and low temperature factors of the 1.1 Å resolution structure of R67 DHFR (2GQV), 85 waters per monomer were identified in the first hydration shell (e.g., forming a H-bond with the protein surface) and 106 in higher-level shells.9 This yields 340 waters in the first hydration shell of the tetramer. This value compares well with that from our SANS experiment with truncated R67 DHFR, which yields an n_w value of 380 ± 100. Thus, both SANS and crystallography appear to measure polar bound waters. When SANS was performed on full-length R67 DHFR, addition of both betaine and DMSO yielded n_w values of ∼1250 waters hydrating the protein surface. This value is lower than the average n_w value of 1800 waters predicted using ASA calculations on PDB files generated by MD and directed Monte Carlo analyses in SASSIE. We also counted the average number of waters associated with the R67 DHFR tetramer in our MD trajectories and found an average of 1647 (range of 1446−1879). Again, the experimental value is lower than the predicted upper limit, suggesting some level of interaction of the osmolyte with the protein surface. When the n_w values for truncated (∼380) and full-length R67 DHFRs (∼1250) are compared, the difference is ∼900 waters. This indicates each N-terminus is well hydrated, by ∼225 waters.

To test whether betaine and DMSO were interacting with R67 DHFR, we performed DSC experiments. Excluded osmolytes typically increase the stability of proteins by increasing the level of hydration, while interacting osmolytes decrease protein stability.21,65 Additionally, DSC experiments on intrinsically disordered proteins (IDPs) typically lack cooperative structural transitions,48,49,66 so our results appear to report on the effects of the osmolyte on the structural core of the protein. This idea is supported by a similar 4−5 °C increase in T_M when betaine is added to either truncated or full-length R67 DHFR. Thus, betaine appears to be mostly excluded from the surface of the core of the R67 DHFR structure (supported by the DSC results), while there is some level of interaction of the osmolyte with the disordered N-termini (supported by our deuterated betaine SANS and PPC results). Addition of 20% betaine increased the T_M values by 4−5 °C for both full-length and truncated R67 DHFRs, while addition of 20% DMSO decreased the T_M values by 5−7 °C. These results were surprising given that the numbers of hydrating waters for these two osmolytes were within error as measured by our SANS experiments. Though the n_w values are similar, the water locations may vary. DMSO can form hydrophobic interactions, whereas betaine interacts with aromatic, amide, and cationic nitrogens exposed on the protein. Thus, the two osmolytes may lead to the exclusion of water from different protein surfaces, which can in turn result in the variable effects on protein stability.

■ CONCLUSION

While it is confounding that disordered regions can provide some level of stability to a folded protein, we find this is the case for R67 DHFR. From our SANS data, we find that the disordered N-termini prefer to sample conformational space near the sides of the apoprotein. This allows the N-termini to interact with themselves as well as with the monomer−monomer interface, providing 2.6 kcal/mol of stability to R67 DHFR.7
According to van der Lee et al.,67 entropic chains are a form of IDP that remain disordered. This applies to the N-termini of R67 DHFR, as they do not fold upon addition of a ligand. Addition of betaine to R67 DHFR results in a larger R_g and SASSIE fits that predict a wider sampling volume. These results suggest the N-termini are responsive to their environment. It is tempting to speculate that, in the cell, the N-termini may interact in a similar fashion with small molecules or macromolecules and provide an entropic bristle function, whereby they sweep out volume around the protein core. This would prevent large molecules from entering this space but allow penetration of small molecules.68 Entropic bristles have also been proposed to enhance protein solubility and prevent aggregation.44,69−71 Finally, we note there are several R-plasmid DHFRs that differ only in the sequence of their N-termini.10−12 All have N-termini of similar lengths, which suggests it may be the length of the N-termini, more than the sequence, that is important for function. A longer disordered sequence (as in our His-tag constructs cloned in pRSETB, with an additional 30 amino acids) leads to ∼2-fold increases in K_m values for NADPH and DHF.72 Another study found the R67 DHFR N-terminal sequence was essential for evolvability.73 These various observations suggest that R67 DHFR's disordered N-termini play roles in stability, solubility, evolvability, and substrate access. Finally, both the betaine studies and the preferential hydration measurements indicate the disordered tails are highly hydrated, consistent with large, exposed surface areas. These data sets support the importance of water and solutes in the R67 structure−function relationship. As the disordered segments are exposed and our SANS results show the polar regions are well hydrated, it seems likely that betaine can compete well with water for solvation of aromatic groups. Indeed, Uversky suggested intrinsically disordered proteins (IDPs) are "multifarious interactors".68 While he meant that IDPs can often interact with various protein partners, here we wonder if disordered regions can interact with different solutes, which can subtly change their behavior. This would add another layer of complexity to the role of IDPs and disordered regions in the cell.

Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.biochem.7b00822. A 20% SDS−PAGE gel indicating clear separation of full-length and truncated R67 DHFRs, a flow diagram of our steps using MD and SASSIE to find N-terminal conformers that fit the SANS data, another figure depicting the SANS profile and GNOM analysis for truncated R67 DHFR as well as a dimensionless Kratky plot comparing full-length and truncated R67, the COM position for the N-terminal methionine of apo R67 DHFR conformers that fit the SANS data, SASSIE analysis of the data for the binary complex (R67 DHFR−NADP+), the ternary complex (R67 DHFR−NADP+−DHF), and R67 DHFR in the presence of 20% deuterated betaine, plots of R_g for full-length and truncated R67 DHFR probed after addition of betaine or DMSO, data mining plots for apo and ternary complex fits, and two tables listing R_g values from the various programs and T_M values from DSC data (PDF). Excel sheet from the data mining of apo conformers (XLSX). Excel sheet from the data mining of ternary conformers (XLSX).
Methodology for In Situ Microsensor Profiling of Hydrogen, pH, Oxidation–Reduction Potential, and Electric Potential throughout Three-Dimensional Porous Cathodes of (Bio)Electrochemical Systems

We developed a technique based on the use of microsensors to measure pH and H2 gradients during microbial electrosynthesis. The use of 3D electrodes in (bio)electrochemical systems likely results in the occurrence of gradients from the bulk conditions into the electrode. Since these gradients, e.g., with respect to pH and reactant/product concentrations, determine the performance of the electrode, it is essential to be able to measure them accurately. Apart from these parameters, local oxidation–reduction potential and electric field potential were also determined in the electrolyte and throughout the 3D porous electrodes. Key was the realization that the presence of an electric field disturbed the measurements obtained by the potentiometric type of microsensor. To overcome the interference on the pH measurement, a method was validated in which the signal was corrected for the local electric field measured with the electric potential microsensor. The developed method provides a useful tool for studies of electrode design, reactor engineering, measuring gradients in electroactive biofilms, and flow dynamics in and around 3D porous electrodes of (bio)electrochemical systems.

Electrochemical technology offers a clean and powerful tool for both treatment of waste streams and chemical synthesis.1 (Bio)electrochemical systems catalyze (microbial) conversions by applying an electric potential to an electrode on which microbes grow. The microbes use the applied energy directly as electrons or indirectly as hydrogen, which is formed at the cathode from electrons and protons.2,3 Since most conversions occur at the electrode surface, local gradients of, e.g., protons and hydrogen, with concentrations different from the bulk, can be expected. Most often in (bio)electrochemical studies, only bulk conditions are measured, which can be nonrepresentative of the local conditions around the electrode surface.4 Several theoretical studies have modeled the local conditions near the cathode,5,6 but practical support for these studies is rare. To measure local concentration gradients, microsensors could form powerful tools. Microsensors have a thin tip (down to 1 μm), which allows measurements with the same spatial resolution as the tip size. The sensors can be moved along a profile axis to measure gradients.7,8 Microsensors have been applied in many different fields, including biochemistry,9,10 plant science,11−13 microbiology,14,15 and biomedicine,16,17 but their application in electrochemical systems remains limited. Amperometric microsensors, which measure a current signal resulting from a redox reaction on the microelectrode surface, are expected to be suitable for application in electrochemical systems. These Clark-type18 microsensors are used to measure H2, H2S, O2, NO, N2O, and CO2 gradients8,19 in biofilms growing on 2D electrodes.8,20 To measure, e.g., pH, oxidation−reduction potential (ORP), and electric field potential, potentiometric sensors are used, which measure a potential drop over the sensor tip membrane between the sensor electrode and an external reference electrode.8,21
Despite their successful application in the aforementioned fields, application in electrochemical systems is limited due to signal interference when the sensor is placed in an electric field22 with a significant distance (several mm) between the sensor electrode and the external reference electrode tip.23,24 To use potentiometric microelectrodes for analysis of local gradients in (bio)electrochemical systems, the interference from the electric field needs to be tackled. The best way to tackle the issue would be to minimize the distance between the sensor electrode and the reference electrode.23,24 Some studies used so-called "combined sensors" to measure pH in electrochemical biofilms. In these custom-made sensors, the reference and measuring electrodes were built into the same sensor, connected with a conductive liquid.8 Although this gave reliable results, the thicker microsensor tip did not allow the sensor to be moved over distances longer than 600 μm without piercing millimeter-wide holes in the biofilm.23,25 The short distance was enough to measure inside biofilms on 2D electrodes but could not be used to measure inside the several-millimeter or even centimeter-thick 3D porous electrodes typically used in bioanodes or biocathodes.26−28

In this study, a methodology for microsensor application in (bio)electrochemical systems was developed to measure gradients in the electrolyte and, for the first time, throughout porous 3D electrodes. The methodology development consisted of three steps. First, a reactor was designed with key features that allow microprofile measurements over a range of several centimeters while keeping anaerobic conditions and continuous leak-free liquid electrolyte recirculation of 10 L/h. Practical tips and protocols (with video instructions) are provided for the use and careful handling of the microsensors to facilitate future use. Second, the reactor design was used to show a microprofile of H2 gradients in the reactor. Third, a correction method is presented and validated to overcome the interference of the electric field during potentiometric microsensor measurements. With this method, potentiometric microsensors can finally be applied for accurate gradient measurements in (bio)electrochemical systems, even at high current (−10 kA/m³). The correction method was used to show gradient profiles of the electric field potential, ORP, and pH. The profiles in this study showed significant differences between bulk and local conditions at the electrode surface, which highlights the importance of the presented method and its possible application to mass transfer studies.

■ EXPERIMENTAL SECTION

Reactor Setup. All measurements of this study were performed in an electrochemical CO2-fed reactor. The electrochemical reactor consisted of an anode (Ti/Pt-Ir MMO, thickness 1 mm, Magneto Special Anodes BV, Netherlands) and three cathode layers (graphite felt, thickness 3 mm, Rayon Graphite Felt, CTG Carbon GmbH, Germany), separated by layers of three spacers to study three different distances from the anode. The anode and cathode compartments of the cell were built with Plexiglas flow-through plates with 21.3 cm² psa and separated by a cation exchange membrane (Fumasep FKS, Fumatech BWT GmbH, Germany). The three cathode layers were separated with spacer layers (Figure 1, Figure S1) and connected in parallel with a titanium wire (0.8 mm thick, grade 2, Salomon's metalen, the Netherlands), with 1 Ω between each connection and the working electrode plug (Figure 1).
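Because each layer is wired through its own 1 Ω resistor, the per-layer current follows directly from the measured voltage drop via Ohm's law (I = V/R); a minimal sketch with hypothetical voltmeter readings is shown below.

R_OHM = 1.0  # series resistance per cathode layer, ohms

# Hypothetical voltage drops (V) measured over the three 1-ohm resistors
v_drop = {"top": 0.012, "mid": 0.024, "bottom": 0.164}

currents_mA = {layer: 1000.0 * v / R_OHM for layer, v in v_drop.items()}  # I = V/R
total_mA = sum(currents_mA.values())
shares = {layer: 100.0 * i / total_mA for layer, i in currents_mA.items()}
print(currents_mA)  # {'top': 12.0, 'mid': 24.0, 'bottom': 164.0} mA
print(shares)       # bottom layer carries 82% of the 200 mA in this example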
The graphite felt was a non-microporous, low-surface-area material (<1 m²/g), as determined by N2 physisorption. Since the microprofiles would be made throughout the cathode layers, the cathodes were constructed in such a way that the microsensor tips would not touch the Ti wire or spacers (Figure S1C). The construction of the cathode layers is shown in Movie S1. Between the cathode and the membrane, a "bypass outlet" was placed at the outflow side of the electrochemical cell (left in Figure S1A). The function of the bypass outlet was to remove hydrogen gas that would otherwise accumulate below the cathode at current densities of −10 kA/m³ cathode, hindering proton transfer between the anode and cathode. The catholyte flow distribution between the two outflow ports above and underneath the cathode was 1:1, calculated from flow rate measurements at the reactor outlets. The details of the reactor operation parameters are described in the SI, section "Reactor operation".

Microsensors and Profiling. A laboratory stand (LS18), micromanipulator (MM33-2), motor-driven micromanipulator stage (MMS), and motor controller (MC-232) were combined for precise manipulation of the microsensors (Unisense A/S, Denmark). All equipment was installed and operated according to the manuals. A H2 microsensor (H2-50), pH microelectrode (pH-50), oxidation−reduction potential microelectrode (RD-50), reference microelectrode (REF-100), and electric potential electrode (EP-100) were used for microprofiling, all with an indicated tip size of 40−60 μm (all from Unisense A/S, Denmark). The relative sensor lengths were determined under the macroscope to enable combination of different sensor measurements in plots. For the potentiometric microsensors, two external capillaries were installed, 5 mm above (fixed ref TOP, Figure S1A) and 7 mm below the cathodes in the reactor (fixed ref BOTTOM, Figure S1A), in addition to the reference used by the potentiostat. The capillaries were filled with gelified 3 M KCl and connected to Ag/AgCl reference electrodes via tubing filled with liquid 3 M KCl. The calibration and measurements followed the manuals, combined with the microprofiling setup (SI sections "Protocol microsensor calibration" and "Protocol profiling"). SensorTrace Profiling and Logging software (Unisense A/S, Denmark) were used in this study. Further details about how a profile was measured follow in the Results section.

Key Features of Reactor and Sensors to Allow Microprofiling. Reactor Design. To allow in situ measurements with microprofiling sensors under continuous leak-free electrolyte recirculation, the setup contains some key features (Figures 1 and 2A). The electrochemical cell was fixed horizontally at a 17° angle on a ground plate to allow gas to escape from the higher outlets and to allow microprofiling with the microsensors perpendicular to the graphite felt cathode (Figure S1C, Figure 2A). The outer plate of the electrochemical cell was replaced with a Plexiglas closing plate with three wells (Figure S1A). The construction of the electrochemical reactor cell is shown in Movie S3. The inlet and outlet tubes of the catholyte compartment in the electrochemical cell were equipped with switchable three-way valves for closing the catholyte compartment while installing microsensors. To measure a microprofile, the catholyte recirculation was briefly stopped to replace the well caps with a "sleeve" (Figure 2C) through which the microsensor enters.
The microsensor neck was greased with silicone grease to ensure a watertight seal between the neck and the sleeve. The silicone grease allowed electrolyte-leak-free sensor movement. After securing the microsensor in the sleeve, the recirculation was switched on again and a profile could be made, during which the microsensor was brought down with a motor tool (default velocity and acceleration of 1000 μm/s and 1000 μm/s², step size 100 μm) (Figure 2B) to pierce the graphite electrode and profile a gradient through the cathode layers down to the membrane inside the cell. A fully detailed protocol is described in SI section "Protocol profiling" and shown in Movie S2.

Microsensors Used for Profiling. Before measuring microprofiles, the microsensor characteristics were tested (Table 1). In this study, four different sensors were used: one amperometric sensor to measure H2, and three potentiometric sensors to measure electric field potential (EP), oxidation−reduction potential (ORP), and pH. The amperometric H2 sensor was used to test the feasibility of measuring microprofiles with the presented reactor design. The potentiometric sensors were used to develop accurate microsensor measurements in the electric field (Table 1, signal correction), which is explained in more detail in the Correction for Electric Field Interference in Potentiometric Measurements section. First, the response times of the different sensors were determined by moving the sensors to different heights in the system and measuring the signal over time (Figures S3−S5). Most signals were stable after 5 s, so this time was increased to 10 s of stabilization time plus 5 s of measuring time during the measurements to avoid instability offsets caused by, e.g., hydrogen bubbles (Table 1).

Accurate Cathode Position Determination with ORP Sensors. Apart from the technical differences between the sensors shown in Table 1, the sensor tips also showed visual differences (Figure S2). Most sensor tips were made of glass (H2, EP, pH), but the ORP sensor had a metal tip. The ORP microsensor measured the cathode potential when the tip was in contact with the graphite felt layers. The cathode potential values differed significantly from the electrolyte ORP, so the ORP profiles clearly showed the position of the cathode layers. Therefore, the cathode layer positions were determined from the ORP measurements and used in the profile plots of the other sensors.

Local Hydrogen Concentration Gradients Measured with the Microsensor. With the new reactor design and profiling method, hydrogen concentration gradients were measured in the three wells of duplicate reactors with active catholyte recirculation (Figure 3). Prior to the profile, the sensors were calibrated according to the manual. Since the calibration was done at a lower temperature than the profile measurement, a temperature correction was applied in the conversion of the mA signal to the dissolved hydrogen concentration (SI section "Protocol microsensor calibration"; a sketch of such a conversion follows below). After calibration, profiles of the hydrogen concentration distribution were made in duplicate, with and without current applied to the reactor. Figure 3 shows that cycles 1 and 2 (.1 and .2) are similar, yet not exactly the same in all locations. Although the hydrogen concentration was high in the bottom cathode, the theoretical maximum saturation concentration of hydrogen at the salinity and temperature used in this experiment (709 μmol/L) was not detected in duplicate.
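Returning to the calibration mentioned above, one plausible form of the signal-to-concentration conversion with a temperature correction is sketched here; the two-point calibration readings are hypothetical, the assumption that the amperometric signal tracks H2 partial pressure is standard for Clark-type sensors, and the exact procedure in the Unisense manual may differ.

# Two-point calibration of the amperometric H2 sensor (hypothetical readings)
signal_zero = 3.0    # sensor signal in H2-free electrolyte (arbitrary units after A/D conversion)
signal_sat = 950.0   # sensor signal in H2-saturated calibration liquid at the (lower) calibration temperature

# Saturation concentration at the measurement temperature/salinity (umol/L, value from the text);
# rescaling to this value embodies the temperature correction, since the sensor responds to
# partial pressure while solubility changes with temperature.
c_sat_measurement = 709.0

def h2_concentration(signal):
    """Linear sensor response rescaled to measurement-condition solubility."""
    saturation_fraction = (signal - signal_zero) / (signal_sat - signal_zero)
    return saturation_fraction * c_sat_measurement

print(h2_concentration(500.0))  # example reading -> ~372 umol/L dissolved H2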
Differences between the duplicate measurements can be used to identify measurement interference by, e.g., gas formation. To relate the local hydrogen concentrations to the distribution of local current over the different cathode layers, the applied current was measured over the resistances placed before each connection to the cathode layers. The current was not distributed evenly over the top, mid, and bottom cathodes. A major part of the −200 mA supplied to the cathode was led to the bottom cathode closest to the counter electrode (±82%, Table S1).

Correction for Electric Field Interference in Potentiometric Measurements. Measuring Electric Potential Offset vs Fixed External Reference Electrode. Since the current was not evenly distributed between the different cathode layers, it was expected that the local electric field would also show gradients over the different cathode layers. To investigate this, the EP (electric potential) sensor was used, which measures the potential difference between the microsensor tip and an external reference electrode; in this study, Ag/AgCl was used (Table 1). Since the EP sensor is also an Ag/AgCl electrode, the value of the electric field should be 0 when no electric field is present and increase with increasing electric field.21 To measure the electric potential in the electrochemical system, a profile was measured with the EP microsensor throughout the electrolyte and the cathode layers. First, a profile was measured without current applied to the system. During OCV, the electric potential difference between the sensor tip and the fixed reference electrode is constant at 20 mV, with the exception of one jump to 0 mV around the middle cathode layer (Figure 4, blue dashed line). On the contrary, when the cathode is current controlled, the electric potential difference versus the same fixed reference electrode shows steep gradients and increases in jumps at each cathode layer when moving the sensor away from the fixed reference (black line). Since most of the current was distributed to the bottom cathode, the local electric field was expected to be greater at the bottom electrode. There, a steeper gradient is seen throughout and below the bottom cathode layer (black line, left gray plane). The difference between the OCV and current-controlled measurements shows that applying current affects the local electric field. With the fixed reference electrode positioned in this field of steep increase (bottom reference, yellow line), the gradient pattern is the same, with 0 mV offset when the moving electric potential electrode tip was (observed by eye) close to the bottom fixed reference electrode (depth 30 mm). Although the difference between the black and yellow profiles seems constant, the difference decreases from lower (33 mm, 150 mV) to higher locations (−4 mm, 100 mV) (Figure S6). The depth with 0 mV offset (30 mm) was determined to be right next to the bottom fixed reference electrode.

ORP Profile Signal Corrected for Local Electric Field Potential. Next to the EP sensor, the ORP microsensor also uses an external reference electrode (Table 1). Based on the EP profile (Figure 4), local mV offset signals can be expected in microsensor measurements with current applied to the system. Therefore, using the raw output data from potentiometric sensors would result in unreliable values. Damgaard et al.21 suggested that a local EP correction could be used to convert the potentiometric microsensor signals into accurate data.
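A minimal sketch of this correction is given below, assuming the raw potentiometric profile and the EP profile were recorded against the same fixed reference and aligned on a common depth grid; all array values are hypothetical.

import numpy as np

# Depth-aligned profiles in mV, measured versus the same fixed Ag/AgCl reference
depth_mm = np.array([-5.0, 5.0, 15.0, 25.0, 33.0])
orp_raw_mv = np.array([-620.0, -700.0, -780.0, -900.0, -950.0])
ep_mv = np.array([0.0, -40.0, -90.0, -180.0, -250.0])  # local electric-potential offset

# Subtract the local electric-field offset from the raw potentiometric signal
orp_corrected_mv = orp_raw_mv - ep_mv

# For pH electrodes, the corrected mV signal is converted with the sensor calibration;
# with a Nernstian slope of ~59 mV per pH unit, even a residual 6 mV offset maps to ~0.1 pH unit.
NERNST_SLOPE_MV_PER_PH = 59.0
ph_error_from_6mv = 6.0 / NERNST_SLOPE_MV_PER_PH  # ~0.1 pH unit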
The ORP and pH microsensors used in this study were shielded against electric field disturbances with a caging technique similar to that used for the EP microsensor.21 Therefore, the three microsensors were expected to be disturbed by the electric field in similar ways. In this study, the hypothesis from Damgaard et al.21 was tested. Figure 5 shows the raw ORP data (black/gray) and the local EP (green) used for the ORP data correction (red), measured in two reactors (1 and 2). The correction was done by subtracting the local EP difference versus the fixed top reference (Figure 5A, green) from the raw ORP signal measured versus the same fixed reference (Figure 5A, black/gray). To validate the accuracy of the correction, the cathode layer potentials were compared with the raw data. Since the cathode layers were connected in parallel to the potentiostat, the cathode layer potentials should be equal. This is indeed shown for the data corrected for the local EP, with deviations of at most 200 mV (∼15%) (Figure 5B, 'Corrected'), but not for the uncorrected signal (deviations up to 360 mV, ∼28%), showing that the EP correction results in reliable ORP values. The corrected values could be compared to the ORP without current applied to the system (Figure 5B, blue). It should be noted that the OCV profile and the duplicate ORP/EP profiles were measured in a different (but similar) reactor, so the exact cathode positions differed (the top cathode was placed more to the left). The ORP of the cathode layers without current applied to the reactor is shown in Figure 5B (blue); all ORP values without current are less negative than with current.

pH Microsensor Signal Interfered by Applied Current. The pH microsensor also uses an external reference electrode (Table 1), so its mV signal was also expected to be interfered with by the presence of an electric field. However, unlike the ORP measurements, the value around the cathodes could not be used as verification. Since the pH of the catholyte bulk recirculation is measured outside the electric field (Figure 1), this was used as a validation method in an experiment to investigate the magnitude of the signal deviation in relation to the current magnitude. Different current magnitudes were applied to the electrochemical system while measuring the pH with a microsensor. To measure the deviation, the tip of the pH microsensor was placed at the influent port of the catholyte recirculation (Figure 6A, left). With this measurement, the microsensor pH could be compared to the recirculation pH. Figure 6A shows the deviation between the pH reported by the microsensor (with the top reference at a depth of −5 mm, 35 mm from the pH microsensor tip) and the recirculation pH, plotted against increasing cathode current. When no current was applied to the system, a deviation between 0.06 and 0.3 pH units was measured between the microsensor and recirculation values; with current applied, the signal deviates strongly from the recirculation pH.

pH Microsensor Signal Corrected with Local Electric Potential. After determining the pH microsensor disturbance by the electric field at one point, a pH gradient profile was made with the fixed top reference (depth of −5 mm) (Figure 6B, black) and in duplicate with the fixed bottom reference (depth 30 mm) (Figure 6B, yellow).
The signals showed a pattern similar to the signal from the electric field potential when measured against the same reference electrode point at −5 mm (top reference) (Figure 6B, green) or at 30 mm (bottom reference) (Figure 6B, light blue). The local electric field correction was also applied to the pH microsensor measurements, using the EP profiles measured with the same fixed reference electrode as the pH profile (Figure 6C, red). To verify the local electric field potential correction, the pH was measured versus the bottom reference with the microsensor tip placed next to the bottom reference electrode (depth 30 mm), to ensure 0 mV offset (Figure 4), and logged over 7 min right after the pH measurement with the top reference (Figure 6B, black) (Figure S7). The average pH value during that period is indicated with a yellow cross (Figure 6C). The verification point lies exactly on the line of corrected pH values, showing the accuracy of the correction. The corrected values are more constant over the depth of the reactor and show a small gradient underneath the bottom cathode (depth 27 to 33 mm). The pH value in the bulk solution underneath the cathode (depth 33 mm) is similar to the bulk pH (5.7−5.9), while the pH is higher (up to ∼6.2) in the lowest two cathode layers (depth 13 to 30 mm). Farther away from the counter electrode, above 13 mm, the pH is again similar to the bulk pH value. Without applied current, the pH showed fewer gradients than with current (Figure 6C, blue and red).

Intermittent Current or Distance to Reference to Allow Potentiometric Measurements. Next to the EP correction method, two additional approaches for making profiles with potentiometric sensors are applying intermittent current and minimizing the distance to the reference electrode. When no current was applied between the electrodes, both the local EP and the pH offset were minimal (Figures 4 and 6A). Based on this insight, intermittent current was investigated as a method to measure with potentiometric microsensors. In theory, the values measured right after switching off the current should represent the actual values during applied current. To test this, the microsensor pH values were logged during intermittent current with the tip 35 mm from the external reference electrode. As validation, the microsensor pH values were also logged at the same location but with the external reference next to the tip (with 10 mm distance parallel to the electrode surfaces), with a local EP of 0 mV. It was found that it took some time (at least 1 s) after stopping the current before the signal reached validated values representative of the situation with applied current. Simultaneously, after stopping the current, the system gradients caused by the applied current disappeared and bulk conditions were measured. To obtain reliable values with potentiometric measurements during the intervals without current, the sensor should measure values that represent the situation with current on the system, not values that represent the bulk conditions, which are reached without applied current. For the systems described in this study, the intermittent current method was not reliable in some of the tests (SI section "pH microsensor measurement during intermittent current").
To determine the applicability of intermittent current for measuring with potentiometric microsensors, it is recommended to use the validation method described in SI section "pH microsensor measurement during intermittent current".

■ DISCUSSION

Microprofiling in Electrochemical Systems for Local Gradient Measurement. With the adapted setup, microsensor profiles can be made in the electrolyte and through the different porous electrode layers of the cathode chamber while keeping leak-free electrolyte recirculation. With the hydrogen sensor, local hydrogen concentrations could be mapped precisely (Figure 3). For measurements with potentiometric microsensors, an external reference electrode is used. Both the distance to the external reference (Figure 4) and the magnitude of the current (Figure 6A) influence the signal disturbance of microsensors with external reference electrodes.8 Potentiometric microsensor measurements in fields with low electric potential yet significant distance between the microsensor tip and the external reference electrode, as performed in earlier studies,29,30 could give seemingly plausible values, even though the electric field still causes an offset (Figure 6A). Since 59 mV corresponds to 1 pH unit, a 6 mV offset could already cause a measuring error of 0.1 pH unit. Therefore, the signal from microsensors with external reference electrodes needs to be corrected for this disturbance. The suggested correction with the local EP signal21 was tested and validated in this study. The corrected pH profiles show noise, but the corrected ORP profiles do not, indicating that the noise is not caused by the correction itself. Since the ORP profile shows equal potential values for all parallel-connected cathode layers, and the pH validation measurement at the location with an EP of 0 mV showed the same value as the corrected profile, the EP correction for potentiometric microsensor measurements is reliable. Further proof of reliability can be gained from the duplicate measurements. The similarity between duplicate profiles shows the reliability of the method, with small deviations that can be ascribed either to differences in conditions over the measuring time (>2 h) or to invasiveness of cathode piercing by the microsensors. Another method to measure potentiometric signals in systems with high electrolyte resistance is minimizing the distance to the reference electrode.8,22,31 The mV offset between the electric potential microsensor and the fixed reference was 0 mV when the distance was minimized (with both tips at equal distance from the anode, with 10 mm distance between the tips parallel to the electrode surfaces), both with the top reference electrode (depth −5 mm) and the bottom reference electrode (depth 30 mm) (Figure 4). The bottom reference electrode was located in an area with a steep gradient of electric potential but still showed 0 mV offset at the minimum distance between the reference and the microsensor tip. In future studies, placing fixed reference electrodes at additional intersection points is recommended for validation purposes (see also SI section "Considerations for practical applications"). The potentially ideal solution for potentiometric microsensor measurements would be the development of a combined sensor with an integrated internal reference electrode and a long thin tip that allows piercing soft materials. Such a sensor could be used for measurements with the least invasiveness in electrochemical systems.
Unfortunately, such a sensor is not yet commercially available.

Microprofiling Shows Steep Local Gradients. With the method from this study, many useful insights were already gained. One insight gained here is that H2 is stripped, likely due to the CO2 supply in the electrochemical system. The hydrogen concentration is low at the places close to the membrane and influent port (Figure 3). This indicates that a great part of the formed hydrogen is flushed out in the recirculation bottle of the system, where CO2 and N2 are continuously sparged to the reactor (Figure 1). Microsensor measurements can be used to test liquid mixing capabilities in optimized reactor designs by measuring local substrate availability. Furthermore, the microsensor measurements showed that the local hydrogen concentration is highest at the bottom cathode, corresponding with the great share (82%) of the applied current going to that cathode layer. The catholyte recirculation distributes the formed hydrogen evenly through the cathode compartment, and the concentrations are still around 375 μmol/L inside and around the upper two cathode layers (depth of −5 to 25 mm). This hydrogen concentration is 1000 times above the threshold reported in the literature for several hydrogenophilic bacteria32 (assuming a Henry coefficient of 7.7 × 10⁻⁶ mol/(m³ Pa)33). Apart from the hydrogen profile, the ORP and pH profiles also gave interesting insights. The ORP profiles showed great differences with and without current, not only in the cathode but also in the catholyte (Figure 5B). This indicates that applying a current to the system changes the reaction conditions not only within the porous cathode but also in the surrounding liquid. The pH profile showed no gradient when no current was applied to the system, but it showed local differences when current was applied (Figure 6C). The local pH inside and around the cathode layers was higher than the bulk pH, presumably due to proton consumption by the hydrogen evolution reaction. A pH shift can change favorability for microorganisms. For example, a 0.3 pH unit increase causes a 5% decrease of the undissociated fatty acid fraction, and this fraction can inhibit methanogenic activity.34 The insights from these microsensor measurements can be used to adjust the reactor conditions to make them more favorable for desired microorganisms. For system optimization, fluid dynamic studies within microbial electrosynthesis systems are one of the key points.35 The results of this study indicated that hydrogen distribution in the system requires optimization. Microsensor measurements of local conditions are a helpful tool for studying different electrode and flow designs and their effect on potentially limiting conditions.

Outlook for Microsensor Application Possibilities. The current distribution was mainly (82%) toward the bottom cathode, while hydrogen is available in all three cathode layers. Thus, different niches can be found within the cathode compartment, and even within cathode layers, with different local hydrogen concentrations at different depths. Between and within the cathode layers, different availabilities of substrates, electron donors, and products can be expected based on the profiling results. Several modeling studies have calculated the presence of limiting gradients of, e.g., pH and H2 in biofilms.6,36
Thus, the development of a biofilm on the graphite fibers will affect the local conditions even more than in the abiotic situation shown in this study. Verification experiments with microsensors can serve as validation by determining gradients of substrates, products, and local conditions within biofilms. Linking the different local conditions to the performance at the different spots can give many insights for optimization. For example, linking the local H2 and current to microbial activity is insightful for determining the dependence of the microbes on electrical current versus hydrogen as the electron donor. With the method presented in this paper, gradients of H2, O2, H2S, CO2, H2O2, NO2−, pH, ORP, and electric potential19,21,37−45 can likely be measured in stable electrochemical systems with or without biofilms, under anaerobic or even aerobic conditions.

■ CONCLUSIONS

This study showed the successful application of microsensors for the measurement of gradients in electrochemical systems. The reactor, with measuring wells placed perpendicular to the profiling direction, allowed profiling under electrolyte-leak-free recirculating conditions. The presented manuals and video instructions will aid future users in applying this method. Profiles were made of local H2, electric potential, pH, and ORP in the electrolyte and, for the first time, throughout the porous electrodes. For the potentiometric microsensors, a local electric field potential correction was validated as a reliable method to correct for signal disturbance from the electric field. The use of these sensors can be extended to study biofilm gradients and local reactor conditions in electrochemical systems.

Supporting Information: additional figures with signal stabilization measurements, validation measurements with the potentiometric sensor, pH microsensor measurement during intermittent current, considerations for practical applications, and manuals for sensor calibration and profiling (PDF); movie file for constructing the graphite felt layers (MP4); movie file for making a microprofile (MP4); movie file for constructing the electrochemical reactor (MP4).

■ ACKNOWLEDGMENTS

Great thanks are given to Pim de Jager and Plant E for lending us the motor tool and microreference sensor, to Tage Dalsgaard from Unisense for all his support and valuable input, to Hennie van Dorland, Bert Willemsen, Vinnie de Wilde, and Michiel van den Broek for all the technical support with the setup configuration and modification, and to Cees Buisman for revision of the manuscript. Financial support from WIMEK and Chaincraft BV is gratefully acknowledged.
MULTI-INSTRUMENTAL CHARACTERIZATION OF TWO RED PIGMENTS IN FUNERARY ARCHAEOLOGICAL CONTEXTS FROM NORTHERN CHILE

SUMMARY

Analysis of two archaeological red pigments from two cities in northern Chile, Calama (Calama sample) and Iquique (Iquique sample), is reported in the current work. Scanning electron microscopy and energy dispersive X-ray spectrometry (SEM-EDX), powder X-ray diffraction (XRD), and vibrational spectroscopy (IR and Raman) were used for structural studies. Hematite (α-Fe2O3) was the main component of the red color in both pigments. These are the first results reported for both sampled areas, thus helping to clarify the Calama funeral rites and the raw materials used for Chinchorro mummification in the Iquique region. The characterization of the raw materials provides information for future studies focused on hematite mining processes.

Introduction

Pigments from two archaeological sites, in the Calama and Iquique regions, northern Chile (Figure 1a), were chemically identified. The pigment samples were obtained from different geographic areas, and the sites correspond to different cultural periods. The Calama pigment sample (Figure 1b) was found inside a Concholepas concholepas shell during an excavation in the Chorrillos archaeological cemetery, an early Formative site (2750-2150 B.P.), located in the Atacama Desert. The population buried in this cemetery belonged to the early agricultural period and exhibited a 'circular-oblique' cranial deformation trend (Depósito Arqueológico de Calama -DAC- register). Previous studies in Calama (Ogalde et al., 2014) and a review of archaeological collections of this area have shown the presence of shells and pigments among the tomb offerings. This type of offering is common in the Calama area and raises questions about the origin of the pigments. However, systematic studies concerning chemical characterization have not yet been done on this type of mineral. The Iquique pigment sample (Figure 1c) comes from a facial mask of a Chinchorro mummy from the Sermenia site, an archaeological cemetery in the coastal desert.
Light microscopy and SEM-EDX analysis

About 1 mg of the selected pigment was separated using a stereomicroscope (Olympus SZX-7) and then mounted on a stub for direct analysis in an EVO LS scanning electron microscope (SEM). SEM images were recorded at 100× and 600× with secondary and backscattered electron detectors. The samples were also analyzed with an Oxford EDX detector (8.5 WD and 450 kV). The topographic qualitative information, spectrometric data, and semi-quantitative results (detection sensitivity down to 0.1% by weight) were observed and interpreted using the INCA software. This analysis was carried out at the Bioarchaeology Laboratory, Instituto de Alta Investigación, Universidad de Tarapacá, Chile. A morphological and microscopic analysis of diatoms was performed at the Marine Science Laboratories, Universidad Peruana Cayetano Heredia, Lima, Perú.

Infrared analysis

Infrared spectra were run on a Fourier transform infrared (FT-IR) Perkin Elmer Spectrum BX spectrometer equipped with a DTGS detector. The spectral resolution was 2 cm⁻¹; 32 scans were performed. A pellet was prepared from 1 mg of the solid sample dispersed in 200 mg of KBr. The samples were processed at the Facultad de Ciencias, Pontificia Universidad Católica de Valparaíso, Chile.

Raman analysis

About 50 mg of solid sample was placed on a microscope slide, and the Raman spectrum was recorded using a Renishaw Raman Microscope System RM1000 equipped with a diode laser providing the 785 nm line, a Leica microscope, an electrically cooled CCD detector, and a notch filter to eliminate elastic scattering. The spectrum was obtained using a 50× objective. The laser power output was 2.0 mW and the spectral resolution was 2 cm⁻¹. This analysis was performed at the Facultad de Ciencias, Universidad de Chile, Santiago, Chile.

Powder X-ray diffraction analysis

Powder X-ray diffraction patterns of the red pigments were recorded on a Bruker D8 Advance diffractometer using Cu Kα radiation obtained from a source tube operated at 40 kV and 40 mA. All patterns were recorded between 20 and 65° 2θ with a detector slit of 0.6 mm and a scan speed of 0.01°/2 s. This analysis was accomplished at the Facultad de Ciencias, Pontificia Universidad Católica de Valparaíso, Chile.
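As a side note, powder patterns of this kind are indexed via the Bragg relation; a minimal sketch for Cu Kα radiation is given below, where the hematite d-spacings are nominal reference values quoted only for illustration, not values from this study.

import math

WAVELENGTH_CU_KALPHA = 1.5406  # angstroms

def two_theta_deg(d_spacing):
    """Bragg's law, n*lambda = 2d*sin(theta) with n = 1; returns the 2-theta angle in degrees."""
    return 2.0 * math.degrees(math.asin(WAVELENGTH_CU_KALPHA / (2.0 * d_spacing)))

# Strongest hematite reflections (nominal d-spacings, angstroms)
for hkl, d in (("104", 2.70), ("110", 2.52)):
    print(hkl, round(two_theta_deg(d), 1))  # ~33.2 and ~35.6 degrees, inside the 20-65 degree scan range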
Results
The Calama sample was a red pulverized material with amorphous conglomerations (Figure 1b), while the Iquique sample consisted of dark red powder with consolidated granules (Figure 1c). These optical images show that both samples are homogeneous in morphology and color distribution. Figures 2a and 2b are microphotographs obtained with backscattered electrons on homogeneous areas of the Calama and Iquique samples, respectively. SEM of the pigments allowed selecting the most representative areas for EDX analysis. Figure 2c presents the average results of the semi-quantitative EDX analysis of the total area and sub-areas of several granules. The graph shows that iron is the main component in both the Calama and Iquique samples (56 and 58%, respectively), while the second highest peak corresponds to silicon (20 and 15%). The high Si concentration, in addition to the presence of aluminum in both samples, suggests phases of clay and/or aluminosilicate minerals. It is noteworthy that the lower Al concentration in the Iquique sample compared to the Calama sample indicates differences in their geological origins. Furthermore, the Iquique sample presents a high manganese content (10%), not detected in the Calama sample; in the Iquique sample Mn is more abundant than Al (4%). The greater concentration of calcium (5%) in the Iquique sample is also worth noting.

The IR spectrum of the RRUFF standard of hematite is included in Figure 3a, where medium-strong bands at 463 and 544 cm⁻¹, assigned to hematite, are observed. In the Calama sample the bands at 470 and 560 cm⁻¹ are assigned to the hematite chromophore (Figure 3a). Bands observed at 1085 and 1037 cm⁻¹ are assigned to ν(Si-O) bond stretching, while weak bands at 694 and 776 cm⁻¹ and the shoulder at about 797 cm⁻¹ are assigned to deformation modes involving the Si-O moiety and attributable to quartz (SiO2). The band at 3434 cm⁻¹ indicates the presence of a hydroxyl group; its large width is probably due to interactions between the hydroxyl group and clay (ν(SiO-H): 3700-3200 cm⁻¹) or an associated hydroxyl group (subintervals ~3300-3100 cm⁻¹). The band at 1634 cm⁻¹ shows that a fraction of the hydroxyl groups belongs to water, while a band at 898 cm⁻¹ from hydrated iron oxide compounds, such as goethite, is not observed. However, the very weak bands at 2925 and 2956 cm⁻¹, corresponding to aliphatic ν(C-H), suggest the presence of organic materials in the Calama sample (Perez and Martin, 1967a, 1967b; Rendon and Serna, 1981; Schrader, 1995; Stuart, 2004; Toledano, 1988; Vargas-Rodriguez et al., 2008; Zapatero et al., 2000).

In the Iquique sample (Figure 3a) the IR bands at 457 and 532 cm⁻¹ correspond to the hematite chromophore. Bands observed at 1085 and 1034 cm⁻¹ are assigned to ν(Si-O) bond stretching or the Si-OH bond. The weak bands at 878 and 795 cm⁻¹ may come from hydrated iron oxide compounds such as goethite; the weak bands at 697, 779 and 795 cm⁻¹ are ascribed to deformation modes (Si-O) of quartz.
The wide band appearing at ca. 3430 cm⁻¹ points to the presence of a highly associated hydroxyl group, while the band at 1634 cm⁻¹ shows that a fraction of the hydroxyl groups belongs to water. However, the strong bands at 2957, 2917 and 2849 cm⁻¹, corresponding to aliphatic ν(C-H), suggest the presence of organic materials in the Iquique sample, which is important because some bands assigned to bonds of inorganic compounds could be reinterpreted. There are three bands in the zone of the ν(C-H) bond with C sp3 hybridization; therefore, the presence of methyl (-CH3) and methylene (-CH2-) groups is possible, given the occurrence of a band at 1467 cm⁻¹. Indeed, the band at 1569 cm⁻¹ could correspond to the C=O bond, a signal possibly related to the bending vibration at 1421 cm⁻¹ of activated methylene (-CH2-CO-). In this sense, the band at 1322 cm⁻¹ could correspond to -O-CO-CH3 (1380-1365 cm⁻¹) or -CO-CH3 (1360-1355 cm⁻¹) groups, in which case the CH3 vibration band (usually appearing at 1380 cm⁻¹) moves to 1467 cm⁻¹ as indicated above. To complement the information to be discussed below, it should be kept in mind that the S=O bond has several signals at 1225-980 cm⁻¹ and a very intense band in the 1420-1000 cm⁻¹ range (Perez and Martin, 1967a, b; Rendon and Serna, 1981; Toledano, 1988; Schrader, 1995; Coates, 2000; Zapatero et al., 2000; Stuart, 2004; Vargas-Rodriguez et al., 2008).

The Raman spectrum (Figure 3b) of the RRUFF standard hematite showed bands at 610, 496, 408, 291 and 226 cm⁻¹. These signals are also present in both the Calama and Iquique samples. No other signals are visible in the Raman spectra obtained from the samples. This may be due to the intensity of the Fe-O bond signals, which mask other less intense signals, such as those of manganese oxides (Schrader, 1995; Smith and Dent, 2005).

Finally, Figure 5 shows SEM images of two kinds of diatoms found in the red powder of the Calama sample. Diatoms are unicellular photosynthetic organisms consisting of a single silica cell wall composed of two valves called thecae and several girdle bands. The thecae have different sizes (20-200 μm diameter) and are slightly different from each other (Weiner, 2010). Based on these morphological characteristics, the largest and central diatom has been identified as a Surirella (cf. S. wetzelli Hustedt). There is a second, unidentified diatom in Figure 5: the smallest oval at the lower left corner of the picture.

Discussion
The use of the presently studied red pigment in the mortuary Chinchorro tradition is important because it was associated with changes in the chromatic preparation of the body, with large social implications. The high manganese content in the Iquique sample (Figure 2) could be ascribed to the use of Mn-containing materials during the preparation of the Chinchorro mummies. Over the surveyed area, the use of black pigment has been reported for the Archaic Acha-3 site and in Camarones 14 and 17, where the Chinchorro tradition started with the 'black mummies'. In the middle Archaic period (7000-5000 BP), this type of mummification was already present at the Chinchorro 1 and Maestranza 1 sites, and Mn appears in modeled artificial mummies. During the late Archaic period (5000-3500 BP) black color was used on vegetal mats and is part of the filling of artificial mummies. On the other hand, during the late Archaic period the use of red was fully incorporated in the Chinchorro mummification techniques, characterized as 'red mummies' (Arriaza, 1994, 1995; Muñoz et al., 1993; Standen et al., 2004; Arriaza et al., 2005, 2006, 2008a, b, 2012; Sepúlveda et al., 2013, 2014). In the Arica area, over 400 km north of Iquique, mummies with black/reddish faces are found at many sites; however, few of them have been thoroughly analyzed.
In a study of the coating of eleven black mummies, Arriaza et al. (2006) reported 36% Mn oxide and 8.4% Fe oxide. In six red mummies they reported 60.7% Mn oxide and 4.3% Fe oxide for the head ('helmet') and facial mask, respectively. Both types of mummies contain Fe and Mn in different percentages, which can be associated with regional and chronological variations.

The identification of hematite in the present study contributes hard data to understanding the process of chromatic change and the manipulation of the bodies. Also, in the current Iquique case, the presence of Mn could be explained either by an intentional mixture of raw materials during body preparation or by the natural composition of the red minerals. This mortuary practice could explain the presence of Mn in the mask of the Chinchorro mummy from Iquique. In addition, several bands observed in the IR spectrum of the Iquique sample suggest the presence of organic materials. The intense Chinchorro manipulation of the body could explain the presence of these organic compounds. However, Mn is also present in large amounts in some red pigments in the Nasca region (Vaughn et al., 2005; Eerkens et al., 2014). Thus, the mixture of Mn and red pigments in the Iquique Chinchorro mummy sample could be the result of the geochemical origin of the hematite deposit. In this regard, the XRD patterns show quartz, kutnohorite and romerite. The greater concentrations of Ca and Mn found by the EDX analysis of the Iquique sample are ascribed to these minerals, which are associated with mummification techniques and/or geochemical origin. Romerite might be responsible for the intense ν(S=O) signal detected in the IR analysis. Similarly, the water in romerite could be related to the particular signal in the 3600-2700 cm⁻¹ range of the hydroxyl group from water in the IR spectrum of the Iquique sample (Figure 3a).

Another important fact about red and black coastal pigments is that during the middle Archaic period (7000-5000 BP) they are found deposited on sea shells as funeral offerings (Sepúlveda et al., 2014). This is precisely the 'funeral situation' of the hematite identified in the Calama sample from the Chorrillos archaeological cemetery, located more than 279 km south of Iquique. In the Calama sample a diatom (Surirella, cf. S. wetzelli Hustedt) was discovered, which is normally found in waters with a high salt content. Also, the XRD patterns show quartz and peaks attributed to albite and cordierite, coherent with the EDX spectra. These minerals could be related to the geochemical origin of the hematite deposits in the Calama area, which have silicates rich in Al and water with a high salt content. Previous studies in Calama (Ogalde et al., 2014) have shown the presence in grave goods of a yellow chromophore called orpiment (As2S3), which is potentially toxic. Subsequently, red, black and yellow pigments were deposited in valves of C. concholepas as funerary offerings at the Archaic and Formative cemeteries of the studied area.
In ancient times people handled different types of minerals with various degrees of toxicity. For instance, in Huancavelica, Lima, Peru, cinnabar (HgS) was used as a red pigment to decorate pottery, for textile dyes, etc. Consequently, the handling of toxic substances could have been the outcome of trial and error during the pre-Columbian era. This type of research could help in understanding the mining processes, as well as the mineral sources and distribution routes (Salazar et al., 2010a, 2010b, 2013; Salazar and Vilches, 2014). This is of interest considering that the extraction of hematite is the first evidence of mining in South America (Vaughn et al., 2007, 2013; Eerkens et al., 2009).

Final Comments
The Iquique sample studied herein is associated with the Archaic period, and the Calama sample with the Formative period of northern Chile. In summary, the results allow us to conclude that hematite was used in the Chinchorro mummification tradition in the Iquique region. In the Calama area, ancient people also used the same red pigment as part of their funerary offerings for the care of the dead in the afterlife. In both cases these red pigments are specifically related to mortuary practices, and in both cases the red chromophore was hematite. This cultural relationship between pigments and the afterlife has a long history in northern Chile and the Andes. The funerary character, toxicity and antiquity of the pigments are important, since they allow investigating the acquisition, processing and ritual aspects of these materials. Future studies could focus on the processes necessary for mining these pigments and their social implications.

Figure 1. a) Map of northern Chile showing the cities of Calama and Iquique, where the pigment samples were found. b) Calama sample (NA-4, DAC register), which was found in a Concholepas concholepas shell, and c) Iquique sample collected from the nasal area of a Chinchorro red mummy.
Figure 2. Micrographs obtained with backscattered electrons on granules from a) Calama and b) Iquique samples. c) Average results of EDX analysis of the archaeological pigments (carbon and oxygen are not displayed in this chart).
Figure 3. a) Infrared and b) Raman spectra obtained from the Calama and Iquique samples along with a hematite spectral reference.
Figure 4. XRD powder diffraction patterns from a) Calama and b) Iquique samples.
G-OPTIMAL DESIGN OF A NON-LINEAR MODEL TO INCREASE PURITY LEVELS OF SILICON DIOXIDE

INTRODUCTION
A series of statistical applications known as experimental design is used to classify and quantify the correlation between the input and output variables in the process under study, with the aim of finding the settings and conditions under which the process is optimized [1]. An experiment aims to obtain information or facts that follow the study's objectives while requiring a minimum of time, cost, effort, and experimental materials [2]. Design of experiments (DOE) is a statistical method widely used in various scientific and industrial fields to support the design, development, and optimization of items and processes [3]. One use of experimental design is in experiments on silicon dioxide.

Silica, often known as silicon dioxide (SiO2), is a mineral commonly found on Earth and obtained from mining materials. Because it has many uses across various fields, silica is a natural resource that can be exhausted; as a result, its selling price is extremely high. Efforts must be made to maintain the availability of silica and to produce silica with a high purity level, capable of providing opportunities in business and industry. Several studies have shown that silica can be made from materials found around humans, such as rice husks, corn cobs, palm oil, bamboo leaves, and others, as sources of plant-based silica. Based on previous research, the silica content in rice husk ash is 90-98% of dry weight [4]. The second highest silica content is from bamboo leaf ash, at 75.9% [5], while straw ash has a silica content of 75% [6]. The highest silica content is thus found in rice husks. Compared to mineral silica, silica made from rice husk has several advantages: it is more reactive, has more refined grains, is easier to produce, costs less, and is supported by the availability of abundant and renewable raw materials [7]. Complete combustion of rice husk produces ash containing 90% to 98% silica by dry weight. Silica with low purity can be improved through a purification process by adjusting the influencing factors; the right combination of these factors will further increase silica purity.

Research or experiments that aim to determine the effects of several experimental factors can be studied through a design processed using optimal design theory [8]. An optimal design estimates the parameters without bias and with the smallest variance, producing accurate statistical inference [9]. An optimal design is needed to determine the points to be tested so as to optimize the design with respect to the desired criteria; the optimal design depends on the model used and the number of observations desired, with estimation carried out using optimality criteria. The G-optimal design minimizes the maximum mean squared prediction error in the experimental region by minimizing the maximum variance of each predicted value [10]. A non-linear model is a relationship between the response variable and the explanatory variable that is not linear in the parameters. The Taylor series is one approach that can be used to approximate non-linear equations through linear equations.
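Since the purity response approaches an asymptote, the paper linearizes it with a Taylor expansion. As a rough illustration of that idea, the sketch below compares a generic exponential-decay purity curve with its second-order Taylor approximation around an expansion point; the functional form and coefficient are placeholders chosen for illustration, not the fitted values reported later in this paper.

```python
import numpy as np

# Hypothetical exponential-decay purity response: purity rises toward an
# asymptote of 1 (100%) as temperature x increases. The coefficient b1 is
# an illustrative placeholder, not the study's fitted parameter.
b1 = 0.005

def purity(x):
    return 1.0 - np.exp(-b1 * x)

def taylor2(x, a):
    """Second-order Taylor approximation of purity() around x = a."""
    f0 = 1.0 - np.exp(-b1 * a)
    f1 = b1 * np.exp(-b1 * a)        # first derivative at a
    f2 = -b1 ** 2 * np.exp(-b1 * a)  # second derivative at a
    return f0 + f1 * (x - a) + 0.5 * f2 * (x - a) ** 2

# Compare the exact curve and its local quadratic over the study's range.
for x in np.linspace(800, 900, 5):
    print(f"x = {x:5.1f}  exact = {purity(x):.6f}  taylor = {taylor2(x, 850.0):.6f}")
```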
This research is a case study of silicon dioxide, using temperature and the rate of temperature increase as the factors that affect the purity of silicon dioxide. The design used is an optimal design with the G-optimal criterion on a non-linear model. Optimal design leverages computational power and algorithms to generate a broad class of designs that can be used for many problems [12]. The VNS algorithm will be used to select the design point candidates. The VNS algorithm explores neighborhoods sequentially, from the neighborhood with the fewest solutions to the neighborhood with the most solutions [13].

Model Used
The model used in this study is a non-linear model with temperature (°C) and rate of temperature rise (°C/min) as factors and silica purity level (percent) as the response. The higher the temperature at which silicon dioxide is burned, the higher the purity of silicon dioxide. However, the increase in silicon dioxide purity follows a non-linear trend, with the increase becoming smaller as it approaches 100 percent purity [11]. Exponential decay is a non-linear model used in chemical kinetics to approximate an asymptotic value [8]. The non-linear exponential decay model used here expresses the expected value of the response f(x, z) in terms of a constant β0, parameters β1 and β2, the temperature x, and the rate of temperature rise z.

The non-linear model is more complicated than the linear model, so it requires more analysis in the research process. One method for approximating non-linear equations through linear equations is the Taylor series. The second-order multivariable Taylor expansion of a function f(x, y) around a point (a, b) is written as

f(x, y) ≈ f(a, b) + f_x(a, b)(x - a) + f_y(a, b)(y - b) + (1/2)[f_xx(a, b)(x - a)² + 2 f_xy(a, b)(x - a)(y - b) + f_yy(a, b)(y - b)²],   (2)

where f(x, y) is a function of x and y, and a and b are constants.

Steps of the Variable Neighborhood Search (VNS) Algorithm
The steps for choosing the optimal design points are as follows (a computational sketch of the SPV and G-efficiency quantities appears after the results below):
1. Make a list of candidate sets based on combinations of temperature and rate of temperature rise. The temperature is varied from 800 to 900 °C, and the rate of temperature rise from 1.67 °C/min to 5 °C/min in 0.5 °C/min increments.
2. Make a starting (initial) design with the following steps:
a. Choose points randomly from the candidate sets to create the initial design.
b. Calculate the scaled prediction variance of the predicted values, which can be computed as [14]
SPV(x) = N f(x)ᵀ(XᵀX)⁻¹ f(x),
where N is the number of design points, f(x) is the model expansion of point x, and X is the design matrix.
c. In the initial design, record the highest prediction variance, max SPV(x), as the current best value.
3. Explore the neighborhoods defined for the Variable Neighborhood Search algorithm with the following steps:
a. Neighborhood N0
1) Add a point chosen at random from the list of candidate sets to the initial design.
2) Using the same calculation as in step 2, calculate the scaled prediction variance of the current design.
3) Record the maximum prediction variance, max SPV(x), as the current solution.
4) Compare the design in the N0 neighborhood to the initial design, choosing the one with the smaller maximum variance. If the design in the N0 neighborhood is not better than the initial one, the next neighborhood is explored.
b. Neighborhood N1
1) Replace two points of design N1 with two points from the candidate sets.
2) Using the same calculation as in step 2, calculate the prediction variance of the current design.
3) Record the maximum prediction variance, max SPV(x), as the current solution.
4) Compare the design in the N1 neighborhood with the N0 design, and select the design with the smaller maximum variance.
4. Repeat steps 2 and 3 for the subsequent neighborhoods; the third stage is repeated a large number of times.
5. Calculate the G-efficiency value of the chosen design. For the G-optimal design, the G-efficiency formula is [15]
G-eff = 100 · p / max_{x∈X} SPV(x),
where p is the number of parameters in the model and is a lower bound for max_{x∈X} SPV(x).

Non-linear Model Approach
A Taylor series approach with a k-th order polynomial is used to approximate the model employed in this study. The approximation order with the smallest error is selected, using the model's MSE (Mean Square Error) value to determine the order. The MSE values for the SiO2 purity levels obtained from the Taylor approximation simulation using β1 = 0.005 and β2 = 0.005 are given in Table 1. Based on the simulation in Table 1, the selected Taylor polynomial is of second order: it has a fairly small MSE and an uncomplicated model. Taylor's model uses Equation (2) with the second order as follows:
f(x, z) = 0.79413127 + 0.00037952x - 0.000379z - 1.8130105·10⁻⁷x² + 3.62602·10⁻⁷xz - 1.8130105·10⁻⁷z².

G-Optimal Design of Silicon Dioxide Purity Levels
This study produced design points for SiO2 purity levels in the temperature range of 800 °C to 900 °C with different numbers of points, namely designs with seven, 12, and 20 design points. The best design results are presented in Table 2. The visualization of the design points obtained at temperatures of 800 °C to 900 °C with seven design points can be seen in Figure 1. The lowest design point in the figure is at 800 °C with a rate of temperature increase of 1.67 °C/min, while the highest is at 900 °C with a rate of temperature increase of 3.33 °C/min.

The temperature points of 815 °C and 825 °C were then changed to the lowest temperature point obtained, 800 °C. The temperature-rise-rate points of 3.34 °C/min and 2.50 °C/min were converted to the maximum rate obtained, 3.50 °C/min, to check whether these designs improve on the design obtained. The results are presented in Table 3. Alternative 1, which replaced the temperature points of 815 °C and 825 °C with the lowest temperature point of 800 °C, resulted in a lower G-efficiency value of 85.90%. Alternative 2, obtained by replacing the rate points of 3.34 °C/min and 2.50 °C/min with the maximum rate of 3.50 °C/min, also obtains a smaller G-efficiency of 85.91%. This confirms that the G-optimal design for SiO2 purity levels with 7 points is the best design, with a high G-efficiency value of 96.41%.

The design points for the SiO2 purity level in the temperature range of 800 °C to 900 °C with 12 and 20 design points can be seen in Tables 4 and 5 below. Table 4 shows the optimal design points using 12 design points, with a G-efficiency value of 88.22%. The best design points using 20 design points are presented in Table 5; based on Table 5, the G-efficiency value obtained is 76.38%.
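The SPV and G-efficiency quantities driving these comparisons can be illustrated with a short computational sketch. The candidate grid below follows the ranges used in this study, but the 7-point design shown is an arbitrary illustrative choice rather than the reported optimum, and the factor coding onto [-1, 1] is a standard DOE convention assumed here.

```python
import numpy as np
from itertools import product

def code(v, lo, hi):
    """Code a factor level onto [-1, 1], a standard DOE convention."""
    return 2.0 * (v - lo) / (hi - lo) - 1.0

def expand(t, r):
    # Second-order model expansion f(x): intercept, linear, interaction, quadratic.
    x1, x2 = code(t, 800.0, 900.0), code(r, 1.67, 5.0)
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def g_metrics(design, candidates):
    """Maximum scaled prediction variance over the candidates, and G-efficiency."""
    X = np.array([expand(*pt) for pt in design])   # design matrix
    n, p = X.shape                                 # n design points, p parameters
    info_inv = np.linalg.inv(X.T @ X)
    spv = [n * expand(*c) @ info_inv @ expand(*c) for c in candidates]
    return max(spv), 100.0 * p / max(spv)          # p is the lower bound of max SPV

# Candidate grid: temperature 800-900 C, heating rate 1.67-5.0 C/min.
candidates = list(product(np.linspace(800, 900, 11), np.arange(1.67, 5.01, 0.5)))

# An illustrative 7-point design over the region (not the reported optimum).
design = [(800, 1.67), (900, 1.67), (800, 5.0), (900, 5.0),
          (850, 3.33), (800, 3.33), (900, 3.33)]
max_spv, g_eff = g_metrics(design, candidates)
print(f"max SPV = {max_spv:.2f}, G-efficiency = {g_eff:.1f}%")
```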
Figure 2 shows the visualization of the design points obtained at temperatures of 800 °C to 900 °C with 12 and 20 design points. The figure shows that the lowest temperature for the 12-point design is 800 °C, while the highest is 895 °C; the lowest rate of temperature rise is 1.67 °C/min and the largest is 4.50 °C/min. The design with 20 points shows that the lowest temperature is 800 °C and the highest is 900 °C; the lowest rate of temperature rise is 1.67 °C/min and the largest is 5 °C/min.

The DETMAX algorithm in the SAS program was used as a benchmark for this study's search for the best design under the G-optimal criterion. The results of the VNS algorithm are compared with the optimal design of the DETMAX algorithm to test whether the VNS design gives a better optimal design. The point exchange process carried out by the DETMAX algorithm consists of adding a design point to the initial design and then removing a current design point. The optimal design results using the DETMAX algorithm can be seen in Tables 6-8. Based on Tables 6-8, it can be concluded that the optimal design using the DETMAX algorithm has a lower G-efficiency value than the optimal design obtained using the VNS algorithm. Figures 2 and 3 show that the G-optimal design points obtained using the DETMAX and VNS algorithms share a common feature: the design points obtained are very diverse and irregular. Based on the results of this study, it can be concluded that the greater the number of points used, the greater the maximum prediction variance, so that the G-efficiency obtained is smaller. The best G-optimal design is the design using 7 points, with a G-efficiency value of 96.41%.

CONCLUSIONS
The best design points obtained from the G-optimal design of the relationship between temperature (°C) and the rate of temperature increase (°C/min) on the response variable, the purity level of SiO2, in the temperature range of 800 °C to 900 °C, are the point pairs 800 °C and 1.67 °C/min, 800 °C and 2.17 °C/min, 815 °C and 2.50 °C/min, 825 °C and 2.00 °C/min, 845 °C and 2.34 °C/min, 895 °C and 3.34 °C/min, and 900 °C and 3.50 °C/min, with a G-efficiency of 96.41%.
Applicability of Becker's theory of allocation of time in modelling married women's allocation of time between household duties and labour force participation in Zimbabwe

With the rise in women's participation in the labour force and gender equality campaigns on the one hand, and cultural norms which characterise women as homemakers on the other, most married women often find themselves in a dilemma as to how to allocate their time among competing needs. This paper used a theoretical approach in reviewing the applicability of the proposals of Becker's allocation of time theory to married women's allocation of time between household duties and labour force participation in the Zimbabwean situation. It was concluded that, though the model ignores the cultural norms of assigning household roles to a specific gender, it explained to a great extent the trends observed in which women spend more time on household chores, in which they have a comparative advantage, as opposed to their male counterparts. The substitution and income effects explained in this model are also applicable to the preferences and patterns of time allocation by married women when faced with a change in wages.

Introduction
With the rise in women's participation in the labour force and gender equality campaigns on the one hand, and cultural norms which characterise women as homemakers on the other, most married women often find themselves in a dilemma as to how to allocate their time among competing needs. Becker's theory of allocation of time provides a basic theoretical analysis of choice that includes the cost of time on the same footing as the cost of market goods [1].

Objectives
This paper serves to discuss Becker's theory of allocation of time and the extent to which it models decisions made by married women as they allocate time between household duties and labour force participation.

Methodological issues
The paper is purely qualitative and was constructed from the vast literature surrounding Becker's theory and from knowledge of the participation of married women in labour markets in Zimbabwe. The analysis was prompted by the idea of matching economic theory on labour to the situation obtaining in African countries, hence the case of Zimbabwe. A general country-level approach was used, with no questionnaires administered to married women.
Becker's Theory
Becker's theory assumes that individuals within a family make informed and rational decisions resulting in the attainment of maximum utility through combining time and market goods to produce more basic commodities. It also assumes that individuals decide whether or not to participate in the labour market by comparing the value of their time to the value they place on time spent at home [1][2]. Household duties include activities like babysitting, cleaning, ironing and cooking, to which no direct income is attributed, whereas labour force participation or market activity involves trading labour for a wage.

According to Becker's theory, households maximise utility functions of the form U(Z_1, ..., Z_m). Each commodity Z_i is produced by the household using a production function Z_i = f_i(X_i, T_i), i = 1, ..., m, where each X_i is a bundle of goods purchased at the vector of prices p_i and T_i is the time input into producing Z_i. The resource constraint is Σ p_i X_i = I = wT_w + V, where I is money income, w is the wage rate per unit of working time T_w, and V is the amount of unearned income accruing to the household [1][3][4]. In this setting, the money income forgone in doing non-market activities measures the cost of obtaining additional utility.

Specialisation in Households
To a great extent the model explains the allocation of time by married women: on average most married women earn less than their husbands and hence spend more time carrying out household duties. Becker's theory advocates specialisation and division of labour within households. Assuming that male and female time are perfect substitutes in home production, it is more efficient to specialise: the partner with a comparative advantage in domestic production is likely to give up market work altogether and concentrate on domestic duties [1]. A numerical sketch of this comparative-advantage logic follows below.

Women are generally viewed as having a comparative advantage in household duties, especially minding children, and thus commit more of their time to these, leaving market activities to men. This comparative advantage is even greater during the early stages of a child's life, as indicated for example by their biological ability to breastfeed babies. The average earnings of men have over the past years been higher than those of women, though some scholars attribute this to discrimination against women [1]. This, according to Becker, explains why most women spend more time on household chores than on labour force participation. However, according to Maponga and Mushaka (2015), even in circumstances where women earn more income than their husbands, married women still allocated a greater proportion of time to household duties, though not as much as women who were not employed at all.

An increase in labour force participation by women in Zimbabwe has resulted in some instances where the wife is the sole breadwinner, looking after the family including an unemployed husband. This scenario naturally turns the husband to babysitting and household duties whilst the wife is at work. Though the wife may still be involved in some household duties, the majority of these would be carried out by the husband, as he has the lower opportunity cost for his time.
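To make the comparative-advantage argument concrete, consider a stylized numerical sketch; all wages and home productivities below are invented for illustration and are not drawn from Zimbabwean data.

```python
# Stylized sketch of Becker-style specialisation: the partner whose market
# wage is lowest relative to their home productivity has the comparative
# advantage in household production. All numbers are invented for illustration.

def opportunity_cost(wage, home_productivity):
    """Cost of an hour spent at home, measured in forgone market earnings
    per unit of home output produced."""
    return wage / home_productivity

household = {
    "wife":    {"wage": 4.0, "home_productivity": 2.0},
    "husband": {"wage": 6.0, "home_productivity": 1.5},
}

costs = {name: opportunity_cost(m["wage"], m["home_productivity"])
         for name, m in household.items()}
home_specialist = min(costs, key=costs.get)
print(f"Opportunity costs of home time: {costs}")
print(f"Comparative advantage in household production: {home_specialist}")
```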
In Becker's view, 'the member specialising in household skills may enter the labour market part time if the domestic workload permits' [6]. This assertion holds for the Zimbabwean situation, where most people's incomes are far below the poverty datum line. Married women have been left with no choice but to increase their labour force participation so as to complement their husbands' earnings. Married women in farming communities, for example, have been engaged in paid piece jobs to help generate income for their households.

The Substitution and Income Effect
According to Becker's substitution effect, an increase in wages would, at given levels of income, increase married women's participation in the labour force at the expense of household duties [2]. In Zimbabwe the increase in wages has seen increased participation of married women in previously male-dominated sectors such as mining and manufacturing. An increase in wages raises the opportunity cost of non-market activities, making household duties more expensive; this compels women to trade their labour for money income at the expense of household duties. This also explains the increase in the number of women aiming for higher decision-making posts, which are highly remunerated.

In contrast, at higher income levels the substitution effect will be outweighed by the income effect. An increase in the husband's income may also lead the household to want 'high quality' children, who require more time [2]. This would result in the married woman devoting much of her time to babysitting, since the household can now afford to forgo the income she could have earned from work. In other words, the household would start to invest its time in more time-intensive activities owing to the extra income earned. A stylized numerical sketch of these two effects appears below, after the discussion of cultural norms.

Cultural Norms
However, this model fails to account for cultural norms as a factor affecting time allocation decisions by married women. In Zimbabwe women are expected by culture to perform household chores and are socialised from an early stage as such. On the other hand, men are seen as the breadwinners and providers for the family and hence have to sell their labour in order to sustain their families. Historically, women would stay at home looking after the children and doing household duties whilst their husbands were at work. Some families would even discriminate against the girl child and only empower the boy child through education, so as to prepare him to look after his family when he grows up [5].

Based on this cultural background, there have been many cases where, even when the married woman has a comparative advantage in market activities (earning more money than her husband), she would still commit relatively more time to household duties. Specialisation, as advocated by Becker, does not apply in such circumstances, as other forces are at play. Becker's model would be fully applicable in a Zimbabwean situation only if there were a transformation in the way children are socialised.
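Returning to the substitution and income effects discussed above, a stylized Cobb-Douglas labour-supply sketch can make both mechanisms visible; the functional form and all parameter values are illustrative assumptions, not part of Becker's original exposition.

```python
# Stylized Cobb-Douglas labour-supply sketch: U(C, L) = C**a * L**(1 - a),
# with consumption C = w*h + V (own wage w, market hours h, other household
# income V) and home/leisure time L = T - h. The closed form below follows
# from the first-order condition; all parameter values are illustrative.

T, a = 16.0, 0.5  # available hours per day, weight on consumption

def market_hours(w, V):
    """Utility-maximising market hours, with a corner solution at h = 0."""
    return max(0.0, a * T - (1.0 - a) * V / w)

# Substitution effect: a higher own wage raises market hours.
for w in (1.0, 2.0, 4.0):
    print(f"w = {w:3.1f}, V = 20 -> hours = {market_hours(w, 20.0):5.2f}")

# Income effect: higher partner income V pulls hours back toward home duties.
for V in (0.0, 20.0, 60.0):
    print(f"w = 2.0, V = {V:4.1f} -> hours = {market_hours(2.0, V):5.2f}")
```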
Substitutes for Household Chores
Becker's model ignores the use of substitutes for household duties, such as house maids and labour-saving equipment like automated cookers, dishwashing machines and washing machines. The model is limited to household duties being carried out either by the husband or the wife. The income effect, according to Becker, would result in the married woman committing more time to household duties, but this might not be the case, since higher-income households can afford house maids and labour-saving equipment, leaving more time for work and leisure. However, Korenman et al. (2005) argue that household duties such as child minding do not have good substitutes and would rather be carried out by either of the parents, especially the mother.

Possibility of Simultaneity
Household duties and labour force participation are treated in the model as mutually exclusive. This may not apply in the 21st century, since there have been great improvements in making the working environment friendlier to nursing mothers. Married women in some jobs can bring their babies to work and leave them in mothers' rooms under the watchful eye of an attendant, allowing them to combine nursing and work activities in a sense. At Lafarge Cement Zimbabwe, for example, there are mothers' rooms which are conducive for women to regularly breastfeed and play with their children whilst at work. Many organisations also allow married women to bring their children to workshops, especially those held far from their homes over several days.

The Divorce Factor
Another factor, as asserted by Iversen and Rosenbluth (2003), is divorce. In the 1950s, when this theory was founded, divorce in the United States of America, for example, occurred in roughly one in five new marriages; it is now approximately one in two. In light of such statistics, married women, even those with a comparative disadvantage in market activities, would want to engage in labour force participation as a precautionary measure in the event of a divorce. They would want to be economically independent and have options outside marriage. In such cases specialisation becomes almost impossible, whilst time allocation to market activities becomes a contentious issue.

Conclusion
In conclusion, the statement that "Becker's theory of allocation of time better models decisions made by married women as they allocate time between household duties and labour force participation" is generally true for most married women. This is clearly explained through the specialisation concept, whereby married women spend more time on household duties relative to men due to their comparative advantage in such activities. The substitution and income effects explained in this model are also applicable to the preferences and patterns of time allocation by married women when faced with a change in wages. The model, however, ignores the cultural norms of assigning household roles to a specific gender.
Area for Further Study
The researcher acknowledges the need to quantify the effects that cultural norms, women's empowerment campaigns and socialisation have on the labour participation of married women. There is also a need to examine the actual cause of the low female labour force participation rate, answering the question: is it because of discrimination or because of comparative advantage that women spend relatively more time on household chores than their male counterparts? Such recommended studies should be able to differentiate the factors affecting rural married women from those affecting urban married women.
U-PASS: an Uncertainty-guided deep learning Pipeline for Automated Sleep Staging
As machine learning becomes increasingly prevalent in critical fields such as healthcare, ensuring the safety and reliability of machine learning systems becomes paramount. A key component of reliability is the ability to estimate uncertainty, which enables the identification of areas of high and low confidence and helps to minimize the risk of error. In this study, we propose a machine learning pipeline called U-PASS, tailored for clinical applications, that incorporates uncertainty estimation at every stage of the process, including data acquisition, training, and model deployment. The training process is divided into a supervised pre-training step and a semi-supervised finetuning step. We apply our uncertainty-guided deep learning pipeline to the challenging problem of sleep staging and demonstrate that it systematically improves performance at every stage. By optimizing the training dataset, actively seeking informative samples, and deferring the most uncertain samples to an expert, we achieve an expert-level accuracy of 85% on a challenging clinical dataset of elderly sleep apnea patients, representing a significant improvement over the baseline accuracy of 75%. U-PASS represents a promising approach to incorporating uncertainty estimation into machine learning pipelines, thereby improving their reliability and unlocking their potential in clinical settings.

Introduction
Machine learning has ushered in a new era in healthcare, providing opportunities for remote monitoring and computer-aided diagnosis. Although machine learning models perform with human-expert-level accuracy in controlled tasks [1,2,3,4,5,6], they have seen limited integration into clinical practice. The challenges to embedding these models in healthcare have shifted to providing guarantees of reliability and explainability and estimates of uncertainty [7,8]. These factors are essential to build trust with clinicians and promote the adoption of machine learning in clinical settings [9,10,11].

Uncertainty or confidence estimates are needed at all stages of the machine learning workflow: in data acquisition, at training time, and at deployment time. In data acquisition and in training, it is important to evaluate the usefulness of the information that is utilized to train the machine learning model. Feeding too much data into a machine learning model is sub-optimal both financially and in terms of model performance [12,13]. Medical data is often expensive to acquire, requiring specialized equipment to measure each parameter and expert knowledge to interpret the data. Moreover, feeding uninformative data to the model can deteriorate performance, because it adds noise without adding information [14]. The same principles apply to active learning [15,16], a training strategy in which the model actively queries an expert to label the extra data points that would be most informative for improving the model. At deployment time, typical deep learning models always make a prediction, no matter how uncertain they are. Such systems are not usable in high-stakes medical applications: if the machine learning model does not indicate when a prediction is uncertain, a clinician cannot rely on it, as all outputs are equally likely to be incorrect [17].
Consequently, we propose that uncertainty estimation provides a principled approach to solving the aforementioned challenges in order to successfully integrate a machine learning workflow in the clinic. A variety of uncertainty estimation methods have been studied in the context of machine learning and deep learning [18]. In this work, we take a leap forward, leveraging the uncertainty estimates to maximize the performance and reliability of machine learning models. We aim to develop a framework that improves predictions at every stage of the machine learning process. It should be model-agnostic, as it needs to be applicable to various clinical machine learning models and problems. We fulfill these goals by building an uncertainty-guided machine learning pipeline. Our pipeline follows a data-centric approach and integrates uncertainty into data acquisition, training, active learning, and model deployment. We show how this greatly improves machine learning predictions and overall model reliability at different levels: 1. we improve the model through the quality of the training data, 2. we finetune the model to make better predictions on difficult examples, and 3. we reduce mistakes by deferring predictions on ambiguous data. Our pipeline represents a crucial step towards the integration of machine learning in clinical settings.

We demonstrate its effectiveness on the challenging task of sleep staging, a clinical annotation task characterized by high levels of uncertainty and known for its relatively low inter-rater agreement [10,19]. The gold standard for diagnosing sleep-wake disturbances is a polysomnography (PSG) study, which involves a fully monitored overnight stay in the hospital. Various physiological signals are recorded, including electroencephalography (EEG), electro-oculography (EOG) and electromyography (EMG) signals. These full-night recordings are segmented into 30-second segments, which are then labeled by trained clinicians. Each segment is labeled as one of five sleep stages according to clinical standards [20,21]. This so-called sleep staging process (see Figure 1a) is labor-intensive and subject to inter-rater variability, so automated approaches have been researched to help humans in this task.

Numerous deep learning methods for sleep staging have recently been developed [3,4,5,6]. These state-of-the-art sleep staging models achieve a performance similar to that of human experts. However, state-of-the-art models are mostly validated on healthy adults and typically perform poorly on diseased subjects [10]. Disease influences brain activity and, consequently, PSG recordings, creating uncertainty in the data. Moreover, models trained on data acquired with one measurement setup or device may not generalize well to data obtained using a different measurement protocol, resulting in the well-known distribution shift problem [22,23,24,25,26,27]. Hence, measurement factors can also contribute to uncertainty. Leveraging uncertainty estimation tools can help identify these uncertainties, which are likely to cause poor-quality predictions. As such, uncertainty estimates enhance model transparency and allow to actively query a sleep expert to label uncertain samples. Hence, by integrating uncertainty estimation into sleep staging, our uncertainty-guided deep learning pipeline for sleep staging (U-PASS) improves both the trust in and the performance of the model. The pipeline is more generally applicable to machine learning for clinical applications.
Figure 1b. A. At training time, data uncertainty is tracked to (1) select which channels (or features) to use and (2) remove uninformative data. B. After training, active learning is used to finetune the model. Model uncertainty is tracked to (3) identify the most informative samples to query. C. At evaluation or deployment time, the most uncertain samples are deferred to the clinician (4). Used symbols: p is the model output, p_y is the model output for the true class y, D_train is the training dataset and x_test is a test sample.

Data and study cohorts
This study uses PSG recordings of 90 patients [23], recorded at the sleep laboratory of the University Hospitals Leuven (UZ Leuven) from January 2021 to September 2022. We included all patients over 60 years of age with suspicion of sleep apnea, a common sleep disorder in which breathing repeatedly stops during sleep. For each patient, the PSG recording includes EOG, EMG and a total of 6 EEG channels, all measured at 500 Hz. 45 recordings are used as the training set, and 45 recordings are used as the test set and for active learning. The dataset was manually annotated by an expert sleep scorer according to the AASM standard [20]. The sleep study was conducted in accordance with the Declaration of Helsinki, and the protocol with registration number S64190/B3222020000148 was approved by the Ethics Committee Ethische Commissie Onderzoek UZ/KU Leuven.

Model and training
U-PASS can be applied to any machine learning model and problem, as we designed it to be model-agnostic, with no requirements or adaptations to the network architecture or training procedure. In this paper, we use it in a sleep staging context with a state-of-the-art sleep staging model, SeqSleepNet [3]. This deep neural network has a sequence-to-sequence classification scheme: it transforms a sequence of adjacent raw data segments into a corresponding sequence of outputs, in this case sleep stages. The inputs are presented as time-frequency images, spectrograms of each 30-second segment. The sequence length is a parameter, which is fixed to 10 in this study. SeqSleepNet is a hierarchical architecture, composed of a block of layers that processes individual segments and a block that works at the sequence level. The segment processing block comprises a number of frequency filters, followed by a recurrent neural network (RNN) at the segment level. The outputs from the segment processing block are then concatenated into a sequence of inputs for the sequence processing block. A sequence-level RNN transforms this input sequence into an output sequence, which is mapped to a sequence of sleep stages by a softmax layer. The optimization and training parameters in this study are identical to those in the original paper [3]: L2 regularization is applied, and the Adam optimizer is used with a learning rate of 10⁻⁴.

Formulation and visualization of uncertainty
U-PASS utilizes insights from two recent methods that examine the characterization of uncertainty in training data. These methods analyze the training dynamics, which refers to the behavior of individual samples during the model training process. Data Maps [28] maps instances by tracking predictive confidence and model uncertainty (epistemic uncertainty) during training. Data-IQ [29], on the other hand, maps instances using predictive confidence and data uncertainty (aleatoric uncertainty). Both methods use these mappings to stratify the training dataset into subgroups of easy, hard and ambiguous data.
We further refer to instances with high epistemic uncertainty as model-ambiguous samples. These are examples for which the prediction variability of the model is highest during training. They are important for improving the performance and generalizability of a model [28]. On the other hand, instances with high aleatoric uncertainty are referred to as data-ambiguous samples. These are samples for which predictions are the most uncertain on average, and on which the model systematically underperforms [29].

Let us formally define the characterization of model-ambiguous and data-ambiguous samples. Consider a typical supervised learning setting for a classification problem, where inputs x ∈ R^d need to be assigned to classes y ∈ N. The goal is to learn a model f_θ which maps inputs to outputs by assigning a probability to each class given the input: f_θ(x) = P(Y|X = x, Θ = θ). Let θ designate a certain instantiation of the model parameters. The model is iteratively trained for e* training epochs, resulting in e* different instantiations of the model parameters {θ_1, θ_2, ..., θ_e*}. Let Θ ∼ P_emp({θ_1, θ_2, ..., θ_e*}) be a random variable with an empirical distribution over the model parameters throughout the training process. P(x, θ) = [f_θ(x)]_y is the model's output probability for the true class label. Using the variance as an uncertainty metric, we can decompose the uncertainty by the law of total variance. The epistemic uncertainty of the prediction is v_ep(x) = Var_Θ[P(x, Θ)]. The variance is evaluated over the model outputs during training; hence epistemic uncertainty captures the variability of model outputs, representing model uncertainty. The aleatoric uncertainty is v_al(x) = E_Θ[P(x, Θ)(1 - P(x, Θ))] [29]. In the aleatoric uncertainty, the variance is computed over the prediction; hence, this represents the data uncertainty. Using the empirical distribution of Θ, the uncertainties are calculated as follows [29]:

v_ep(x) = (1/e*) Σ_{e=1}^{e*} (P(x, θ_e) - P̄(x))²,   (1)
v_al(x) = (1/e*) Σ_{e=1}^{e*} P(x, θ_e)(1 - P(x, θ_e)),   (2)

with P̄(x) = (1/e*) Σ_{e=1}^{e*} P(x, θ_e).

We define model-ambiguous data as data with a high v_ep, and data-ambiguous samples as data with a high v_al. In making these two groups, we use the definitions of 'ambiguity' proposed in Data Maps [28] and Data-IQ [29], respectively. We can further stratify the training dataset using the concept of predictive confidence, the mean output probability for the ground-truth class, c = P̄(x), i.e. the confidence in the correct class. This allows us to define easy-to-classify samples as instances with high confidence and low ambiguity, and hard-to-classify samples as instances with low confidence and low ambiguity.

U-PASS pipeline
Based on the uncertainty quantification methods from Section 2.3, we build a full uncertainty-guided pipeline, outlined in Figure 1b.

A. First, we tailor and improve the training dataset. Even before collecting a full dataset, we can start training a model using various measured signals and monitor the data uncertainty to pinpoint which signals are effective in decreasing uncertainty. This information can aid in deciding which modalities and channels should be recorded. There is a trade-off between the cost of data acquisition and the amount of uncertainty that can be tolerated to achieve a clinically useful result. After choosing a measurement setup, a full dataset is collected. We can then train a model on this dataset and track the data uncertainty to identify the most data-ambiguous samples. These samples are removed from the training dataset, as they confuse the model instead of improving its performance.
B. After training, the second step in U-PASS is to finetune the model using active learning. The model uncertainty can be used to identify the recordings for which finetuning is the most useful, i.e. which recordings have the highest learning potential. Indeed, model uncertainty measures how much the predictions of the model vary during training. In this work, we achieve finetuning with a semi-supervised transfer learning technique, using the additional labels supplied through active learning.

C. Lastly, we can use uncertainty at evaluation or test time. A simple uncertainty measure based on the uncertainty of the neighboring training samples can be used to identify uncertain test samples. Instead of making an incorrect prediction on these samples, it is better to defer them to a clinician.

The three principled uncertainty-guided steps in U-PASS result in a reliable and robust machine learning pipeline that is tailored for clinical environments. This framework enables effective interaction between clinicians and the model, thus enhancing clinical applicability. We delve into each step next.

Data collection and data selection
In the data collection experiments, we train the model on different numbers of channels to identify the best measurement setup. In the data selection experiments, we train on the best channels based on the data collection experiment, removing different amounts of ambiguous training data in each experiment. In both the data collection and data selection experiments, the model is trained on the full training dataset for 10 training epochs. Early stopping is applied using a validation dataset consisting of 2 recordings, retaining the model with the best performance on the validation set. As the training dataset consists of 45 recordings, each experiment is repeated ⌈45/2⌉ = 23 times, with different recordings in the validation set for every repeat. Accuracies are computed on the separate test set consisting of 45 recordings. The test set accuracies and training set uncertainty measures reported in Figure 3a are computed as averages over these 23 repeats.

Active learning
The active learning experiments start from the model obtained with the optimal setting based on the channel and data selection experiments. Using active learning to query informative labels, the model is then finetuned to personalize it to individual patients with the semi-supervised adversarial domain adaptation approach from [23]. The training set of 45 patients is used as the source dataset, and each selected individual recording of the test set as the target dataset for adversarial domain adaptation. This experiment is performed 23 times, once with each of the trained models (see 2.4.1). The labels for the semi-supervised training approach are acquired through the following active learning scheme. Before every training epoch, 1% of samples (roughly 10) are queried. The labels of these samples are used during that training epoch, along with the labels cumulatively acquired in previous epochs. The basic unsupervised adversarial domain adaptation is thus augmented by adding a supervised loss over the queried labels. The final model obtained after 10 training epochs with querying is retained. The instantiated querying strategy is a simple uncertainty-based strategy relying on the prediction entropy of the current model at every epoch, H[P(y|x, Θ_f)]. Any other querying strategy could also fit in the U-PASS pipeline.
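A minimal sketch of this entropy-based querying loop is given below; the array shapes and the random placeholder outputs are assumptions for illustration, and in the real pipeline the model outputs would be refreshed after each finetuning epoch.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of each row of an (n_samples, n_classes) probability array."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def query_indices(probs, frac=0.01, already_labeled=()):
    """Indices of the most uncertain samples to send to the expert this epoch."""
    scores = prediction_entropy(probs)
    scores[list(already_labeled)] = -np.inf  # never re-query labeled samples
    n_query = max(1, int(frac * len(scores)))
    return np.argsort(scores)[::-1][:n_query]

# Placeholder outputs for one recording: 1000 segments, 5 sleep stages.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=1000)
labeled = set()
for epoch in range(3):  # the paper uses 10 epochs; 3 here to keep the demo short
    new = query_indices(probs, frac=0.01, already_labeled=labeled)
    labeled.update(int(i) for i in new)
    # ...one epoch of semi-supervised finetuning using `labeled` would run here,
    # after which `probs` would be recomputed from the updated model...
print(f"Labels accumulated after 3 epochs: {len(labeled)}")
```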
The model uncertainty is a measure of how much the model performance varies while training on certain data, and can hence be seen as a measure of learning potential. Therefore, we focus the labeling efforts of the clinician on the recordings with the highest model uncertainty. Section 2.3 describes how to estimate model uncertainty using the variance of the prediction for the correct class as an uncertainty metric, specifically for training data that has ground truth labels available. In our active learning scenario, labels are not available. However, we can still analyze training dynamics to estimate uncertainty, using entropy as an alternative uncertainty metric [30]. The definitions of epistemic and aleatoric uncertainty with this uncertainty metric are the following:

v_ep = H[E_Θ[P(y|x, Θ)]] − E_Θ[H[P(y|x, Θ)]],
v_al = E_Θ[H[P(y|x, Θ)]], (4)

with H the entropy metric, and P(y|x, Θ) the output distribution. We train the model on every recording using unsupervised adversarial domain adaptation, estimate the model uncertainty v_ep based on the training dynamics in this unsupervised training step, and select the recordings with the highest v_ep for active learning.

Deferring ambiguous test samples

In the evaluation experiments, we used the personalized models obtained in the active learning experiments with model uncertainty as the query strategy. As uncertainty metrics based on training dynamics cannot be used at deployment time, we must rely on post-hoc uncertainty metrics. Several post-hoc uncertainty metrics were compared, including two that were based on the output of the model: the entropy of the output distribution and the maximum output probability (i.e. the output probability of the predicted class). Two other metrics were distance-based. One measured the distance of the test sample to its n closest training samples, while the other used the distance of the test sample to the n closest training samples of each separate class, and was calculated as the ratio of the smallest such distance to the second-smallest one. The remaining three metrics calculated uncertainty by averaging properties of the n closest training samples, weighted by their distance to the test sample. One used the average confidence of the closest training samples, the second used their average data uncertainty, and the third used their average model uncertainty. Every uncertainty metric for evaluating uncertainty on the test set has its own scale, and it is up to the domain experts to define what level of uncertainty and how many mistakes can be tolerated. Therefore, we designed this experiment by ranking test set samples from most to least uncertain, and plotting the accuracy on the remaining samples when the z% most uncertain test samples are deferred. The uncertainty measures were then compared by averaging these accuracies over the 23 trained models.
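As an illustration of the neighbor-based family of metrics, the sketch below computes the distance-weighted average confidence of the n closest training samples, the metric that later turns out to perform best. It is a hedged sketch: the choice of embedding space, the number of neighbors n and the inverse-distance weighting are our assumptions, as the text does not fix these details.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_weighted_confidence(train_embed, train_conf, test_embed, n=10):
    """Distance-weighted mean confidence of the n closest training samples.

    train_embed: (n_train, d) feature embeddings of the training samples.
    train_conf:  (n_train,) Data-IQ confidence of each training sample.
    test_embed:  (n_test, d) embeddings of the test samples.
    Returns one certainty score per test sample; low scores are deferred first.
    """
    nn = NearestNeighbors(n_neighbors=n).fit(train_embed)
    dist, idx = nn.kneighbors(test_embed)          # shapes: (n_test, n)
    weights = 1.0 / (dist + 1e-8)                  # closer neighbors weigh more
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights * train_conf[idx]).sum(axis=1)
```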
Insights from uncertainty

First, we illustrate how the two types of uncertainty are manifested in our data in Figure 2. Figure 2a shows the training dynamics for the 1% most model-ambiguous and data-ambiguous samples, as well as for the 1% most easy-to-classify and hard-to-classify samples. Model-ambiguous samples show the steepest learning curve during training, while data-ambiguous samples are characterized by a flat learning curve. The distributions of the two types of ambiguity are visualized in Figure 2b. The figure shows two-dimensional UMAP embeddings of the training data, with the different colors indicating the five sleep stages in the left panel, and heatmaps showing the model and data uncertainty in the middle and right panels, respectively. In both heatmaps, the 1% most ambiguous samples are highlighted in black. The 1% most model-ambiguous samples are found mostly at the borders of and outside the main training data distribution, as well as on class boundaries. Indeed, atypical and out-of-distribution samples are more challenging for the model at first, but the model can learn them through more training. The most data-ambiguous samples are centered in sleep stage N1, which is the least well defined and hardest to classify. Data uncertainty is also higher on the class boundaries, where some samples contain characteristics of multiple sleep stages.

Data collection

U-PASS starts with evaluating the data uncertainty during data collection to assess which signals should be acquired and used. Figure 3a shows the confidence and data uncertainty in the PSG training datasets, mapped using Data-IQ [29]. Each 30-second sample is one dot, and the density of samples is represented by the brightness of the colors. The figure demonstrates that most samples are in the upper left corner, characterized by high confidence and low data uncertainty (easy-to-classify). Some samples have high data uncertainty (data-ambiguous), and a minority of samples is characterized by low data uncertainty and low confidence (hard-to-classify). The concentration of samples in the data-ambiguous region decreases when going from one channel to three channels, and decreases again from three channels to five channels. This is also evident from the average data uncertainty over the whole training dataset, which is shown under the plots. The average confidence increases accordingly, as samples go from being data-ambiguous to easy-to-classify. As a result, the accuracy on the test set also improves, which confirms that reducing the data uncertainty helps the outcome. We conclude that tracking the data uncertainty allows us to compare the quality of different measurement setups for sleep monitoring. Indeed, a decrease in data uncertainty upon the addition of a channel indicates that this channel adds valuable information. It can advise users in selecting the optimal setup that will result in the best accuracy at deployment time.

Data selection

We can curate our training dataset by discarding the most data-ambiguous training samples. Figure 3a shows how removing these ambiguous data affects the model's performance on an independent test set. There is a trade-off to be made between removing samples that confuse the model and discarding so many samples that useful information is lost from the training dataset. In this case, the optimal point is reached when discarding 1% of training samples.
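In code, this curation step reduces to a simple quantile filter on the aleatoric uncertainty. A minimal sketch follows; the function name is ours, and the 1% default reflects the optimum found here, which may not transfer to other datasets.

```python
import numpy as np

def drop_data_ambiguous(X, y, v_al, discard_frac=0.01):
    """Remove the `discard_frac` most data-ambiguous samples before retraining.

    v_al: (n_samples,) aleatoric uncertainty estimated from training dynamics.
    """
    cutoff = np.quantile(v_al, 1.0 - discard_frac)  # keep samples below cutoff
    keep = v_al < cutoff
    return X[keep], y[keep]
```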
Data uncertainty in sleep staging can arise from biological sources (e.g. age, pathology) and sources related to the measurement technique (e.g. segmentation, interference) [19]. Figure 4a shows that data uncertainty per segment is significantly higher for segments on sleep stage transitions than for segments that are not on sleep stage transitions. This source of uncertainty is hence related to the discretization caused by segmentation and the continuous nature of sleep, with features from multiple sleep stages in a single segment. Figure 4b shows that wake segments have the lowest data uncertainty, and N1 segments the highest. The fact that different stages of sleep are characterized by different levels of uncertainty is related to how clearly sleep stages are recognized and defined, which is influenced by sleep scoring rules and the biology of sleep. We find evidence for this in the fact that the inter-rater agreement between human scorers shows the same trends as are observed through data uncertainty [31].

Active learning

Once the model is trained on the improved training set, U-PASS evaluates model uncertainty to assess on which recordings the model should be finetuned through active learning. We use active learning to personalize sleep staging models to individual patients. Figure 3b shows the average improvement from active learning on individual PSG recordings. The improvement is shown for the selected 40% of recordings with the highest model uncertainty and for the rest of the recordings. Model uncertainty clearly points to the recordings that benefit the most from the active learning adaptation. Figure 5 shows two examples of how active learning improves the predictions for both labeled and unlabeled samples.

Figure 3: U-PASS applied to hospital-based polysomnography (PSG) data. The resulting accuracy after every consecutive step is indicated with a red box. (a) During training, the training dataset is tailored using data collection and data selection based on data uncertainty. In the data collection step, samples move to the upper-left corner with high confidence and low data uncertainty when more channels are used. We thus proceed with all five channels. In the data selection step, the optimal test result is achieved when removing the one percent (roughly 440) most data-ambiguous training samples. (b) Active learning is used to adapt the model to individual recordings. 1% of samples is queried in each of the 10 training epochs. We only perform active learning on recordings characterized by high model uncertainty; on the rest of the recordings it does not show a large improvement. (c) At deployment time, uncertainty-based deferral increases the test accuracy to the desired level of 85%. Samples with uncertainty values below the threshold are referred to a clinician for manual labeling.

Deferring uncertain test samples

Lastly, the finetuned model is deployed on unseen data. U-PASS then uses uncertainty measures to defer test samples to an expert when predictions are uncertain. Figure 3c shows the accuracy on the 100% to 1% most certain test samples. This is equivalent to the accuracy when removing 0% to 99% of the most uncertain test samples. The higher this accuracy, the better the uncertainty estimation, as the uncertainty metric should retain the most accurate predictions and defer the least accurate ones. Different post-hoc uncertainty metrics were applied. Figure 3c only shows the best-performing one: the weighted average confidence of the n closest training samples. The performances of all the uncertainty metrics are shown in Supplementary Figure 7. The results in Figure 7 show that different uncertainty metrics perform best depending on the threshold set on the tolerated uncertainty. If we require 85% accuracy, the confidence of neighboring training samples achieves the best performance, with 20% of samples deferred. For reference, 85% is a high accuracy, as the inter-rater agreement in sleep staging is estimated to be 82.6% [32,31].
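The deferral analysis itself can be sketched as follows: sort the test samples by a post-hoc certainty score and compute the accuracy on the retained samples at every deferral fraction. This is a generic sketch of the evaluation described above, not the authors' exact code.

```python
import numpy as np

def deferral_curve(certainty, correct):
    """Accuracy on retained samples as the least certain ones are deferred.

    certainty: (n_test,) post-hoc certainty scores (higher = more certain).
    correct:   (n_test,) boolean array, True where the prediction was right.
    Returns the deferred fraction and the accuracy on the retained samples.
    """
    order = np.argsort(certainty)            # least certain first
    hits = correct[order].astype(float)
    n = len(hits)
    deferred = np.arange(n) / n              # fraction deferred: 0 .. (n-1)/n
    # retained_acc[k] = mean accuracy over the n-k most certain samples
    retained_acc = (np.cumsum(hits[::-1]) / np.arange(1, n + 1))[::-1]
    return deferred, retained_acc
```

Given a required accuracy (e.g. 85%), one then picks the smallest deferral fraction whose retained accuracy meets it.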
When we exclude samples that lie on sleep stage transitions, the accuracy improves from 85% to 89%. As is clear from Figure 7, the difference between the entropy metric and the metrics based on the uncertainty in the training set is mostly quite small, and depends on the uncertainty threshold we set. However, the metrics based on uncertainty in the training set have an important advantage compared to the prediction entropy: they provide a reason for deferral. Indeed, the distance-weighted confidence of neighboring training samples combines two factors: the distance from the training samples and the uncertainty of the close training samples. Figure 6 visualizes both factors. The Spearman correlation coefficient shows that accuracy is negatively correlated with the distance from the closest training samples (r(43) = -.40, p = .0067) and positively correlated with the confidence of the closest training samples (r(43) = .51, p = .00032). The two factors detect, respectively, out-of-distribution samples and samples lying close to uncertain regions, e.g. decision boundaries. The effect of applying U-PASS for sleep staging is summarized in the red boxes of Figure 3, which show the resulting accuracy after each step in the pipeline. This clearly demonstrates how U-PASS leverages uncertainty in training, active learning and deployment to improve the accuracy at every stage.

Figure 6: The accuracy on the test data depends on both the distance from the training data and the uncertainty of those training data. For every patient, this figure shows the accuracy, the confidence of the closest training samples and the distance to those training samples. These metrics are averaged over all the samples of the patient's recording, so every patient is represented by one dot.

Discussion

We have developed and validated U-PASS, an uncertainty-guided pipeline for automated sleep staging. Although we applied U-PASS to sleep staging, the pipeline is generally applicable to any machine learning problem. It is particularly useful for machine learning in healthcare, which has strong requirements for reliability and trust, and benefits from user interaction. By utilizing data uncertainty and model uncertainty at different stages in the data processing pipeline (training, active learning and deployment), we have shown how U-PASS improves predictions step by step. It can easily be tailored to the accuracy needs of any particular application, by adapting the amount of data queried and deferred in the active learning and deployment phases, respectively. We have validated our pipeline on PSG data of a real patient population of elderly subjects with suspicion of sleep apnea. Many performant deep learning models have been developed for sleep staging on PSG, but they are developed as static, stand-alone input-output machines. This limits their performance and usability in the clinic. Our contribution consists of building a pipeline in which such deep learning models can be integrated, boosting their performance and allowing users to choose the desired performance level. The performance gains achieved by applying U-PASS to a state-of-the-art sleep staging model are shown in Figure 3. The first two parts of the pipeline, data collection and data selection, are focused on tailoring the training dataset to achieve optimal results. Both steps decrease the data uncertainty in different ways. In the data collection step, data uncertainty is estimated to find out which features (or channels in the case of sleep staging) need to be collected.
Once the data acquisition protocol is fixed and the training set is acquired, the data selection step filters out the data with the highest data uncertainty, as these can deteriorate model performance. In our experiments, the data collection step shows the largest improvement, by going from one EEG channel to five channels. The data selection step only increases the accuracy by a relative 0.4%, a small but significant amount. This maximum improvement is attained when 1% of the data are discarded. Our training dataset is not overly large, so we hypothesize that the optimal percentage of data to discard and the corresponding improvement may change depending on the dataset size. In future work, we could investigate the influence of the type, 'cleanness' and size of the training data on the data selection. The last step of U-PASS, deferring ambiguous samples, allows us to choose a tolerated level of uncertainty, or a desired accuracy for a specific task. This comes at a labeling cost, similar to the active learning step of the U-PASS pipeline. The difference between the two steps is the type of uncertainty they tackle. The only uncertainty that can be resolved through active learning is model uncertainty: uncertainty coming from a lack of knowledge of the model. Data uncertainty, on the other hand, is inherent to the data and can hence not be resolved through learning. The only possible mechanism to cope with high data uncertainty at deployment time is deferral to an expert. In theory, doctors do not know more than a machine learning model in cases of pure data uncertainty. However, they can take the right course of action, whether that be ignoring the data, performing a new measurement, or fixing a broken electrode. We conclude that both active learning and deferral help to increase the accuracy to the desired level at a certain labeling cost. Since they tackle different types of uncertainty, they should be combined for optimal performance and usability. Along with performance gains, the uncertainty estimation methods in the U-PASS pipeline also provide some interesting clinical insights into the sleep data. Figure 2b and Figure 4 show that the data uncertainty of segments on sleep stage transitions is higher than that of other segments, and that sleep stage N1 is characterized by more uncertainty than the other sleep stages. This data uncertainty is by definition inherent to the data and cannot be resolved by more training. These results are also consistent with the agreement between human sleep scoring experts [31]. As such, the objectivity of uncertainty metrics can guide clinical practitioners and experts developing sleep scoring guidelines [20] to define better rules. For example, changing the 30-second segmentation to a more fine-grained segmentation should reduce the data uncertainty. Furthermore, the less ambiguous the sleep stages (through clearer staging guidelines or even by changing the definitions of the sleep stages themselves), the lower the data uncertainty. An additional insight gained from plotting the sleep features (Figure 2b) is that sleep stages do not seem to be discretely separated, but rather lie on a continuous spectrum. Recent works have advocated modelling sleep as a continuous process [33,34,35], which may be a more biologically accurate representation. Hence, uncertainty estimation methods can inform the medical field on how best to describe and define sleep.
In conclusion, U-PASS is a machine learning pipeline that integrates uncertainty-informed curation of the training set, uncertainty-based active learning to incorporate a clinician's feedback and deferral of uncertain decisions to a clinician. As such, all the facets of uncertainty and benefits of uncertainty estimation are harmonized in a single framework. This optimizes the machine learning pipeline and unlocks its potential in a clinical setting by adding safety guardrails to the process. Furthermore, U-PASS has the potential to provide medical practitioners with valuable insights into their data, offering a deeper understanding of sleep biology and potentially other clinical applications. Overall, U-PASS represents a promising approach to enhancing the reliability and safety of machine learning systems in critical fields such as healthcare.
Comparison of Climate and Environment Change of the Last Interglacial Period and Holocene in Beijing Area, China

Research on climate changes between the last interglacial period and the Holocene helps us infer the tendency of the present climate. Fully understanding the nature of these changes will play a significant role in a better understanding of global climate change. This work discusses the climate change of the last interglacial period and the Holocene in the Beijing area to uncover the mechanism of local palaeo-climate change. The palaeo-vegetation of the last interglacial period in the Xishan Mountains of Beijing was reconstructed by pollen analysis and thermo-luminescence dating to represent the change of palaeo-climate and palaeo-environment. Palaeo-vegetation indicators demonstrated that the climate change of the last interglacial period included 6 stages and corresponded to the records from deep sea sediments and polar ice cores, matching Marine Isotope Stage (MIS) 5e, 5d, 5c, 5b, 5a and the interim from MIS 5 to MIS 4 from the early to the late. Millennial-scale abrupt climate events occurred in MIS 5e, in agreement with the GRIP records. In addition, a climate warming event appeared in the interim from MIS 5 to MIS 4, and it has also been found in other regions of the world. Compared with the vegetation and environment indicators of the Holocene in the Beijing area, it was found that the vegetation, climate and environment of the last interglacial period were better than those of the Holocene. Abrupt climate events appeared not only in the last interglacial period and MIS 5e, but also in the Holocene, and their mechanisms and patterns were analogous. After analyzing the records of millennial-scale abrupt climate change events from this work, ice cores and other sources, it is concluded that climate was unstable in interglacial periods.

Introduction

Studying the nature of the palaeo-climate of the last interglacial period and the Holocene contributes to discovering the mechanism of climate change and to predicting its future tendency. The last interglacial period was shown to consist of 5 climate stages according to the records of deep sea sediments [1] and polar ice cores [2]-[4], including 3 warm stages and 2 cold stages corresponding to MIS 5e, 5c and 5a, and 5d and 5b, respectively. This feature has generally been confirmed by global records. Furthermore, the records from the Greenland Ice Core Project (GRIP) indicated that MIS 5e also comprised 3 sub-warm stages and 2 sub-cold stages, named 5e5, 5e3 and 5e1, and 5e4 and 5e2, respectively. However, opinions on MIS 5e differ among scientists because climatic instability is not consistently supported by global records [4]-[13]. For the Holocene, it is generally believed that it can be divided into 3 climatic stages: the early stage (10 - 8 Ka B.P.), the megathermal stage (8 - 3 Ka B.P.
or later) and the late stage. The climate was warm and humid in the megathermal stage, whose temperature was 2˚C - 3˚C higher than that of the present. Moreover, researchers found that climate instability was a characteristic of the Holocene, with millennial-, centennial- and decadal-scale abrupt climate events [14]-[20]. Fully understanding the mechanism of climate instability in the last interglacial period and the Holocene will help us learn about present climate change and respond to global climate change. This paper reconstructs the palaeo-vegetation and palaeo-environment of the last interglacial period in the Xishan Mountains, Beijing, which are located at the eastern edge of the loess region and in the transition zone between continental and oceanic climate, and then discusses the climate and environment change of the last interglacial period and the Holocene in the Beijing area by comparing their environmental and climatic records, contributing to discovering the mechanism of climate and environment change in extratropical regions.

Study Area

Beijing, with an area of 16.8 × 10³ km² (39˚26' - 41˚03'N, 115˚25' - 117˚30'E), is located in the eastern part of China at the intersection of the North China Plain (Huabei Plain), the Taihang Mountains and the Yanshan Mountains (Figure 1). A semi-humid monsoon climate with warm temperate features dominates the region due to the impacts of the westerlies, especially the Mongolian high pressure in winter. The annual mean temperature and annual mean precipitation are 11.8˚C and 630 mm, respectively, and 90% of the rainfall occurs in the rainy season (April to September). Deciduous broad leaf forest with grassland is the dominant vegetation, consisting of Larix, Pinus, Populus, Betula, Carpinus, Quercus, Ulmus, Acer and Tilia; the dominant herbage includes Artemisia, Saussurea and Primula. The vertical differences in climate and environment result in a diverse vegetation distribution in the mountainous areas. Meadow and shrub develop in the sub-alpine areas, deciduous broad leaf forest and warm temperate conifer forest or conifer and broad leaf mixed forest grow in the middle of the mountains, deciduous broad leaf shrub or brushwood exists in the lower parts of the mountains, and warm temperate grassland occupies the loess mesa areas.

Zhaitang Basin, with an average altitude of 500 m, the sampling site, is situated in the middle reaches of the Qingshui River, a tributary of the Yongding River, in the Xishan Mountains. The climate is temperate and semi-arid, with an annual mean temperature of 6˚C - 10˚C and an annual mean precipitation of 500 mm. Owing to long-term human activities, forest has disappeared from the basin and shrub and brushwood occupy the mountain slopes. The sampling site lies in East Zhaitang Village of Zhaitang Basin (39˚58'N, 115˚41'E), where the Malan Loess was named (Figure 1). The palaeo-soil profile includes 10 m of Malan Loess (L1), 2.8 m of palaeo-soil (S1) and loess (L2) from top to bottom, which provides good environmental information and evidence for the last glacial climate cycle.
Pollen Analysis

56 soil samples, each heavier than 600 g, were collected at 5 cm intervals in the S1 stratum to extract and analyze pollen. In the laboratory, all samples for pollen analysis were treated using the revised methodology of [21]. First, Lycopodium spores were added to the samples as an exotic marker to determine pollen concentration, and the samples were then treated with HCl (36.5%) and Na2CO3 (5%). Subsequently, they were floated twice in heavy liquid (2.2 g/cm³), and the suspensions were obtained by centrifugation and kept in specific tubes. Finally, the suspensions were treated with HF and H2O2 to remove organic matter, and protected with glycerin for identification of the fossil pollen taxa. Pollen identification and counts were made at 400× or 600× magnification under an OLYMPUS-BX51 light microscope. The pollen concentration and the pollen percentage of every taxon in every sample were then calculated.

TL Ages

A complete S1 profile was gathered for TL dating; it was collected in successive aluminium tubes shielded from light. On the basis of the results of the pollen analysis, the spots of samples 1, 12, 18, 28, 39 and 56 were selected as control sites for dating the S1 palaeo-soil by thermo-luminescence. Six samples were picked from the S1 soil profile; their luminescence doses were measured, De values calculated and ages obtained at the Luminescence Dating Laboratory of Capital Normal University in Beijing. Under subdued red light, a 20 g fragment of each sample was extracted to prepare fine grains for TL activity measurement. The fragments were treated with 5% nitric acid to remove organic materials and carbonates and to obtain fine quartz grains (<74 μm) [22]; the 4 - 11 μm fine sediment was then collected and discs with a diameter of 1 mm were made by water-immersion to obtain the aliquots [23] [24]. All the TL measurements were implemented using an RGD-3B thermo-luminescence meter. A 90Sr/90Y beta source with a dose rate of 8.0 Gy/min was used for sample irradiation in the luminescence regeneration experiments on bleached aliquots. During the TL measurement, each aliquot was heated from 20˚C to 500˚C at a rate of 5˚C/s, and held at a fixed temperature for 10 s and 30 s at 200˚C and 500˚C, respectively. The photons emitted near 375˚C, the TL peak temperature, were used to estimate the De value. The single-aliquot regenerative-dose procedure of Murray and Wintle [25] [26] was used to recover the given laboratory dose and establish the regenerative curve for estimating the De value of each aliquot from its natural emission. The error coefficient of the regenerative curve of each test aliquot was kept below 5%, indicating over 95 percent reliability. The De value of each sample was then estimated as the average of the De values of the 5 aliquots, with less than 5 percent standard deviation. The TL age of each sample was calculated by dividing the average De value of the 5 aliquots by the annual dose rate of natural accumulation for that sample, which was calculated from the measured contents of the radioelements U, Th and K.
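For clarity, the two quantities computed in this section follow standard relations (a generic statement of the methods; the symbols below are ours, not from the original): the pollen concentration from the exotic-marker counts, and the TL age as the equivalent dose divided by the annual dose rate.

```latex
C_{\mathrm{pollen}} = \frac{n_{\mathrm{fossil}}}{n_{\mathrm{Lycopodium}}}\cdot\frac{N_{\mathrm{added}}}{m_{\mathrm{sample}}},
\qquad
\mathrm{Age_{TL}\ (ka)} = \frac{D_e\ \mathrm{(Gy)}}{\dot{D}\ \mathrm{(Gy\,ka^{-1})}}
```

Here n_fossil and n_Lycopodium are the counted fossil pollen grains and marker spores, N_added is the number of Lycopodium spores added, m_sample is the sample mass, D_e is the equivalent dose (averaged over the 5 aliquots) and Ḋ is the annual dose rate derived from the U, Th and K contents.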
Vegetation Sequence Reconstruction

In order to analyze the characteristics and modes of palaeo-climate and palaeo-environment change, a high-resolution vegetation change sequence was constructed using the pollen concentration per unit of soil and the pollen percentages of the dominant species, together with the TL ages of the 6 control sites and interpolated ages for the other samples, to reconstruct the palaeo-climate and palaeo-environment. The mechanism and mode of climate and environment change of the last interglacial period and the Holocene in the Beijing area were then discussed by comparing the climatic and environmental records of these two stages.

TL Ages

The tested and calculated data of the 6 control samples, including the radioelement contents, De values and TL ages, are shown in Table 1.

Vegetation Change Sequence

The high-resolution palaeo-vegetation change sequence, with diagrams of pollen percentage and pollen concentration, was constructed based on the pollen analysis data and the TL ages. According to the vegetation types reflected by the fossil pollens, the palaeo-vegetation change sequence was divided into 6 zones and several subzones, as shown in Figure 2 (the diagram of pollen concentration and zones).

MIS 5

Figure 2 illustrates that the palaeo-vegetation sequence was classified into 6 pollen belts from the lower to the upper part of the profile, including P1, P2, P3, P4, P5 and P6, which indicates that the vegetation change experienced 6 stages. The characteristics of every belt were as follows:

P1: Sample 56 - 39, TL age 129.3 - 111.2 Ka B.P. The pollen concentration of each sample was beyond 80 grains/g, the highest being 123.4 grains/g in Sample 41. The pollen percentage of arbors was almost equal to that of shrubs and herbage. The dominant arboreal pollens changed from Pinus, Betula and Carpinus in the early part into Ulmus and Quercus in the later part. In addition, hygrophilous shrub and herbage pollens appeared in these samples, reflecting that shrubbery and grass developed well in the forest. The pollen data indicated that the forest evolved from conifer and broad leaf mixed forest into deciduous broad leaf forest, and the climate was warm and humid.

P2: Sample 38 - 33, TL age 111.2 - 104.8 Ka B.P. The dominant pollens consisted of Ranunculaceae, Compositae, Artemisia, Rosaceae, Pinus and Carpinus. The pollen concentration varied from 60 - 70 grains/g and the arboreal pollen percentage was universally below 60%. The arbor pollens mainly comprised Pinus, Carpinus and Betula, and some Picea pollen appeared. The pollen percentage of Ranunculaceae, Compositae and Artemisia was beyond 22%, that of Polygonaceae decreased gradually, and that of Chenopodiaceae increased to reach about 4%. These indicated that forest meadow and steppe developed, and the climate was warm and dry in this period.

P3: Sample 32 - 27, TL age 104.8 - 97.7 Ka B.P. The dominant pollens included Quercus, Ulmus, Rosaceae and Ranunculaceae. The pollen concentration was generally beyond 80 grains/g. The pollen percentages of Quercus and Ulmus each exceeded 10%, and that of Pterocarya was over 2%. The major shrub pollens were Rosaceae and Saxifragaceae, and the principal herbage pollens consisted of Ranunculaceae, Artemisia and Compositae. The Polygonaceae percentage underwent a process of rise, decrease and renewed increase. These suggested that the vegetation was deciduous broad leaf forest, and the climate was warm and humid with slight fluctuations between dry and moist.

P4: Sample 26 - 21, TL age 97.7 - 88.7 Ka B.P. The pollen concentration varied from 50 to 65 grains/g, and the pollen percentage of arboreal pollens in each sample was less than 38%. The main arboreal pollens included Pinus, Carpinus, Ulmus and Betula, and they accounted for about 20% of the total pollen. Picea pollen was also found in some samples. The pollen percentage of Rosaceae was about 6%, that of Ranunculaceae, Compositae and Artemisia was over 22%, that of Cruciferae and Gramineae was about 10%, and that of Chenopodiaceae reached 5.4% in Sample 22. These proved that the vegetation was forest and grassland, and the climate was cool and dry.

P5: Sample 20 - 11, TL age 88.7 - 78.5 Ka B.P. The pollen concentration decreased with fluctuations; the highest and the lowest values were 88.5 grains/g in Sample 18 and 35.2 grains/g in Sample 12, respectively. The percentage of arbor pollens was ordinarily about 40%. This belt was divided into two sub-belts as follows:

P5-1: Sample 20 - 17, TL age 88.7 - 84.4 Ka B.P. The pollens mainly included Ulmus, Quercus, Rosaceae and Ranunculaceae, and the pollen concentration was about 67 grains/g. The combined pollen percentage of Quercus, Ulmus, Carpinus and Juglans was beyond 26%. Shrub and grass with the dominant species of Rosaceae, Elaeagnaceae, Ranunculaceae, Artemisia and Compositae grew in the forest. These showed that deciduous broad leaf forest developed well, and the climate was warm with slight drought. Moreover, the pollen percentages of Pinus and Betula increased gradually, while that of Chenopodiaceae showed the reverse trend, which further proved that the temperature decreased and the humidity rose in this stage.

P5-2: Sample 16 - 11, TL age 84.4 - 78.5 Ka B.P. The major pollens were Carpinus, Pinus, Betula, Rosaceae, Ranunculaceae and Artemisia. The pollen concentration of every sample was below 56 grains/g and the lowest was only 35.2 grains/g in Sample 12. The pollen percentage of arbors decreased continuously, while those of Artemisia, Compositae and Chenopodiaceae rose distinctly. These showed that the vegetation changed from conifer and broad leaf mixed forest into forest and grassland or grassland with sparse trees, suggesting that the climate evolved gradually from warm-dry into cold-dry.
P6: Sample 10 - 1, TL age 78.5 - 70.3 Ka B.P. The dominant pollens comprised Carpinus, Juglans, Pinus, Artemisia, Compositae and Rosaceae. The pollen concentration was about 50 grains/g, the highest being 63 grains/g in Sample 10 and the lowest 34.2 grains/g in Sample 1. This belt was divided into two sub-belts as follows:

P6-1: Sample 10 - 4, TL age 78.5 - 72.5 Ka B.P. The primary pollens contained Carpinus, Juglans, Pinus, Artemisia, Compositae and Rosaceae. The pollen concentration was over 50 grains/g. The pollen percentage of Carpinus, Juglans and Pinus was about 20% and that of Tilia increased gradually. The dominant herbage pollens were Artemisia, Compositae and Ranunculaceae, and the pollen percentage of Chenopodiaceae was about 4%. These showed that the vegetation was forest and grassland, and the climate was warm and dry.

P6-2: Sample 3 - 1, TL age 72.5 - 70.3 Ka B.P. The principal pollens consisted of Artemisia, Compositae, Pinus and Chenopodiaceae. The pollen percentage of herbage was about 70%, that of Artemisia was beyond 30% and that of Compositae and Chenopodiaceae was over 17%. The major arboreal pollens were Pinus and Betula, and some Ephedra pollen was also found. These proved that dry grassland with sparse Pinus and Betula grew, and the climate was cold and dry in this stage.

The above analysis shows that conifer and broad leaf mixed forest or deciduous broad leaf forest developed in the periods corresponding to belts P1, P3 and P5, and forest meadow or forest grassland appeared in the stages corresponding to belts P2 and P4. This indicates that there were 3 warm stages with warm-humid climate and 2 cold stages with temperate-dry or cool-dry climate, in good agreement with the records of deep sea sediments, especially the SPECMAP. Belts P1, P2, P3, P4 and P5 corresponded to MIS 5e, 5d, 5c, 5b and 5a, respectively. Moreover, the climate and environment of MIS 5e were better than those of 5c and 5a, those of 5c were superior to those of 5a, and those of 5d were preferable to those of 5b in the Xishan Mountains of Beijing. On the whole, the climate changed from warm-humid into temperate-dry.

It was also confirmed that a warming event occurred in the interim from MIS 5 to MIS 4, because the vegetation of belt P6-1 was better than that of belt P5-2. The vegetation components of P6-1 were similar to those of the present vegetation in the Beijing area, which indicates that analogous climate and environment conditions appeared in the two periods. This event was also evidenced by physical and chemical indicators from Zhaitang Basin of Beijing [27], and was also found in other regions in the records of the Greenland Ice Cores (GISP2 and GRIP) [28], the Vostok Ice Core from the South Pole [29], the N. pachyderma (s.) concentration of the Vema 2381 core from the North Atlantic [30], the loess granularity of the Lijiayuan profile [31] and the loess median granularity of the Luochuan profile [32] from the Loess Plateau.

Furthermore, the vegetation change sequence demonstrates that climate transformation at the Earth-orbital scale (10 Ka) was a rapid process, completed within a millennial-scale abrupt event. The transformations of P1 to P2, P3 to P4, P4 to P5, and P5 to P6 all finished between two neighboring samples. This agrees well with the records from the Greenland Ice Cores and other records.
MIS 5e

The P1 belt was divided into 6 sub-belts, which implies that the climate and environment of MIS 5e experienced 6 stages. Sub-belt P1-1, including Samples 56 and 55, reflected the characteristics of vegetation and climate of the interim from MIS 6 to MIS 5. The other 5 sub-belts indicated the features of vegetation, climate and environment of MIS 5e as follows:

P1-2: Sample 54 - 51, TL age 128.2 - 123.9 Ka B.P. The dominant pollens were Carpinus, Pinus, Betula, Rosaceae and Ranunculaceae. The pollen concentration was about 90 grains/g. The pollen percentage of Carpinus, Pinus and Juglans was beyond 10%, 8% and 4.5%, respectively. Alnus, Pterocarya and Tilia pollens also appeared. The pollen percentage of Ranunculaceae and Artemisia decreased significantly, while that of Rosaceae, Saxifragaceae and Polygonaceae increased continuously. These showed that conifer and broad leaf mixed forest developed and the climate was warm and humid in this stage.

P1-3: Sample 50 - 48, TL age 123.9 - 120.8 Ka B.P. The primary pollens included Pinus, Carpinus, Betula, Rosaceae and Ranunculaceae. The pollen concentration was 88.9 grains/g, 82.7 grains/g and 105 grains/g, respectively. The pollen percentages of Pinus and Carpinus were over 11% and 9%, respectively, and those of Juglans and Tilia decreased gradually. Furthermore, the percentages of Pinus, Carpinus, Betula, Saxifragaceae and Polygonaceae all decreased abruptly in Sample 49 and then increased significantly. These demonstrated that conifer and broad leaf mixed forest developed well, and the climate fluctuated abruptly from warm-humid to temperate-dry and then recovered to warm-humid. The change of vegetation and plant species also indicated that the climate and environment of this stage were worse than those of P1-2.

P1-4: Sample 47 - 45, TL age 120.8 - 117.6 Ka B.P. The pollen concentration was 92.9 grains/g, 76.3 grains/g and 100.5 grains/g, respectively. The percentage of arbor pollen decreased markedly and was only 28.8% in Sample 46, the lowest of the P1 belt. The percentages of Pinus, Betula and Carpinus gradually decreased, while those of Quercus and Ulmus rose significantly. Moreover, some Syringa pollen appeared in Sample 46, and some pollen of Acacia, which now grows in tropical and subtropical areas, was found in Sample 45. The pollen percentage of the hygrophilous Saxifragaceae and Polygonaceae went through a change process of high, low and then high, while that of Artemisia and Chenopodiaceae showed the reverse. These suggested that forest, or forest and meadow, with conifer and broad leaf mixed forest developed in the early period and deciduous broad leaf forest grew in the late period, and the climate was more unstable.

P1-5: Sample 44 - 42, TL age 117.6 - 114.4 Ka B.P. The dominant pollens were Ulmus, Quercus, Saxifragaceae and Ranunculaceae. The pollen concentration of all samples was about 80 grains/g. The pollen percentage of Ulmus and Quercus was above 18%, that of Juglans and Tilia was less than in belt P1-4, and some Castanea and Alnus pollens appeared. Besides, the pollen percentage of Saxifragaceae was beyond 10%, and that of Polygonaceae was higher than in belt P1-4. These indicated that deciduous broad leaf forest with hygrophilous shrub and herbage grew, and the climate was warm-humid.

P1-6: Sample 41 - 39, TL age 114.4 - 111.2 Ka B.P.
The principal pollens were similar to those of P1-5 and the pollen concentration of every sample was over 100 grains/g. The major arbor pollens contained Ulmus, Quercus, Juglans and Tilia, whose concentration reached the highest of the profile in Sample 41 and then decreased gradually. As for shrub and herbage pollens, the percentage of Rosaceae, Compositae and Artemisia rose with fluctuations, while that of Saxifragaceae and Polygonaceae decreased. These showed that deciduous broad leaf forest developed well, and the climate was the most warm-humid of the last interglacial period and then dried gradually.

The above analysis demonstrates that the vegetation evolved from conifer and broad leaf mixed forest with a majority of Pinus, Betula and Carpinus into deciduous broad leaf forest with a majority of Ulmus and Quercus. It represented more biological diversity than the present vegetation, and some subtropical species developed in this period, which indicates that the palaeo-climate was warm and humid. In addition, the warm-humid level of the climate increased gradually in MIS 5e, and the period of belt P1-6 was the most warm-humid stage of MIS 5e and MIS 5.

The palaeo-vegetation change shows that the palaeo-climate of MIS 5e was unstable. The climate instability is consistent with the records from GRIP, because the features of climate and environment described by sub-belts P1-2, P1-3, P1-4, P1-5 and P1-6 were consistent with those of 5e5, 5e4, 5e3, 5e2 and 5e1 reflected by the GRIP records, respectively. Only the range of climate change was smaller than in the GRIP records, because in our record the climate and environment of MIS 5e were always better than those of MIS 5d and MIS 5b, whereas in the GRIP records they were not.

The climate transformation was a rapid process in MIS 5e. This is testified by the transformation from P1-5 to P1-6, in which the pollen concentration increased from 86.2 grains/g in Sample 42 to 123.4 grains/g in Sample 41. Similar climate transformations also occurred in the transitions of P1-1 to P1-2, P1-3 to P1-4 and P1-4 to P1-5, only with a smaller range of change than that of P1-5 to P1-6. Moreover, abrupt climate events also appeared in the stages of P1-3 and P1-4. These suggest that millennial- and even centennial-scale abrupt climate events appeared in MIS 5e, which further suggests that the climate was unstable.

Comparison of Climate and Environment Change between MIS 5 and Holocene

The comparison of climate change between MIS 5 and the Holocene helps to discover the climate change mechanism of interglacial periods and the Holocene.
The climate and environment characteristics of the Holocene in the Beijing area have been discovered by pollen analysis and other methodologies in the past several decades. According to pollen analysis, grassland with a majority of Compositae and Artemisia developed in the plain areas, and conifer forest with Pinus, Picea and Abies grew in the mountains at an altitude of 500 m in the early Holocene. The Picea distribution reached the foothills, indicating a short-lived cold event. During 8 - 6 Ka B.P., conifer and broad leaf mixed forest composed of Pinus, Quercus and Betula extensively developed, and swamps appeared, which suggests that the climate was warm and humid [33]. During 6 - 2 Ka B.P., deciduous broad leaf forest developed widely and the climate was generally warm. Especially around 5 Ka B.P., the deciduous broad leaf forest composed of Quercus, Ulmus, Betula and mulberry developed broadly [33]-[36], whose woody pollen percentage was ordinarily over 50% and at its highest even reached 90% [34]. Besides, aquatic plants grew well in lakes and swamps. These records show that the climate was warmer and more humid in this stage than in the early Holocene. However, a short-term cold event appeared at 5.6 Ka B.P., when dark conifer forest comprising Picea and Abies developed. During 3.5 - 3.2 Ka B.P., conifer forest existed extensively in the Beijing area, and the climate then became cool and dry. After 2.1 Ka B.P., a cold climate event lasted for several hundred years, whose average annual temperature was 1˚C - 2˚C lower than that of the present. In that time, Pinus forest dominated the mountain areas; however, it disappeared and was displaced by grassland in the plain areas. After that time, the climate was temperate and cool with slight drought, and its change cycle became shorter [33] [35].

Pollen analysis showed that the major vegetation species were similar in the last interglacial period and the Holocene, such as Pinus, Betula, Quercus, Ulmus, Compositae and Artemisia. In addition, Carpinus, Rosaceae, Ranunculaceae, Saxifragaceae, Cruciferae and Gramineae grew extensively in the last interglacial period, but not in the Holocene, which suggests that the climate and environment of the last interglacial period were more diverse and better than those of the Holocene. The vegetation change sequence also demonstrated that forest developed in MIS 5e, 5c and 5a, the warm-humid level of the climate gradually decreased, and that of 5e was the best. Furthermore, it was shown that the climatic condition of MIS 5a was equivalent to the megathermal stage of the Holocene, because deciduous broad leaf forest with a majority of Ulmus and Quercus developed well in the periods corresponding to P5-1 and 5 Ka B.P. Meanwhile, due to the impacts of the mountain environment, the climate of Zhaitang Basin was drier than that of the plains, which is why fewer hygrophilous pollens were found in this work. From these analyses, the climate and environment of the warm stages in the last interglacial period were superior to those of the megathermal stage in the Holocene, and those of the early part of 5a were similar to those of the megathermal stage of the Holocene, but drier.

The data from pollen, archaeology and meteorology indicated that the average annual temperature was 2˚C - 3˚C higher during 5 - 3.4 Ka B.P.
than at present, and the average annual precipitation was also higher [34] [36]. Therefore, the average annual temperature of the early part of 5a was about 2˚C - 3˚C higher than that of the present, and the precipitation was probably equal to that of the present. Research also found that the average annual temperature of the early megathermal stage of the Holocene was 3˚C - 4˚C higher than that of the present [36]. It is thus estimated that the average annual temperature of 5c and 5e was 3˚C - 5˚C higher than that of the present, and the precipitation was also higher.

The vegetation components of 5d, 5b and the interim from MIS 5 to MIS 4 were better than those of the early and late Holocene (cold stages), which implies that the climate and environment of these three stages were superior to those of the early and late Holocene.

The megathermal stage of the Holocene and MIS 5e of the last interglacial period had a similar mode of climate change. The pollen data showed that the megathermal stage of the Holocene experienced two cold events lasting several hundred years each, and thus included 3 warm stages and 2 cold stages. This climate change mode is consistent with that of MIS 5e demonstrated by this work. The cold stages lasted less time than the warm stages in the Holocene, but were equal to or shorter than the warm stages in MIS 5e. Moreover, researchers found that the early part of the megathermal stage was the most warm-humid of the Holocene, whereas 5e1, the late part of MIS 5e, was the most warm-humid period. When the time scale of climate change is set aside, the climate change mode of MIS 5, with 3 warm stages and 2 cold stages, was also analogous to that of the megathermal stage of the Holocene. Their temperature and humidity gradually decreased, and 2 cold stages disturbed the process of climate change. That is to say, a similar mechanism and mode of climate change dominated the megathermal stage of the Holocene and the last interglacial period in Beijing, and climate instability was the feature of the two periods.

This study discovered that the climate of the last interglacial period and the Holocene was unstable. Comparing this result with the records from ice cores and other studies, it is further shown that millennial-scale abrupt climate events have occurred persistently since 1 Ma B.P. and even earlier [37]-[40]. Therefore, it is deduced that climate instability may be a general law of interglacial periods.

The above analysis confirms that the vegetation, climate and environment of the last interglacial period were better than those of the Holocene, that a similar mechanism and mode of climate change dominated the megathermal stages of the Holocene and the last interglacial period, and that the climate of interglacial periods was unstable.
Conclusion

It is concluded from this work that the vegetation evolution of the last interglacial period experienced six stages (P1, P2, P3, P4, P5 and P6) in the Beijing area, and that the climate change agrees with that of the deep sea sediments (SPECMAP) and polar ice cores, corresponding to MIS 5e, 5d, 5c, 5b, 5a and the interim from MIS 5 to MIS 4, respectively. A climate warming event occurred in the interim from MIS 5 to MIS 4, which is also testified by the records from the Greenland Ice Cores, deep sea sediments and the loess. The climate of MIS 5e experienced 5 stages, corresponding to 5e5, 5e4, 5e3, 5e2 and 5e1 of the GRIP record, respectively, and became warmer and more humid with rapid fluctuations; 5e1 was the period with the best conditions of temperature and precipitation in MIS 5e. Additionally, millennial-scale abrupt climate change events occurred not only in MIS 5, but also within MIS 5e. Compared with the vegetation and environment indicators of the Holocene in the Beijing area, the vegetation, climate and environment of the last interglacial period were superior to those of the Holocene. The abrupt climate events of the last interglacial period and MIS 5e also appeared in the Holocene, and they were controlled by an analogous mechanism and mode. After analyzing the records of millennial-scale abrupt climate change events from this work, ice cores and other sources, it is deduced that climate was unstable in interglacial periods.

Figure 1. Location of the study area.

Table 1. De, TL ages and related data of the 6 control samples.
Evolutionary development of the plant and spore wall

The article provides an overview of the development and structure of spore and pollen walls in the major plant groups and summarises progress in our understanding of the molecular genetics underpinning spore/pollen evolution and development.

Introduction

The colonization of land by plants in the Palaeozoic was a highly significant event in Earth's history, both from an evolutionary point of view and because it fundamentally changed the ecology and environment of the planet (Beerling 2007). Land plants evolved to form crucial components of all modern terrestrial ecosystems through evolutionary adaptations involving changes in anatomy, physiology and life cycle (Waters 2003; Menand et al. 2007; Cronk 2009). Key adaptations include rooting structures, conducting tissues, cuticle, stomata, and sex organs such as gametangia and spores/pollen. Development of a durable spore wall is essential for terrestrialization as it enables the spore to withstand physical abrasion, desiccation and UV-B radiation (Wellman 2004). As part of their life cycle, sexually reproducing embryophytes manufacture either spores, or their more derived homologues, pollen. The major component of the spore/pollen wall proposed to be of primary importance in enabling resistance to the conditions described above is the highly resistant biopolymer sporopollenin (Ito et al. 2007; Cronk 2009). It seems reasonable to hypothesize that colonization of the land by plants was not possible prior to the evolution of the sporopollenin spore wall, and this adaptation is considered to be a synapomorphy of the embryophytes. Additionally, spore walls are not present in the hypothesized embryophyte antecedents, the green algae (Wellman 2004). However, the production of sporopollenin is highly likely to be pre-adaptive as it is present in a number of different algal groups such as the charophyceans, which have been proposed as the sister group to the embryophytes. In certain charophyceans, sporopollenin occurs, but is located in an inner layer of the zygote wall (Graham 1993). Phylogenetic studies and fossil evidence have shown that the most basal living land plants are the paraphyletic 'bryophytes' (Kenrick and Crane 1997; Qui et al. 2006) (Fig. 1). They comprise the liverworts, mosses and hornworts, and their phylogenetic position should allow us to further elaborate the evolutionary changes that facilitated the conquest of land by plants. The moss Physcomitrella patens was the first 'bryophyte' to have its genome sequenced. This genome, through comparisons with angiosperm genomes, is proving to be a valuable tool in experimental studies that attempt to reconstruct genome evolution during the colonization of land (Reski and Cove 2004; Quatrano et al. 2007). In this review, we first outline the nature of spore/pollen wall development in the major plant groups, before considering emerging understanding of the molecular genetics of pollen wall development. The latter includes identification of genes involved in sporopollenin biosynthesis and exospore formation, callose wall formation and tetrad separation. We also report results from BLAST searches of the basal land plant physcomitrella and the clubmoss Selaginella moellendorfii using genes implicated in pollen wall development in arabidopsis.

Spore and pollen wall structure and development

The spore/pollen walls of embryophytes have multiple layers and components that are laid down in a regulated manner during spore/pollen development.
Layers containing the macromolecule sporopollenin are the component enabling the resistance of the spore/pollen wall to the numerous environmental factors that make life on land challenging. Sporopollenin is highly resistant to physical, chemical and biological degradation procedures. Consequently, its precise chemical composition, structure and biosynthetic route have not yet been ascertained (Meuter-Gerhards et al. 1999). Traditional convention asserts that sporopollenin is a polymer of carotenoid esters (Cronk 2009). However, modern purification, degradation and analytical techniques have shown that it is composed of polyhydroxylated unbranched aliphatic units with small quantities of oxygenated aromatic rings and phenylpropanoids (Ahlers et al. 1999; Domínguez et al. 1999).

Fig. 1 (after Qui et al. 2006). The bryophytes are a paraphyletic group comprising three separate lineages. Together with the vascular plants (which include the angiosperms), bryophytes form the embryophytes, which have a sister group relationship to the green algae.

Modes of sporopollenin deposition in spore and pollen walls

The basic mechanisms involved in the formation of the spore wall, and the deposition of sporopollenin in the exospore/exine, have been illuminated by numerous ultrastructural studies performed on extant and fossil species across the plant kingdom (Paxson-Sowders et al. 2001). Blackmore and Barnes (1987) proposed a number of sporopollenin deposition processes apparent in the spore wall. Firstly, they recognized the role of white-line-centred lamellae (WLCL) in this process. The accumulation of sporopollenin on an array of WLCL is regarded as the most primitive method of sporopollenin deposition and has been identified in a number of algal groups and most, if not all, embryophytes (Wellman 2004). These lamellae materialize at the plasma membrane, with sporopollenin polymerizing out onto either side of the white line. They accumulate in a variety of ways to form the spore/pollen wall (Blackmore and Barnes 1987; Blackmore et al. 2000; Wellman 2004). Another mode of exospore/exine formation involves the deposition of sporopollenin from the surrounding cells of the tapetum. Transmission electron microscopy has shown that the tapetal cells possess a highly active secretory system containing lipophilic globules, which are thought to contain the precursors of sporopollenin and are deposited onto the surface, either directly contributing to the exospore/exine or forming extra-exosporal layers (Piffanelli et al. 1998). Blackmore et al. (2000) suggested that a tapetal contribution to the spore wall can take place in a variety of ways, including addition to the layers formed by the WLCL or deposition directly onto WLCL. Studies of pollen wall formation in angiosperms highlight the role that tapetal cells play in supplying nutrients and lipid components to developing microsporocytes and microspores (Scott et al. 1991; Ariizumi et al. 2004; de Azevedo Souza et al. 2009). Interestingly, the most basal extant land plants (liverworts) lack a tapetum, which is acquired in mosses and vascular plants. An alternative deposition process involves centripetal accumulation of sporopollenin onto previously formed layers. Blackmore et al. (2000) noted that exospore formation may be achieved by sporopollenin accumulation below a pre-existing layer, either by WLCL accumulation or by the deposition of granular or unstructured sporopollenin.
A further mode of deposition is observed in seed plants where sporopollenin accumulates within a pre-patterned cell surface glycocalyx referred to as the primexine (Blackmore and Barnes 1987;Blackmore et al. 2000;Wellman 2004), which is essentially an exine precursor. Spore wall development in bryophytes Spore wall development has been studied in all three of the traditional bryophyte groups (reviewed in Lemmon 1988, 1990). In the majority of liverworts, immediately after meiosis, a polysaccharide wall (the spore special wall) is laid down outside the plasma membrane (Brown and Lemmon 1985). In many liverworts, this spore special wall seems to function as a primexine in which the pattern of exospore ornamentation is established (Brown and Lemmon 1993). However, in some liverworts exospore ornamentation appears to be determined by exospore precursors produced by the diploid sporocyte prior to meiosis and formation of the haploid spores (Brown et al. 1986). The exospore develops centripetally (Brown and Lemmon 1993) based on WLCL formed outside the spore cytoplasm. At completion, the entire exospore comprises sporopollenin deposited on WLCL. At maturity, the lamellate structure thus formed is clearly discernible and is highly characteristic of the liverwort exospore. Liverworts lack a tapetum and there is therefore no input from this source. The innermost layer of fibrillar intine is the final wall layer to be formed (Brown and Lemmon 1993). Studies of spore wall development in hornworts are limited. As with liverworts, a spore special wall is formed after meiosis and functions as a primexine in which the exospore is set down. It was initially thought that the exospore formed in the absence of WLCL, but Taylor and Renzaglia have recently demonstrated their presence (W.A. Taylor, University of Wisconsin-Eau Claire, USA, pers. comm., 2011). Recent analyses of Phaeomegaceros fimbriatus have shown that the mature spore wall has a thin perine-like outer layer, but this represents the remnants of the spore mother cell wall rather than extra-exosporal material derived from a tapetum (Villarreal and Renzaglia 2006). Three types of spore wall have been recognized in mosses: Bryopsida type, Andreaeidae type and Sphagnidae type (Brown and Lemmon 1990). All three of these types appear to form in the absence of a spore special wall. Bryopsida-type spore walls are homogeneous except for an inconspicuous foundation layer (Fig. 2). This foundation layer forms first via sporopollenin accumulation on WLCL. Subsequently, the homogeneous exospore layer is laid down outside the foundation layer in a centrifugal manner. This layer is probably mainly extrasporal in origin. Sometimes additional homogeneous material is also deposited inside the foundation layer. This layer is almost certainly derived from the spore. Following the accumulation of homogeneous material, the spores are coated by an additional extraexosporal layer, referred to as the perine or perispore, which is derived from the tapetum. Finally, the intine forms. Spore wall development in the Andreaeidae type is unique among mosses in that they have a spongy exospore that appears to form in the absence of WLCL (Brown and Lemmon 1984). By studying Andreaea rothii, Brown and Lemmon (1984) demonstrated that the exospore is instead initiated as discrete homogeneous globules within the coarsely fibrillar network of the spore mother cell. These globules accumulate and form an irregular layer with numerous interstitial spaces. 
The sequence of spore wall layer development is essentially the same as that of other mosses and the mature wall consists of an inner intine, a spongy exospore and an outer perine (Brown and Lemmon 1984). Sphagnidae-type moss spore walls are more complex than those of the other mosses and consist of five layers (Brown et al. 1982). Unlike other mosses, the exospore of the Sphagnidae type comprises two layers: an inner lamellate layer (A-layer) and a thick homogeneous outer layer (B-layer). In addition to the exospore, there is an intine, a unique translucent layer and the outermost perine. The A-layer is the first to form and does so by sporopollenin accumulation on WLCL, and develops evenly around the young spore immediately after meiosis. The homogeneous B-layer is deposited outside the A-layer. Overlying the exospore is a translucent layer that consists of unconsolidated exospore lamellae in a medium of unknown composition. The tapetally derived perine is deposited on top of this unique layer. The study of spore wall development in Sphagnum lescurii by Brown et al. (1982) suggests that the ontogeny of the wall layers is not strictly centripetal.

Spore wall development in pteridophytes

Spore walls have been investigated in a number of pteridophyte species representing all of the major pteridophyte groups (reviewed in Lugardon 1990; Tryon and Lugardon 1991). Spore wall development is well understood in the homosporous lycopsid Lycopodium clavatum. Shortly after meiosis, the plasma membrane of the sporogenous cell folds into a pattern that later becomes the reticulate spore sculpture. Small WLCL form on the plasma membrane and accumulate in a centripetal fashion, forming the greater part of the exospore. After the main lamellate part of the exospore is formed, an inner granular layer, possibly derived from the spore cytoplasm, is deposited. In some Lycopodium species there are no extra-exosporal layers, whereas in others a thin extra-exosporal layer is deposited after the completion of the exospore (Tryon and Lugardon 1991). Spore structure and development in heterosporous lycopsids differ between microspores and megaspores. In the clubmoss selaginella, microspores possess an exospore consisting of two layers (Fig. 3). The thin inner layer is the first to develop and comprises imbricate lamellae that are formed on WLCL in a centripetal direction (Tryon and Lugardon 1991). The outer layer starts to form only once the inner layer is complete. Some selaginella species may also develop a thin perispore or a paraexospore. In the microspores of the heterosporous lycopsid Isoetes japonica, a large gap develops between the two exospore layers. The outer exospore layer is regarded as a paraexospore as it begins to form before the inner exospore, consists of similar sporopollenin, and is completed at the same time as the inner exospore. Selaginella megaspore walls contain two layers of similar thickness (Morbelli 1995). The inner and outer layers consist of lamellae and poorly segregated components, respectively. The inner layer does not thicken during exospore development and a dense basal layer is formed by the lamellae. In contrast, the outer layer increases significantly in thickness due to self-assembly (Hemsley et al. 1994, 2000; Gabarayeva 2000). During the final stages of sporogenesis, the endospore forms between the plasma membrane and the exospore.
In Isoetes, the megaspore wall is similar to that of selaginella in terms of development and structure, consisting of two layers, with the formation of the outer layer commencing prior to that of the inner layer. Substantial quantities of silica are deposited within and on top of the outer layer before the exospore is completed. Finally, the endospore is laid down between the plasma membrane and the exospore. The exospore in homosporous ferns develops centrifugally and is once again bilayered. The inner layer acts as a substructure and consists of varying numbers of fused sheets (extensive interconnected laminae) that form by sporopollenin accumulation on WLCL. The homogeneous outer layer is considerably thicker and contains thin radial fissures and small cavities. An extra-exosporal layer (perispore) forms once the exospore is complete and is deposited from the decaying tapetum. Spore wall development in heterosporous ferns is similar to that observed in homosporous ferns, and is also similar in both microspores and megaspores. In sphenopsids the spore walls appear to be highly derived (Lugardon 1990), and observations of Equisetum arvense have shown that four layers are present in the form of an exospore, an endospore, a middle layer and pseudoelators (Uehara and Kurita 1989). The exospores comprise inner and outer exospores. The broad and homogeneous inner exospore forms first by way of platelike structures accumulating on the plasma membrane. The outer exospore is then formed by the deposition of granular material on the inner exospore and is similarly wide and homogeneous. Once exospore formation is complete, the middle layer forms in the gap between the exospore and the plasma membrane. The pseudoelators are the next structure to form and consist of two layers. The inner layer comprises longitudinal microfibrils during the early stages of development but eventually becomes homogeneous. The outer layer is also homogeneous and is formed by granules that are released from vesicles in the plasmodial cytoplasm. The pseudoelators are connected to the spore, by way of the middle layer, at the aperture. The endospore is the final component of the wall to form on the inside of the exospores (Taylor 1986;Uehara and Kurita 1989). Pollen wall development in gymnosperms Although differences in pollen wall structure and development are evident in different extant and extinct gymnosperm groups, the main ontogenetic elements appear to be homologous (summarized in Lugardon 1994; Wellman 2009). The pollen mother cell undergoes meiosis to form four haploid microspores. Subsequent development of the exine consists of a number of stages. Firstly, a callose wall forms around the pollen mother cell and subsequently extends around each of the microspores. Next, a matrix develops around each microspore upon which the fibrillar microspore surface coat and later the sexine (consisting of tectum and infratectum components) pattern is established (Zavada and Gabarayeva 1991). The microspore surface coat is deposited between the surface of the microspore and the surrounding tetrad wall prior to the formation of the wall components. This layer is regarded as being equivalent to the primexine in angiosperms. The sexine then begins to form on and within the microspore surface coat. The nexine (inner pollen exine wall) lamina is then formed below this coat; therefore, the sexine is partly developed when the nexine begins to develop. 
The exine as a whole appears to form in a centripetal direction from the outside inwards (Lugardon 1994). Finally, an intine is deposited on the inside of the pollen exine.

Pollen wall development in angiosperms

Pollen walls in angiosperms typically consist of an outer exine composed of sporopollenin and an inner intine composed of cellulose and pectin (Fig. 4) (Paxson-Sowders et al. 1997; Morant et al. 2007). Models of development have been proposed based on observations on numerous species, including Lilium and arabidopsis (e.g. Suzuki et al. 2008). Similar processes have been described in both these species. Once again, prior to meiosis, the pollen mother cell is surrounded by a callose special cell wall (Blackmore et al. 2007). Immediately after meiosis, four microspores derived from the pollen mother cell form a tetrad. A callose special wall surrounds the microspores (Blackmore et al. 2007). A cellulose primexine then forms between the plasma membrane and callose wall of each microspore. Both the callose wall and primexine are deposited at the surface of the microspore through processes mediated by the plasma membrane (Blackmore et al. 2007). A section of the primexine is then adapted to form column-like structures called the probaculae, upon which sporopollenin, secreted by the microspore, will eventually accumulate and polymerize. Sporopollenin deposition and accumulation extend the probaculae, which form the baculae and the tectum (Heslop-Harrison 1963, 1968). The callose wall then degrades and the developing baculae and tectum are exposed to the fluid of the locule and receive sporopollenin secreted by the tapetum. Wall formation is complete when the nexine and intine layers are formed and the primexine recedes and disappears. The mature pollen grain is then coated by tryphine and pollenkitt, which are synthesized by the tapetum (Dickinson and Lewis 1973; Blackmore et al. 2007).

Summary

While the basic components associated with spore/pollen wall development and structure described above can be localized in different wall regions and influence wall development at different stages across the principal land plant groups, their presence in the majority or all of these groups suggests that their involvement in wall development and sporogenesis as a whole is a signature of embryophytes. The main differences concern the absence, presence and role of callose, and also the mode of sporopollenin deposition. Callose would appear to play a major role in pollen wall development in angiosperms, where a callose wall surrounds the tetrad and serves as a template for exine development. The role of callose in wall development in other groups is less well defined. In some groups such as pteridophytes, with the exception of selaginella, callose has yet to be identified during sporogenesis and is currently thought to be absent. In some bryophytes (Hepaticopsida: Anthoceros, Geothallus, Riccia; Bryopsida: Mnium) (Waterkeyn and Bienfait 1971), callose has been identified around the spore mother cell but its link to wall development, if any, is not well understood. In this instance, it could be the case that callose is relictual, as it is involved in the reproductive systems of some algal groups (Gabarayeva and Hemsley 2006). Certain modes of sporopollenin deposition are not confined to individual plant groups and different modes have been observed on numerous occasions in a single species.
For example, WLCL, while not a constant feature, have been observed in species belonging to all phylogenetic levels of land plants. However, the accumulation of sporopollenin within a pre-patterned microspore surface coat, or primexine, is seemingly confined to gymnosperms and angiosperms.

Molecular genetics of pollen wall development

In recent years there has been a surge in papers describing genes involved in pollen wall development. However, our understanding of the molecular genetics of spore/pollen development remains poor due to the complexity of the developmental process and problems in pinpointing the actual function of the genes involved. Furthermore, research has been confined to particular model angiosperms (Table 1 and Fig. 5), with little or no information on gymnosperm pollen or the spores of 'lower' land plants. This raises the question of whether similar genes are involved in development of the simple walls of 'lower' land plant spores and the more derived pollen walls of the gymnosperms and angiosperms. However, this research is now beginning to incorporate model plant species from more primitive groups, such as the bryophytes. This extended research will enable the comparison of the molecular genetics of spore/pollen wall development in angiosperms and more primitive plants. The results may allow us to assess how conserved the genes and genetic networks involved in spore/pollen wall development are. We begin by reviewing what is known of the molecular genetics of pollen wall development in the angiosperms.

Arabidopsis genes implicated in sporopollenin biosynthesis and exine formation

A number of arabidopsis genes associated with the biosynthesis of exine encode proteins with sequence homology to enzymes involved in fatty acid metabolism (Dobritsa et al. 2009). Aarts et al. (1997) observed expression of the MALE STERILITY 2 (MS2) gene in the tapetum of wild-type plants at, and shortly after, the release of microspores from tetrads and noted that MS2 mutants produced pollen grains that lacked an exine layer. The exine layer had been replaced by a thin layer of unknown composition. MS2 encodes a protein with sequence similarity to long-chain fatty acyl reductases, and expression of the MS2 protein in bacteria leads to the increased synthesis of fatty alcohols (Doan et al. 2009). Taken together, these data suggest that an MS2-linked enzymatic pathway is required for the synthesis of sporopollenin (Aarts et al. 1997; Ariizumi et al. 2008; Dobritsa et al. 2009). Another gene implicated in exine formation is YORE-YORE (YRE)/WAX2/FACELESS POLLEN1 (FLP1). Ariizumi et al. (2003) suggested that this gene encodes a transporter or catalytic enzyme that is involved in wax synthesis in stems and siliques, in the tryphine and in sporopollenin synthesis. As with MS2, the pollen exine in YRE/FLP1 mutants is poorly constructed and easily damaged, suggestive of defective sporopollenin. Expression analyses in the same study suggest that FLP1 is expressed in the tapetum, which is supported by the fact that the FLP1 mutant phenotype is sporophytically controlled (Ariizumi et al. 2003). In addition, Rowland et al. (2007) demonstrated that the ECERIFERUM 3 (CER3) gene encodes a protein of unknown function identical to YRE/WAX2/FLP1 and is therefore allelic to YRE/WAX2/FLP1. Morant et al. (2007) showed that the arabidopsis cytochrome P450 enzyme CYP703A2 is also necessary for the synthesis of sporopollenin.
The CYP703 cytochrome P450 family is specific to embryophytes and each plant species contains a single CYP703 (Morant et al. 2007). The exine layer in CYP703A2 knock-out mutants is significantly underdeveloped. Sporopollenin also appeared to be absent, as the fluorescent layer around the pollen associated with the presence of phenylpropanoid units in sporopollenin was absent in CYP703A2 mutant plants (Morant et al. 2007). Morant et al. (2007) demonstrated that lauric acid and in-chain hydroxylated lauric acids are, respectively, the in planta substrate and products of this enzyme. These are important building blocks in the synthesis of sporopollenin and facilitate the formation of ester and ether linkages with phenylpropanoid units. Furthermore, the same study showed that CYP703A2 is expressed in the anthers of developing arabidopsis flowers, with initial expression detectable at the tetrad stage in the microspores and the tapetum (Morant et al. 2007), consistent with a role in exine formation. Dobritsa et al. (2009) described another cytochrome P450, CYP704B1, and demonstrated that this gene is essential for exine development. CYP704B1 mutants produce pollen walls that lack a normal exine layer. The exine layer was replaced with a thin layer of material and an irregular distribution of aggregates that may have been sporopollenin. The pollen walls also exhibited a characteristic striped surface, unlike the reticulate pattern displayed by the wild type, which Dobritsa et al. (2009) designated the zebra phenotype. It has also been shown that heterologous expression of CYP704B1 in yeast catalyses ω-hydroxylation of long-chain fatty acids, consistent with a role in sporopollenin synthesis (Dobritsa et al. 2009). Dobritsa et al. (2009) have suggested that these ω-hydroxylated fatty acids, in concert with the in-chain hydroxylated lauric acids whose formation is catalysed by CYP703A2, may serve as vital monomeric aliphatic building blocks in the formation of sporopollenin. Analyses of the genetic relationships between CYP704B1, CYP703A2 and MS2 (which, as described above, encodes a fatty acyl reductase), along with expression analyses and observation of similar zebra phenotypes in all three mutants, indicate that these genes are co-expressed and involved in the same pathway within the sporopollenin synthesis framework (Dobritsa et al. 2009). In addition, an orthologue of CYP704B1 (BnCYP704B1) has recently been identified in Brassica napus, and mutants in this gene exhibit defective exine layers (Yi et al. 2010). Another gene reported to participate in exine formation, ACOS5, has recently been described (de Azevedo Souza et al. 2009). This encodes a fatty acyl-CoA synthetase with broad in vitro preference for the medium-chain fatty acids required in tapetal cells for sporopollenin monomer synthesis. Mutations in ACOS5 significantly compromise the development of the pollen wall, which appears to lack sporopollenin and exine. The defect in pollen formation in ACOS5 mutants coincides with the deposition of exine at the unicellular microspore stage (de Azevedo Souza et al. 2009). Additionally, after analyses of ACOS5 expression in developing anthers, de Azevedo Souza et al. (2009) proposed that it is also involved in the same biochemical pathway as the CYP703A2, CYP704B1 and MS2 genes. The RUPTURED POLLEN GRAIN1 (RPG1) gene, which encodes a plasma membrane protein, and the NO EXINE FORMATION1 (NEF1) gene, which encodes a plastid integral membrane protein, are both required for primexine development (Ariizumi et al. 2004; Guan et al. 2008).
Guan et al. (2008) revealed that exine pattern formation in RPG1 mutants is defective, as sporopollenin is randomly distributed over the surface of the pollen grain. Primexine formation of microspores in RPG1 mutants is abnormal at the tetrad stage, which results in imperfect deposition of sporopollenin on the microspores (Guan et al. 2008). RPG1 mutant plants experience microspore rupture and cytoplasmic leakage, suggesting that cell integrity had been impaired in the microspores. The same study demonstrated that RPG1 is strongly expressed in the tapetum and the microspores during male meiosis (Guan et al. 2008). Ariizumi et al. (2004) showed that NEF1 mutants exhibited similarly defective primexine and that, although sporopollenin was present, it was not deposited onto the plasma membrane of the microspore because of the lack of normal primexine. Ariizumi et al. (2004) tentatively suggest that NEF1 is expressed in the tapetum and is sporophytically controlled. Additionally, it was proposed that NEF1 is likely to be involved in exine formation at earlier developmental stages than other exine formation genes, such as MS2 and FLP1, since the exine is more poorly developed in NEF1 plants (Ariizumi et al. 2004). Suzuki et al. (2008) also identified a number of genes involved in the construction of exine and pollen development in general. They isolated 12 KAONASHI mutants (KNS1-KNS12), which were found to be recessive and thus likely to affect pollen development sporophytically. The 12 mutants were categorized into four types. Type 3 (KNS5-KNS10) mutants displayed abnormal tectum formation on the pollen surface, and these genes therefore appear to be required either for creating primordial tectum (onto which sporopollenin is deposited) in the space between the primexine and the callose wall, or for depositing sporopollenin itself. Additionally, the type 2 mutant (KNS4) exhibits a thin exine layer, mostly due to shortened baculae. It is proposed that baculae extension is closely linked to the thickening of primexine; therefore, KNS4 is likely to be a novel gene that regulates the thickening of the primexine layer. Recently, Quilichini et al. (2010) proposed that ATP-BINDING CASSETTE G26 (ABCG26) plays a crucial role in exine formation. Abcg26-1 mutants lack an exine layer, and expression studies showed that ABCG26 is transiently and locally expressed in the tapetum post meiosis. Quilichini et al. (2010) suggest that ABCG26 transports sporopollenin precursors across the tapetum plasma membrane to the anther locule for polymerization on the surface of the developing microspores. Other genes that have recently been associated with a defective exine include LESS ADHESIVE POLLEN 5/POLYKETIDE SYNTHASE B (LAP5/PKSB) and LESS ADHESIVE POLLEN 6/POLYKETIDE SYNTHASE A (LAP6/PKSA), which are also specifically and transiently expressed in the tapetum during microspore development. Mutant plants compromised in the expression of LAP5/PKSB and LAP6/PKSA exhibited significantly defective exine layers, and a double LAP5/PKSB LAP6/PKSA mutant appeared to completely lack an exine layer. These two genes are co-expressed with ACOS5, and recombinant LAP5/PKSB and LAP6/PKSA proteins were able to generate tri- and tetraketide α-pyrone compounds in vitro from a wide range of potential ACOS5-generated fatty acyl-CoA starter substrates via condensation with malonyl-CoA. These compounds would therefore appear to be required for sporopollenin biosynthesis.
Additionally, two closely related genes, TETRAKETIDE α-PYRONE REDUCTASE 1 (TKPR1/DRL1) and 2 (TKPR2/CCRL6), encode oxidoreductases that have been found to be active on the tetraketide products produced by LAP5/PKSB and LAP6/PKSA. TKPR activity reduces the carbonyl function of the tetraketide α-pyrone compounds synthesized by LAP5/PKSB and LAP6/PKSA, and together with the activities associated with LAP5/PKSB, LAP6/PKSA and ACOS5, forms a biosynthetic pathway that ultimately produces hydroxylated α-pyrone compounds, potential precursors for sporopollenin.

Arabidopsis transcription factors involved in sporopollenin and exine formation

A number of transcription factors participating in the development of exine have been described. AtMYB103/MS188 is a MYB transcription factor that is specifically expressed in the anthers and trichomes of arabidopsis (Li et al. 1999; Higginson et al. 2003). Zhang et al. (2007) have shown that AtMYB103/MS188 directly regulates the expression of the previously described exine formation gene MS2 and the callase-related A6 gene. Knock-out mutants of AtMYB103/MS188 resulted in early tapetal degeneration and abnormal microspores. Additionally, expression of the MS2 gene was not detected in the anthers of the AtMYB103/MS188 mutants (Zhang et al. 2007). The MALE STERILITY1 (MS1)/HACKLY MICROSPORE (HKM) gene, encoding a leucine zipper-like, PHD-finger motif transcription factor, is also involved in tapetum function (Ariizumi et al. 2005; Vizcay-Barrena and Wilson 2006; Ito et al. 2007; Yang et al. 2007). Phenotypic analysis of MS1 mutants by Ito et al. (2007) indicated that MS1 is required for transcriptional regulation of genes involved in primexine formation, sporopollenin synthesis and tapetum development. Lack of MS1 expression results in changes in tapetal secretion and exine structure, with the appearance of autophagic vacuoles and mitochondrial swelling, suggesting that the tapetum is broken down by necrosis rather than by apoptosis as observed in the wild type (Vizcay-Barrena and Wilson 2006; Yang et al. 2007). Yang et al. (2007) further demonstrated that MS1 is expressed in the tapetal cells in a developmentally regulated manner between the late tetrad stage and microspore release. Another transcription factor involved in exine formation has been identified by Gibalová et al. (2009), who demonstrated that AtbZIP34 mutants exhibit defects in exine structure. The exine layer is wrinkled, and the baculae and tecta are deformed. Additionally, 50% of mutant pollen exhibited a wrinkled intine layer. Despite these abnormalities, high levels of pollen abortion or male sterility were not observed (Gibalová et al. 2009). Transcriptomic analyses revealed that expression of the proposed primexine development gene, RPG1, is significantly downregulated in AtbZIP34 mutant pollen. Given the expression profiles of both genes, it is possible that RPG1 expression is regulated by AtbZIP34 (Gibalová et al. 2009). Analyses also suggested sporophytic and gametophytic roles for AtbZIP34 in exine and intine formation. The observation that many of the genes described in the previous sections are predominantly expressed sporophytically is somewhat at odds with the fact that exine development mostly occurs at the surface of individual microspores after meiosis. Suzuki et al. (2008) propose that this apparent contradiction may be explained by many of these genes being expressed in pollen mother cells, so that the relevant mRNAs or proteins are inherited by the derived microspores.
Arabidopsis genes associated with probaculae formation

At present, five arabidopsis genes have been specifically associated with the formation of probaculae, an important component in the exine development process. The DEFECTIVE IN EXINE1 (DEX1) gene encodes a novel membrane protein that is required for anchoring sporopollenin to the surface of the microspores and is implicated in probacula formation (Paxson-Sowders et al. 1997, 2001). Sporopollenin synthesis still takes place in DEX1 mutants, but primexine development is delayed and ultimately reduced, which alters membrane formation and therefore the deposition of sporopollenin. Spacers do not form in the primexine, which results in sporopollenin being randomly deposited on the plasma membrane (Paxson-Sowders et al. 2001). Additionally, sporopollenin does not appear to be anchored to the microspore and forms bulky aggregates on the developing microspore and locule walls, and the pollen wall does not form, which results in pollen degradation (Paxson-Sowders et al. 2001). Ariizumi et al. (2008) suggested that the TRANSIENT DEFECTIVE EXINE1 (TDE1)/DE-ETIOLATED2 (DET2) gene is also involved in probacula development. Specifically, they proposed that TDE1/DET2 is involved in the synthesis of brassinosteroids, hormones purported to control the rate or efficiency of the initial process of exine formation. Primexine synthesis is defective in TDE1/DET2 mutant plants, which ultimately fail to produce probacula at the tetrad stage (Ariizumi et al. 2008). Additionally, globular sporopollenin is haphazardly deposited onto the microspore at the early uninucleate microspore stage (Ariizumi et al. 2008). As with DEX1 mutants, sporopollenin apparently failed to anchor to the plasma membrane of the microspore and instead aggregated on the locule wall and in the locule at the uninucleate microspore stage (Paxson-Sowders et al. 2001; Ariizumi et al. 2008). However, despite these defects, reticulate exine was clearly formed at the later stage in TDE1/DET2 mutants, in contrast to other mutants displaying primexine defects, such as DEX1, which always fail to produce normal exine at the later stages. This suggests that mutations in TDE1/DET2 do not result in defects at critical stages of exine development (Ariizumi et al. 2008). Expression analysis also demonstrated that brassinosteroids may be synthesized in developing microspores. The same analysis also showed that TDE1/DET2 mutations did not affect the expression of genes implicated in exine development. This suggests that brassinosteroids support exine development via a distinct pathway (Ariizumi et al. 2008). The KNS2, 3 and 12 genes, designated type 4 genes by Suzuki et al. (2008), have also been associated with probacula formation. Type 4 mutants were shown to exhibit abnormal positioning of baculae, which were densely distributed. This suggests that the type 4 genes govern the position of probacula formation either by forming undulations on the microspore plasma membrane at the tetrad stage or by forming spacers.
Additionally, Suzuki et al. (2008), using map-based cloning, revealed that one of the type 4 genes, KNS2, encodes sucrose phosphate synthase, which is proposed to be potentially involved in primexine synthesis or callose wall formation, both of which are known to be important for the positioning of probaculae. Further studies are required to specifically determine the time and location of expression of the KNS type 4 genes.

Arabidopsis genes connected to intine formation

Recently, Li et al. (2010) have proposed that the fasciclin-like arabinogalactan protein gene FLA3 is involved in the development of the intine layer by playing a role in the deposition of cellulose. The downregulation of FLA3 via RNAi results in a thinned intine layer and the production of 50% non-viable pollen grains, many of which display a wrinkled or shrunken phenotype. Expression studies showed that FLA3 is specifically expressed in pollen tubes and pollen grains, and is localized to the cell membrane (Li et al. 2010). Other arabidopsis genes have also been implicated in intine formation, including the reversibly glycosylated peptide genes, RGP1 and RGP2. Pollen grains in double-knock-out plants of RGP1 and RGP2 exhibit unusually enlarged vacuoles and a poorly defined intine layer (Drakakaki et al. 2006).

Arabidopsis genes implicated in callose wall formation

To date, three arabidopsis genes have been associated with callose wall formation. Dong et al. (2005) and Nishikawa et al. (2005) have demonstrated that the CALLOSE SYNTHASE 5 (CALS5)/LESS ADHERENT POLLEN 1 (LAP1) gene encodes a callose synthase essential for callose wall formation. CALS5/LAP1 mutants lack callose on the cell wall of pollen mother cells, tetrads and microspores, which ultimately results in the development of sterile pollen due to the degeneration of microspores (Dong et al. 2005). Additionally, exine structure in the mutant plants was severely deformed, affecting the baculae and tecta structure, and tryphine was haphazardly deposited as globular structures (Dong et al. 2005). This implies that the callose wall is vitally important for the formation of a properly sculpted exine (Dong et al. 2005). Expression analyses have produced varied results with regard to CALS5/LAP1 and suggest that the gene is expressed in either pollen mother cells or pollen tetrads, or possibly both cell types (Nishikawa et al. 2005). The KNS1 and KNS11 genes constitute the type 1 genes as described and classified by Suzuki et al. (2008). Type 1 mutant plants exhibit pollen grains that display a highly collapsed exine structure in which the tecta disappear and the baculae deform into globular protrusions. Additionally, mature pollen grains of both mutants were reduced in size and in number, and were distorted in shape. This phenotype closely resembles the pollen phenotype of CALS5/LAP1 mutants described above (Dong et al. 2005; Nishikawa et al. 2005; Suzuki et al. 2008). This resemblance, along with the recessive nature of the type 1 genes, suggests that KNS1 and KNS11 are expressed in pollen mother cells and are important in synthesizing or secreting callose.

Arabidopsis genes involved in tetrad separation

The QUARTET (QRT) genes have been identified as being required for pollen separation during normal pollen development (Preuss et al. 1994; Rhee and Somerville 1998; Francis et al. 2006). In wild-type arabidopsis pollen, degradation of the pollen mother cell walls takes place, which releases the individual microspores as single pollen grains (Francis et al. 2006).
Mutations in any of the QRT1, QRT2 or QRT3 genes cause the outer walls of the microspores to remain fused following meiosis, resulting in pollen grains being released as tetrads (Preuss et al. 1994; Rhee and Somerville 1998; Francis et al. 2006). Rhee and Somerville (1998) have demonstrated that the enzymatic removal of callose at the tetrad stage is not sufficient to release the microspores. In QRT1 and QRT2 mutants, pectic components were detectable at the time of tetrad separation, which was not the case in the wild type. This suggests that the persistence of pectin in the pollen mother cell wall is associated with tetrad separation failure (Rhee and Somerville 1998). Pollen mother cell primary cell walls have been proposed to play a significant part in cell–cell adhesion mechanisms (Rhee and Somerville 1998). The pectins of the primary cell wall have been shown to consist mostly of homogalacturonan, a polymer of α-1,4-linked galacturonic acid (GalUA), together with rhamnogalacturonan I and rhamnogalacturonan II (branched polymers of GalUA, Ara and Rha) (Brett and Waldron 1996; Tucker and Seymour 2002). As pectin is synthesized, the backbone of GalUA is in a methylesterified state that can then be demethylesterified by pectin methylesterases and cleaved by endo-polygalacturonases, which results in loosening of the cell wall (Schols and Voragen 2002; Francis et al. 2006). QRT1 and QRT2 have been proposed to encode pectin methylesterases (Francis et al. 2006). Expression analysis has shown that QRT1 is expressed shortly after meiosis is complete (Francis et al. 2006). Additionally, Rhee et al. (2003) have identified QRT3 as an endo-polygalacturonase that degrades the pectic polysaccharides of pollen mother cells. It has been demonstrated that the QRT3 gene is specifically and transiently expressed in tapetal cells during microspore release from the tetrad (Rhee et al. 2003). Immunohistochemical localization of QRT3 suggests that the protein it encodes is secreted from the tapetum during the early stages of microspore development (Rhee et al. 2003). Genes associated with callose wall degradation have, to date, not been definitively identified. Frankel et al. (1969) and Stieglitz and Stern (1973) demonstrated that the tetrad callose wall is degraded by β-1,3-glucanase activity secreted from the tapetal cells. While a number of candidate β-1,3-glucanase-encoding genes have been identified, none has been confirmed as a callase (Hird et al. 1993). However, Hird et al. (1993) have proposed that the A6 gene may encode a component of the callase enzyme complex due to the fact that it is tapetum-specific, has strong sequence similarity to other β-1,3-glucanases, and is temporally expressed at peak levels when the plant normally expresses callase. Future identification of A6 mutant plants is needed to confirm the gene as a callase. Additionally, real-time reverse transcriptase–polymerase chain reaction analysis conducted by Zhang et al. (2007) has suggested that A6 is regulated by the AtMYB103/MS188 gene.

Summary

The genes and associated mutants described above have thus far provided clues with regard to wall development in arabidopsis, particularly with respect to exine formation and sporopollenin biosynthesis. They suggest that wall development is controlled by both the diploid sporophyte and the haploid microspores, and have identified the sporophytic tapetum, in addition to the microspores themselves, as an important site for sporopollenin biosynthesis.
However, large gaps in our understanding remain regarding the genetic network and biosynthetic route responsible for the formation of the pollen wall.

Flowering plant homologue genes for spore wall development are present in the moss P. patens and the lycophyte S. moellendorffii

Research into the molecular genetics of spore wall development in basal plants has thus far been extremely limited. Schuette et al. (2009), using immuno-light and immuno-electron microscopy, identified the presence of callose in the spores of physcomitrella, where it was deposited in the inner exospore layer near the expanded aperture region (a local expansion of the intine layer) at the proximal pole, suggesting that callose is involved in aperture expansion during wall development (Fig. 2). It is proposed that a CALS5 homologue is present in the physcomitrella genome and is involved in spore wall development (Schuette et al. 2009). However, expression studies of the CALS5 homologue, required to further address this proposition, have yet to be undertaken in physcomitrella. In addition to CALS5, the majority of the other arabidopsis genes described in this review have been annotated and their protein sequences are available on The Arabidopsis Information Resource (TAIR) website (Website 1). We have used these protein sequences to search for homologous genes in the physcomitrella (Website 2) and selaginella genomes (Websites 3 and 4). The results suggest that homologues of all known arabidopsis pollen wall-associated genes are present in the physcomitrella genome, with the number of proposed homologues ranging from one for DEX1 and TDE1/DET2 to in excess of 50 for CYP703A2 and AtMYB103/MS188. Similar results are observed in selaginella, with homologues of all but one (QRT3) of the arabidopsis pollen wall-associated genes present in its genome, ranging from two homologues for DEX1, MS1 and NEF1 to more than 50 once again for CYP703A2, AtMYB103/MS188 and the callase-related A6 gene (Table 2). These results indicate that the vast majority of the pollen wall-associated genes belong to multigene families, and therefore, as with almost every developmental, signalling and metabolic context (Kafri et al. 2009), there is a high potential for genetic redundancy. Analyses of large collections of expressed sequence tag sequences have suggested that physcomitrella is a palaeopolyploid, with a whole-genome duplication having occurred between 30 and 60 million years ago (Rensing et al. 2007), which may account for the presence of some of these gene duplications. While the selaginella sequence data do not indicate any ancient whole-genome duplication, the results here show that for most of the pollen wall-associated genes numerous copies (greater in number than in the physcomitrella genome) are also present in the selaginella genome. This suggests the occurrence of many small-scale duplication events and a greater level of gene redundancy and/or number of pseudogenes in selaginella compared with physcomitrella.

Table 2. TBLASTN results of searches of the genomes of physcomitrella and selaginella with arabidopsis pollen wall genes. An e-value threshold of 1e−4 was used as an initial filter to determine the number of homologues. Identity percentages (≥30%) and BLAST scores were then used to filter numbers further. Best match is defined as the BLAST hit with the highest BLAST score.

Conclusions and forward look

The presence of pollen wall-associated genes in physcomitrella and selaginella provides a strong incentive for further study in this area.
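The screening protocol summarized in the Table 2 legend is straightforward to express in code. The sketch below is our illustration of that filtering logic, not the authors' actual pipeline: it assumes TBLASTN was run with BLAST's standard 12-column tabular output (-outfmt 6), and the input file name is hypothetical.

```python
# Sketch of the Table 2 filtering protocol: e-value <= 1e-4 as an initial
# filter, identity >= 30%, and "best match" = hit with the highest BLAST
# (bit) score. Column positions follow BLAST's -outfmt 6 convention.
import csv
from collections import defaultdict

EVALUE_CUTOFF = 1e-4   # initial filter from the Table 2 legend
MIN_IDENTITY = 30.0    # percent identity threshold (>=30%)

def screen_hits(path):
    """Return passing hits per arabidopsis query and the best match for each."""
    hits = defaultdict(list)
    with open(path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, subject = row[0], row[1]
            identity, evalue, bitscore = float(row[2]), float(row[10]), float(row[11])
            if evalue <= EVALUE_CUTOFF and identity >= MIN_IDENTITY:
                hits[query].append((subject, identity, evalue, bitscore))
    # "Best match" is the hit with the highest bit score for each query.
    best = {q: max(hs, key=lambda h: h[3]) for q, hs in hits.items()}
    return hits, best

if __name__ == "__main__":
    # "tblastn_physcomitrella.tsv" is a hypothetical output file name.
    hits, best = screen_hits("tblastn_physcomitrella.tsv")
    for query in sorted(hits):
        print(f"{query}: {len(hits[query])} putative homologues; "
              f"best match {best[query][0]}")
```

Counting the surviving hits per query gives homologue numbers of the kind reported in Table 2, with the caveat that hit counts are only a proxy for gene number and, as noted above, say nothing by themselves about redundancy or pseudogenes.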
Survival in a terrestrial environment has been proposed to involve the evolutionary acquisition of a number of traits, including a specialized spore wall (Wellman 2004;Cronk 2009). As described above, the morphology of moss and lycopsid spores is markedly simpler yet bears similarities to that of 'higher' plant pollen. Can the development of a specialized spore wall be traced to the recruitment of a few key gene products for a novel function, or did it involve a more gradual accretion of cell wall constituents in novel architectures? To what extent were the gametophyte and sporophyte involved in the deposition of the spore wall in early evolutionary history, and can this be inferred from the study of extant bryophytes and other branches of the plant evolutionary tree? The finding that homologues of angiosperm pollen cell wallassociated genes are easily identifiable in extant bryophytes and lycopsids opens the door to functional analyses of these genes. Of course, the potentially high levels of redundancy observed in physcomitrella and selaginella present challenges to a functional approach, but expression studies will help direct future research in this area and circumvent some of these difficulties. The use of high-throughput sequencing strategies and/or microarray approaches will allow researchers to identify homologues that might potentially play a role in spore wall biogenesis, and the development of techniques for gene knock-outs and gene swap experiments will allow testing of hypotheses on the conservation of gene function in the spore wall, as has already been achieved in the area of leaf, root and stomata EvoDevo studies (Harrison et al. 2005;Menand et al. 2007;Chater et al. 2011;Ruszala et al. 2011). These studies strongly suggest that true spore/pollen wall gene homologues are likely to exist in lower land plants, particularly those genes associated with wall structures that are present across embryophytes. It is also reasonable to suggest that genes associated with more specialist wall elements, such as the primexine which is only present in higher land plant groups, are less well conserved. Although a focus in this area has been on bryophytes, such as physcomitrella (due to the availability of appropriate genetic resources), the development of novel experimental systems, such as selaginella and the liverwort Marchantia polymorpha, will allow a deeper insight into spore evolution and, more broadly, enable us to better assess whether the key mechanisms required for plant terrestrialization have been conserved over 400 million years of land plant evolution. Sources of funding The work was funded by the Natural Environment Research Council (UK). Contributions by the authors S.W. made the greatest contribution to this work and is therefore first author.
2014-10-01T00:00:00.000Z
2011-10-07T00:00:00.000
{ "year": 2011, "sha1": "d233b6c1926c5819e5a863f73b947c7cc4e26718", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/aobpla/article-pdf/doi/10.1093/aobpla/plr027/17461752/plr027.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d233b6c1926c5819e5a863f73b947c7cc4e26718", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119157908
pes2o/s2orc
v3-fos-license
Homotopy groups of K-contact toric manifolds

We compute the first and second homotopy groups of a class of contact toric manifolds in terms of the images of the associated moment map.

Introduction

In this paper I compute the first and second homotopy groups of certain toric symplectic cones or, equivalently, of certain contact toric manifolds. The main result of the paper is Theorem 1.1 (the terms used in the statement are explained below):

Theorem 1.1. Let G be a torus with Lie algebra g and integral lattice Z_G = ker{exp : g → G}. Let (B, ξ = ker α) be a contact toric G-manifold of Reeb type with moment cone C ⊂ g*, which is a strictly convex rational polyhedral cone. Let L denote the sublattice of Z_G generated by the normal vectors to the facets of C. The fundamental group of B is the finite abelian group Z_G/L. The second homotopy group of B is a free abelian group of rank N − dim G, where N is the number of facets of the cone C.

Let us recall the necessary definitions (see [L] for more details; see also [LS]). A manifold B with a contact structure ξ = ker α (α is a contact form) is a toric G-manifold if there exists an effective action of a torus G on B preserving ξ with dim B + 1 = 2 dim G. By averaging over the group, if necessary, one can always assume that the torus G preserves a contact form α defining ξ. Given an action of a Lie group G on a manifold B preserving a contact form α, the corresponding α-moment map Ψ_α : B → g* (g* denotes the vector space dual of the Lie algebra g of G) is defined by

⟨Ψ_α(b), X⟩ = α_b(X_B(b)) for all b ∈ B, all X ∈ g.

As usual ⟨·, ·⟩ denotes the canonical pairing between g* and g, and X_B denotes the vector field on B induced by X. If f ∈ C^∞(B)^G is an invariant function, then α′ = e^f α is another contact form defining the same contact distribution ξ as α. Clearly Ψ_{e^f α} = e^f Ψ_α, so the moment map is an invariant of the contact form and not of the contact distribution. On the other hand the subset C(Ψ) = C(Ψ_α) of g*, defined for an α-moment map Ψ_α : B → g* by

C(Ψ) = {t Ψ_α(b) : b ∈ B, t ∈ [0, ∞)},

depends only on the action of G on B and on the contact distribution ξ but not on the contact form α per se. We will refer to C(Ψ) as the moment cone of the action. Since a moment map Ψ_α : B → g* completely encodes the action of G on (B, α), we regard a contact toric G-manifold as a triple (B, ξ = ker α, Ψ_α : B → g*). Note that the symplectization (M, ω) = (B × R, d(e^t α)) of a contact toric G-manifold is a symplectic toric G-manifold which is a symplectic cone. Conversely, if a symplectic toric G-manifold (M, ω, Φ : M → g*) is a symplectic cone, i.e., if there is a free proper action {ρ_t} of R on M commuting with the action of G such that ρ_t*ω = e^t ω, then M/R is naturally a contact toric manifold.

A contact manifold (B, ξ = ker α) with an action of a torus G preserving α is of Reeb type if there is X ∈ g such that the function ⟨Ψ_α, X⟩ = ι(X_B)α is strictly positive. By a result of Boyer and Galicki [BG] (see also Theorem 4.3 in [LS]), the moment cone of a contact toric G-manifold of Reeb type is a strictly convex rational polyhedral cone. "Strictly convex" means that the moment cone contains no linear subspaces of positive dimension, i.e., it is a cone on a polytope. "Rational polyhedral" means that there exist vectors µ_1, …, µ_N in the integral lattice Z_G := ker(exp : g → G) of the torus G such that

C(Ψ) = ∩_{j=1}^{N} {η ∈ g* : ⟨η, µ_j⟩ ≥ 0}.

There are several reasons for wanting to compute the homotopy groups of contact toric manifolds of Reeb type.

1. All contact manifolds of Reeb type are K-contact (see Proposition 3.1 below), hence the title of the paper.
In fact contact toric manifolds of Reeb type are Sasakian, as proved by Boyer and Galicki (Theorem 5.3 in [BG]). Methods recently developed by Boyer, Galicki, Mann and others use Sasakian structures to obtain explicit positive Einstein metrics.

2. The classification of contact toric manifolds in [L] shows that contact toric manifolds not diffeomorphic to the ones of Reeb type are easy to understand: they are either S² × S¹, or products T^k × S^{k+2l−1} (k > 1, l ≥ 0), or principal T³-bundles over S². So if one wants to understand the topology of contact toric manifolds, the manifolds of Reeb type are the ones to concentrate on.

3. One motivation for studying the topology of contact toric manifolds is their apparent difference from (topological) toric manifolds. Recall that in 1991 Davis and Januszkiewicz defined (topological) toric manifolds as manifolds with a torus action locally modeled on the standard action of T^n on C^n and having a simple polytope as the orbit space [DJ]. Such a manifold is determined by a polytope and a characteristic function, a function that assigns a 1-parameter subgroup of the torus to every facet of the polytope. They proved a beautiful formula for the integral cohomology ring of a toric manifold; it is the Stanley-Reisner ring of the polytope modulo an ideal determined by the characteristic function (for smooth projective toric varieties the formula is known as the Danilov-Jurkiewicz theorem). In particular the cohomology ring is generated by elements of degree two, odd-dimensional cohomology vanishes and there is no torsion. They also proved that such manifolds are simply connected. In contrast, the odd-dimensional cohomology of a contact toric manifold need not vanish (cf. RP³), there is torsion and the fundamental group need not be trivial.

4. Another motivation comes from the study of completely integrable geodesic flows. According to Toth and Zelditch [TZ], a geodesic flow on a manifold Q is toric integrable if there exists a homogeneous completely integrable action of a torus on the punctured cotangent bundle T*Q ∖ Q which preserves the geodesic flow. Naturally in this case the co-sphere bundle S(T*Q) is a contact toric manifold. It would be interesting to find a topological obstruction to the existence of a toric integrable geodesic flow on a compact manifold Q, and for that one needs to understand the topology of contact toric manifolds.

We now outline the proof of Theorem 1.1.

1) Since a contact manifold B is homotopy equivalent to its symplectization M = B × R, we compute the homotopy groups of the symplectization.

2) The symplectization M of B is the symplectic quotient at 0 of C^N ∖ {0} by a compact abelian group T with π_0(T) = Z_G/L and dim T = N − dim G.

3) The zero level set Φ_T^{-1}(0) ∖ {0} of the corresponding moment map is homotopy equivalent to the complement in C^N of a finite union of complex linear subspaces of complex codimension at least two; in particular it is connected and simply connected, and its second homotopy group vanishes.

4) Since the group T acts freely on Φ_T^{-1}(0) ∖ {0}, we see from the long exact sequence of homotopy groups for the fibration T → Φ_T^{-1}(0) ∖ {0} → M that π_1(M) ≅ π_0(T) = Z_G/L and π_2(M) ≅ π_1(T) ≅ Z^{N−dim G}.

The details of the argument are the subject of the next section. In the last section we explain the connection between torus actions of Reeb type and being K-contact.

A note on notation. Throughout the paper the Lie algebra of a Lie group denoted by a capital Roman letter will be denoted by the same small letter in the fraktur font: thus g denotes the Lie algebra of a Lie group G etc. The natural pairing between g and its vector space dual g* is denoted by ⟨·, ·⟩. If A : V → W is a linear map, we denote the corresponding map on the dual spaces by A*, A* : W* → V*.
When a Lie group G acts on a manifold M we denote the action by an element g ∈ G on a point x ∈ M by g·x; G·x denotes the G-orbit of x and so on. The vector field induced on M by an element X of the Lie algebra g of G is denoted by X_M. For us a torus is a compact connected abelian group. If G is a torus, we denote its weight lattice by Z*_G; it is a subgroup of g*. The dual lattice of Z*_G is the integral lattice Z_G. Recall that Z_G = ker(exp : g → G). Thus G = g/Z_G.

Acknowledgments. I thank Charles Boyer, Sue Tolman and Bill Graham for a number of useful conversations.

2. Proof of the main result, Theorem 1.1

It was proved in [L] that the moment cone C(Ψ) of a (compact connected) contact toric G-manifold (B, ξ = ker α, Ψ_α : B → g*) of Reeb type is a good cone. This means the following. Let {F_i} denote the set of facets (codimension one faces) of C(Ψ). Since C(Ψ) is rational, each facet is of the form F_i = {η ∈ C(Ψ) : ⟨η, µ_i⟩ = 0} for some primitive vector µ_i in the integral lattice Z_G of G. Then

1. every codimension ℓ, 0 < ℓ < dim G, face F of C(Ψ) can be written uniquely as F = F_{i_1} ∩ … ∩ F_{i_ℓ}, where the F_{i_j}'s are the facets containing F, and
2. the Z-module generated by the normals to the facets F_{i_1}, …, F_{i_ℓ} is a direct summand of Z_G of rank ℓ.

There is also a corresponding existence result. Given a good polyhedral cone C ⊂ g* (where g* is the dual of the Lie algebra of a torus G) there exists a compact connected contact toric G-manifold (B_C, ξ_C = ker α_C, Ψ_{α_C}) with the moment cone C(Ψ_C) equal to C (Theorem 2.18(4) of [L]). Moreover (B_C, ξ_C = ker α_C, Ψ_{α_C}) can be constructed as a contact quotient of the standard odd-dimensional sphere. In fact it is more convenient to construct the symplectization (M_C, ω_C, Φ_C : M_C → g*) of (B_C, α_C, Ψ_{α_C} : B_C → g*). Then for any contact toric G-manifold (B′, ξ′ = ker α′, Ψ_{α′}) with C(Ψ_{α′}) = C we have π_1(M_C) = π_1(B′), π_2(M_C) = π_2(B′) and so on. Note that the moment map image Φ_C(M_C) is C ∖ {0}.

Recall from [L] the construction of the symplectic toric manifold (M_C, ω_C, Φ_C : M_C → g*). As above let µ_1, …, µ_N ∈ Z_G denote the primitive inward normals to the facets of the good strictly convex cone C. Since C is strictly convex and has non-empty interior, span_R{µ_i} = g. Hence the linear map

(2.2) ϖ̄ : R^N → g, ϖ̄(e_j) = µ_j,

is onto, and since each µ_j lies in Z_G it drops down to a surjective homomorphism of abelian groups

ϖ : T^N = R^N/Z^N → G = g/Z_G, ϖ([a_1, …, a_N]) = exp(Σ_j a_j µ_j).

Here [a_1, …, a_N] denotes the class of (a_1, …, a_N) ∈ R^N in T^N and exp : g → G denotes the exponential map. Let T = ker ϖ; it is a closed but not necessarily connected subgroup of T^N. The standard linear action of T^N on C^N preserving the standard symplectic form √−1 Σ_j dz_j ∧ dz̄_j gives rise to a linear symplectic action of T ⊂ T^N. Denote the corresponding homogeneous moment map by Φ_T; Φ_T : C^N → t*. The moment map Φ : C^N → (R^N)* for the standard action of T^N on C^N is given by the formula

(2.3) Φ(z_1, …, z_N) = Σ_j |z_j|² e*_j,

where e*_1, …, e*_N is the standard basis of (R^N)*. Hence, if ι : t → R^N denotes the inclusion of the Lie algebra t of T, we have Φ_T = ι* ∘ Φ. We recall from [L]:

Lemma 2.1. We use the notation above. The set Φ_T^{-1}(0) ∖ {0} is a manifold. The group T acts freely on this manifold. The symplectic manifold M := (Φ_T^{-1}(0) ∖ {0})/T is the desired G = T^N/T symplectic manifold, that is, it is a symplectic cone and the image of the G-moment map is C ∖ {0}. In particular Φ(Φ_T^{-1}(0)) = ϖ̄*(C), where ϖ̄* : g* → (R^N)* is dual to ϖ̄ (cf. (2.2)).

Our proof of Theorem 1.1 is based on two lemmas.
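Before stating them, it may help to see the arithmetic content of Theorem 1.1 in computable form. The following sketch is our illustration, not part of the paper: given the facet normals µ_1, …, µ_N as integer vectors, it computes the invariant factors of Z_G/L = π_1(B) and the rank N − dim G of π_2(B) via the Smith normal form of the matrix of normals. It assumes SymPy's smith_normal_form (available in recent SymPy releases), and the second example cone is hypothetical.

```python
# Compute pi_1(B) = Z_G/L and rank pi_2(B) = N - dim G from facet normals,
# using the Smith normal form of the integer matrix whose columns are the
# normals mu_j. Illustrative sketch only.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def homotopy_invariants(normals):
    """normals: list of N integer vectors mu_j spanning g = R^n."""
    A = Matrix(normals).T                  # n x N, columns are the mu_j
    n, N = A.shape
    D = smith_normal_form(A, domain=ZZ)
    d = [D[i, i] for i in range(min(n, N)) if D[i, i] != 0]
    assert len(d) == n, "the normals must span g (strict convexity)"
    # Z^n/L is the direct sum of Z/d_i over the invariant factors d_i.
    torsion = [di for di in d if di != 1]  # invariant factors of pi_1(B)
    return torsion, N - n                  # (pi_1 torsion, rank of pi_2)

# Positive orthant in R^2 (the cone of the standard S^3): trivial pi_1, pi_2.
print(homotopy_invariants([[1, 0], [0, 1]]))   # -> ([], 0)
# Hypothetical cone with normals (1, 0) and (1, 3): pi_1 = Z/3, pi_2 = 0.
print(homotopy_invariants([[1, 0], [1, 3]]))   # -> ([3], 0)
```

With this arithmetic picture in mind, we now state the lemmas.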
The first one describes the group π_0(T) of connected components of T:

Lemma 2.2. Let T ⊂ T^N be as above. Then π_0(T) = Z_G/L where, as above, L is the sublattice of the integral lattice Z_G spanned by the primitive normals to the facets of the cone C.

The second lemma shows that the manifold Φ_T^{-1}(0) ∖ {0} has the homotopy type of C^N ∖ (V_1 ∪ … ∪ V_r), where the V_j ⊂ C^N are complex linear subspaces of complex codimension at least 2. In fact the subspaces V_j being deleted are determined by the combinatorics of the polyhedral cone C. To make this precise we need a few definitions. For a subset I ⊂ {1, …, N} define the corresponding coordinate subspace V_I by

V_I = {z ∈ C^N : z_i = 0 for all i ∈ I}.

For each j ∈ {1, …, N} the jth facet F_j of the cone C satisfies F_j = {η ∈ C : ⟨η, µ_j⟩ = 0}. Let U denote the collection of subsets I ⊂ {1, …, N} with ∩_{j∈I} F_j = {0}.

Lemma 2.3. The manifold Z := Φ_T^{-1}(0) ∖ {0} is homotopy equivalent to S := C^N ∖ ∪_{I∈U} V_I.

Proof of Theorem 1.1. As was remarked previously, it is enough to prove that the symplectic toric manifold M_C = M = (Φ_T^{-1}(0) ∖ {0})/T has the properties that π_1(M) = Z_G/L and that π_2(M_C) = Z^d, where d = N − dim G. Since T acts freely on Z := Φ_T^{-1}(0) ∖ {0}, we have a long exact sequence of homotopy groups

⋯ → π_2(Z) → π_2(M) → π_1(T) → π_1(Z) → π_1(M) → π_0(T) → π_0(Z) → π_0(M).

By Lemma 2.3 the space Z is homotopy equivalent to the complement of a finite union of complex linear subspaces of complex codimension at least 2, so Z is connected and simply connected and π_2(Z) = 0. Hence π_1(M) ≅ π_0(T), which is Z_G/L by Lemma 2.2, and π_2(M) ≅ π_1(T), the fundamental group of a compact abelian group whose identity component is a torus of dimension N − dim G; thus π_2(M) ≅ Z^{N−dim G}.

Proof of Lemma 2.2. This is a simple application of the Snake lemma. Consider the commuting diagram of short exact sequences

0 → Z^N → R^N → T^N → 0
0 → Z_G → g → G → 0,

where the vertical maps are the restriction ϖ̄|_{Z^N} : Z^N → Z_G, the linear map ϖ̄ : R^N → g and the homomorphism ϖ : T^N → G. By the Snake lemma we have a long exact sequence

0 → ker ϖ̄|_{Z^N} → ker ϖ̄ → ker ϖ → coker ϖ̄|_{Z^N} → coker ϖ̄ → coker ϖ → 0.

By construction ϖ̄ is onto, hence coker ϖ̄ = 0. On the other hand coker ϖ̄|_{Z^N} = Z_G/L. By definition ker ϖ = T, ker ϖ̄ = t, and the map ker ϖ̄ → ker ϖ is simply the exponential map exp : t → T. Since coker(exp : t → T) is π_0(T), we get π_0(T) ≃ Z_G/L.

Proof of Lemma 2.3. We keep the notation of the discussion above. The proof is an elementary application of the correspondence between symplectic quotients and Geometric Invariant Theory (GIT) quotients as developed by Mumford, Guillemin, Sternberg, Kirwan, Neeman, Sjamaar and others. The key point is that the GIT quotient C^N // T_C and the symplectic quotient Φ_T^{-1}(0)/T are isomorphic as stratified spaces. It will be most convenient for us to quote [S], where Kirwan's results on the isomorphism between symplectic and GIT quotients were suitably refined.

(1) By Lemma 2.1 the group T acts freely on Z = Φ_T^{-1}(0) ∖ {0}; that is, the stabilizer T_z is trivial for every z ∈ Z.

(2) By Example 2.3 of [S], Φ_T is admissible in the sense of [S] p. 109, and the set of analytically semistable points (C^N)^ss for the action of T on C^N is all of C^N.

(3) By Proposition 1.6 of [S], for any point z ∈ C^N the stabilizer in the complexified group is the complexification of the stabilizer: (T_C)_z = (T_z)_C. Hence by (1), (T_C)_z is trivial for all z ∈ Z.

(4) By Proposition 2.4(ii) of [S] the orbit T_C·z is closed in (C^N)^ss = C^N if and only if T_C·z ∩ Φ_T^{-1}(0) ≠ ∅. Consider the union of closed T_C-orbits

(2.6) {z ∈ C^N : T_C·z ∩ Φ_T^{-1}(0) ≠ ∅}.

(5) Since the actions of (T^N)_C and T_C commute, the union (2.6) of closed T_C-orbits is (T^N)_C-invariant. Hence, since {0} is itself a closed T_C-orbit fixed by (T^N)_C, the set S := {z ∈ C^N : T_C·z ∩ Z ≠ ∅} = T_C·Z, obtained from (2.6) by deleting the orbit {0}, is (T^N)_C-invariant as well.

(6) Proposition 2.4(iii) of [S] implies that (T_C·Z)/T_C = Z/T. Combining this with (3) we see that S is a T_C/T-bundle over Z. Since T_C/T is diffeomorphic to the Lie algebra t of T, the manifolds S and Z are homotopy equivalent.

(7) For any subset I of {1, …, N} define the "interior" of the coordinate subspace V_I:

V̊_I := {z ∈ V_I : z_i = 0 if and only if i ∈ I}.

The set V̊_I is a single (T^N)_C-orbit. It satisfies V_I = ∪_{I′⊇I} V̊_{I′}. We claim that

(2.7) V̊_I ⊂ S if and only if ∩_{j∈I} F_j ≠ {0}.

Proof of (2.7). Note that since S is (T^N)_C-invariant and V̊_I is a (T^N)_C-orbit, V̊_I ⊂ S ⇔ V̊_I ∩ S ≠ ∅. Also, since z ∈ S ⇔ T_C·z ∩ Z ≠ ∅ and since S is (T^N)_C-invariant, we have V̊_I ∩ S ≠ ∅ ⇔ V̊_I ∩ Z ≠ ∅. As before let µ_j ∈ Z_G denote the (primitive inward pointing) normal to the facet F_j of C. Suppose F_I := ∩_{j∈I} F_j is a nonzero face of C. Pick a point η in the relative interior of F_I. Then ⟨η, µ_k⟩ = 0 for all k ∈ I and ⟨η, µ_k⟩ > 0 for all k ∉ I.
Let z^η_j = √⟨η, μ_j⟩; then z^η := (z^η_1, . . . , z^η_N) satisfies ⟨Φ(z^η), e_j⟩ = |z^η_j|² = ⟨η, μ_j⟩ = ⟨η, ϖ(e_j)⟩ = ⟨ϖ*(η), e_j⟩ for all j, where, as before, e_1, . . . , e_N is the standard basis of R^N, Φ : C^N → (R^N)* is the moment map for the standard action of T^N on C^N (see (2.3)) and ϖ : R^N → g is the surjective map defined earlier by (2.2). Hence Φ(z^η) = ϖ*(η), so z^η ∈ Φ^{-1}(ϖ*(η)). Since η ≠ 0 we have z^η ∈ Z, where we used the fact that Φ^{-1}(ϖ*(C)) = Z ∪ {0}. Also z^η ∈ V̊_I since |z^η_j|² = ⟨η, μ_j⟩ for all j, with ⟨η, μ_j⟩ = 0 for j ∈ I and ⟨η, μ_j⟩ > 0 for j ∉ I. This proves that if the intersection ∩_{j∈I} F_j is a nonzero face of C then V̊_I ∩ Z ≠ ∅. Hence V̊_I ∩ S ≠ ∅ and therefore V̊_I ⊂ S. Conversely, if V̊_I ∩ Z ≠ ∅, pick z ∈ V̊_I ∩ Z. Then Φ(z) = ϖ*(η) for a (unique, since ϖ* is injective) point η ∈ C, and η ≠ 0 since z ≠ 0. For every j ∈ I we have ⟨η, μ_j⟩ = |z_j|² = 0, so η ∈ ∩_{j∈I} F_j, which is therefore a nonzero face of C. This proves (2.7). (8) If ∩_{j∈I} F_j = {0} then for any I′ ⊃ I, ∩_{j∈I′} F_j = {0} as well. Since V_I = ⊔_{I′⊇I} V̊_{I′}, (2.7) implies that V_I ∩ S = ∅ for every I ∈ U. Hence Z is homotopy equivalent to S = C^N ∖ ∪_{I∈U} V_I and the result follows. 3. Reeb type and K-contact In this section we prove a version of Proposition 2.1 of Yamazaki [Y] that relates torus actions and K-contactness. Recall that the Reeb vector field R_α on a contact manifold (B, α) is the unique vector field defined by the equations ι(R_α)dα = 0, ι(R_α)α = 1. The Reeb vector field defines a splitting of the tangent bundle of B: TB = ξ ⊕ R·R_α, (3.1) where ξ = ker α is the contact distribution. Since (ξ, dα|_ξ) is a symplectic vector bundle, there exists a complex structure J on ξ compatible with dα|_ξ so that g_ξ = dα|_ξ(·, J·) is a metric on ξ. Using (3.1) we may extend g_ξ by zero to all of TB. Then g = g_ξ ⊕ α ⊗ α is a Riemannian metric on B in which ξ and R_α are orthogonal and the length of the Reeb vector field is 1. The metric g is said to be adapted to the contact form α. If additionally the Reeb vector field is Killing with respect to an adapted metric g, i.e., if L_{R_α}g = 0, then the pair (α, g) is called a K-contact structure on B. If given a contact distribution ξ on a manifold B there exists a K-contact structure with ker α = ξ we will say that (B, ξ) admits a K-contact structure. Note that if a Lie group G acts on B preserving a contact form α then it preserves the Reeb vector field R_α, the contact distribution ξ = ker α and the symplectic structure dα|_ξ. Therefore if G is compact we may choose the complex structure J (and hence the adapted metric g) to be G-invariant. Proposition 3.1. A compact contact manifold (B, ξ = ker α) admits the structure of a K-contact manifold if and only if there exists an action of a torus G on B preserving α and a vector X ∈ g such that the function ι(X_B)α = ⟨Ψ_α, X⟩ is strictly positive, i.e., the G action is of Reeb type. Here as before X_B denotes the vector field on B induced by X ∈ g and Ψ_α denotes the α-moment map. Proof. Suppose the action of a torus G on (B, ξ = ker α) is of Reeb type, i.e., suppose there is a vector X ∈ g such that ⟨Ψ_α, X⟩ is strictly positive (note that this is a condition on the co-oriented contact distribution ξ and not just on the contact form α). We then can multiply α by a positive G-invariant function f so that ⟨Ψ_{fα}, X⟩ = 1 (take f = 1/⟨Ψ_α, X⟩). Therefore it is no loss of generality to assume that α(X_B) = ⟨Ψ_α, X⟩ = 1. Since the G action preserves α, we have 0 = L_{X_B}α = dι(X_B)α + ι(X_B)dα = d(1) + ι(X_B)dα. Therefore X_B is the Reeb vector field of α. Now choose a G-invariant metric g adapted to α. Then, since g is G-invariant, L_{X_B}g = 0, and so (α, g) is a K-contact structure on (B, ξ). Conversely suppose (α, g) is a K-contact structure on B. Since B is compact, the group of isometries of (B, g) is a compact Lie group H.
Take the closure inside H of the flow of the Reeb vector field R_α. The closure is a compact connected abelian group G, i.e., a torus. Since the flow of R_α preserves the contact form α, the action of G preserves α as well. By construction R_α = X_B for some vector X in the Lie algebra of G. Since R_α is a Reeb vector field we have 1 = ι(R_α)α = ⟨Ψ_α, X⟩, where Ψ_α : B → g* is the moment map for the action of G on (B, α). Hence the action of G on (B, ξ = ker α) is of Reeb type.
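To make the lattice arithmetic concrete, here is a small worked example; it is our illustration, not taken from [L], and the cone and normals below are chosen purely for demonstration:

```latex
% Worked example of Z_G/L. Take G = T^2, so g^* \cong R^2 and Z_G = Z^2, and let
%   C = cone{(1,0), (1,2)} \subset g^*.
% The primitive inward normals to the two facets (rays) are
\[
  \mu_1 = (0,1), \qquad \mu_2 = (2,-1).
\]
% Each normal is primitive, so each generates a rank-1 direct summand of Z^2;
% the only faces of codimension \ell with 0 < \ell < \dim G = 2 are the facets
% themselves, so C is a good cone. Here N = 2, and
\[
  L = \mathbb{Z}\mu_1 + \mathbb{Z}\mu_2
    = \{(2a,\, b) : a, b \in \mathbb{Z}\} \subset \mathbb{Z}^2 ,
\]
% so by Lemma 2.2 and Theorem 1.1,
\[
  \pi_1(B_C) \cong \mathbb{Z}_G / L \cong \mathbb{Z}/2\mathbb{Z},
  \qquad
  \pi_2(B_C) \cong \mathbb{Z}^{\,N - \dim G} = 0 .
\]
```

Here dim B_C = 2N − 1 = 3, so this is a three-dimensional contact toric manifold; since low-dimensional cases can require extra care in the hypotheses of the classification, the example is meant only as an arithmetic illustration of Z_G/L.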
2019-04-12T09:10:28.716Z
2002-04-05T00:00:00.000
{ "year": 2002, "sha1": "cca33e7226125911ee4df01b059bbb5ae4b48df0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cca33e7226125911ee4df01b059bbb5ae4b48df0", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
211231870
pes2o/s2orc
v3-fos-license
Predicting Clinical Effects of CYP3A4 Modulators on Abemaciclib and Active Metabolites Exposure Using Physiologically Based Pharmacokinetic Modeling Abstract Abemaciclib, a selective inhibitor of cyclin-dependent kinases 4 and 6, is metabolized mainly by cytochrome P450 (CYP)3A4. Clinical studies were performed to assess the impact of strong inhibitor (clarithromycin) and inducer (rifampin) on the exposure of abemaciclib and active metabolites. A physiologically based pharmacokinetic (PBPK) model incorporating the metabolites was developed to predict the effect of other strong and moderate CYP3A4 inhibitors and inducers. Clarithromycin increased the area under the plasma concentration-time curve (AUC) of abemaciclib and potency-adjusted unbound active species 3.4-fold and 2.5-fold, respectively. Rifampin decreased the corresponding exposures by 95% and 77%, respectively. These changes informed the fraction metabolized via CYP3A4 in the model. An absolute bioavailability study informed the hepatic and gastric availability. In vitro data and a human radiolabel study determined the fraction and rate of formation of the active metabolites as well as absorption-related parameters. The predicted AUC ratios of potency-adjusted unbound active species with rifampin and clarithromycin were within 0.7- and 1.25-fold of those observed. The PBPK model predicted 3.78- and 7.15-fold increases in the AUC of the potency-adjusted unbound active species with strong CYP3A4 inhibitors itraconazole and ketoconazole, respectively; and 1.62- and 2.37-fold increases with the concomitant use of moderate CYP3A4 inhibitors verapamil and diltiazem, respectively. The model predicted modafinil, bosentan, and efavirenz would decrease the AUC of the potency-adjusted unbound active species by 29%, 42%, and 52%, respectively. The current PBPK model, which considers changes in unbound potency-adjusted active species, can be used to inform dosing recommendations when abemaciclib is coadministered with CYP3A4 perpetrators. Abemaciclib is an oral cyclin-dependent kinase (CDK) 4 and 6 inhibitor approved for the treatment of hormone receptor-positive, human epidermal growth factor receptor 2-negative advanced or metastatic breast cancer. [1][2][3] The pharmacokinetics of abemaciclib has been characterized in patients and in healthy subjects, with no significant differences between groups. 4 Following oral administration, abemaciclib is almost completely absorbed with a time of observed maximum plasma concentration of about 6 to 8 hours. 1,5 Abemaciclib is highly bound to plasma proteins, with a fraction unbound in plasma (fu) of 0.0557. Abemaciclib is extensively distributed to tissues, with a systemic volume of distribution at steady state (Vd ss ) estimated to be 724 L in the absolute bioavailability study. The mean half-life and systemic clearance (CL) of abemaciclib are 29.3 hours and 24 L/h, respectively. 6 Following oral administration of a 200-mg dose, the oral bioavailability of abemaciclib is 45%. 6 In vitro and human disposition studies have demonstrated that abemaciclib is extensively metabolized via cytochrome P450 (CYP)3A4, but not CYP3A5, in liver to multiple active metabolites. 5 These oxidative metabolites are present in significant concentrations in plasma and accounted for approximately 45% of total plasma radioactivity in the human mass balance study. 5 Metabolites M2 and M20 are formed from abemaciclib by CYP3A4, and metabolite M18 can be formed by CYP3A4 from either M2 or M20. 5
The active metabolites are either eliminated unchanged in bile or further metabolized by CYP3A4 or via sulfate conjugation and eliminated via biliary excretion. 5 Clinical drug interaction studies with the CYP3A4 inhibitor clarithromycin and inducer rifampin demonstrated the extensive involvement of CYP3A4 metabolism of abemaciclib. 5 Because the patient population for abemaciclib may include individuals taking other medications that can be inhibitors or inducers of CYP3A4, the primary objective of this study was to predict the pharmacokinetics (PK) of a single dose of abemaciclib and its active metabolites (M2, M18, and M20) in the presence of known moderate and strong CYP3A4 inhibitors and inducers using physiologically based pharmacokinetic (PBPK) modeling. Clinical Studies With Abemaciclib All clinical studies used for modeling were approved by the respective institutional review boards or independent ethics committees, and all subjects who participated in the studies provided written informed consent. The studies were conducted in accordance with the principles of the Declaration of Helsinki and consistent with good clinical practices. A description of the clinical studies can be found in the supplemental information. Simulation Strategy A scheme of the overall simulation strategy is shown in Figure 1A. Briefly, multiple models were developed and/or verified, including models for abemaciclib and its 3 active metabolites, and the CYP3A4 inducers (efavirenz, modafinil, bosentan, and rifampin) and inhibitors (ketoconazole, itraconazole, clarithromycin, diltiazem, and verapamil). The abemaciclib and metabolite models were built using both physicochemical and biological in vitro data as well as data from the absolute bioavailability study and the human radiolabel disposition study. The fractions formed and metabolized via CYP3A4 in the abemaciclib and metabolite models were further optimized using the results from the rifampin and clarithromycin interaction studies. Software Simcyp version 14 (Sheffield, UK) was used to develop and/or verify the pharmacokinetics of abemaciclib, 3 active metabolites (M2, M18, and M20), ketoconazole, itraconazole, clarithromycin, diltiazem, verapamil, rifampin, efavirenz, bosentan, and modafinil. These models were used to simulate and predict drug-drug interactions between 200 mg abemaciclib and the various inhibitors and inducers, including those that have not been studied in human clinical trials. Input Data The input parameters to the PBPK models are summarized in Table 1. Simulation Assumptions Absorption. The fraction of 200 mg abemaciclib absorbed (Fa) from the intestine after an oral dose was calculated using equation 1: Fa = 1 − (% of parent in feces / % of dose recovered) (1) where the percentage of parent in feces, determined in the 14C study, was 6.76 and the percentage of dose quantified was 75.4% (total radioactivity recovery was 84%). Therefore, the Fa was determined to be 0.91. The Fa and the absorption rate constant (Table 1) were input into the first-order absorption model within Simcyp. Distribution. A full PBPK model was selected for the parent compound, and minimal PBPK models were selected for the metabolites. The tissue composition-based model (method 2) implemented in Simcyp as proposed by Rodgers et al 7-10 was selected to predict the volume of distribution of abemaciclib at steady state.
The tissue-to-plasma partition coefficient scalar was adjusted to 2.5 to match the observed Vd ss , calculated by noncompartmental analysis from intravenous (IV) data in the ABA study. 6 The Vd ss of the 3 active metabolites was estimated manually, while clearance was kept fixed (see elimination section), based on the assumption that volume is the primary factor driving the metabolites' peak plasma concentration (C max ). First-Pass Hepatic Extraction. The fraction of a 200-mg oral dose of abemaciclib escaping first-pass metabolism in the liver (F H ) was calculated according to equations 2, 3, and 4, by first calculating the IV blood clearance (CL B,iv ): CL B,iv = CL iv / (B:P) (2) where CL iv was the IV plasma clearance (24 L/h) and B:P was the blood to plasma concentration ratio (0.84). The CL B,iv was used to calculate the hepatic extraction ratio (E H ): E H = CL B,iv / Q (3) where Q was the hepatic blood flow, assumed to be 80 L/h. 11 Finally, F H was calculated from E H : F H = 1 − E H (4) resulting in a calculated F H of 0.64. [Figure 1. A, Simulation strategy. B, Proposed disposition scheme for abemaciclib and active metabolites after a 200-mg dose. *F G was set to 1 for a 50-mg dose of abemaciclib. The remainder of the caption defines standard abbreviations (ADME, CL, Fa, Fe, F G , F H , fm, fu, ka, pKa, PopPK, Vd).] First-Pass Gut Extraction. The fraction of a 200-mg oral dose of abemaciclib escaping first-pass metabolism in the gut (F G ) was calculated according to equation 5: F G = F / (Fa × F H ) (5) where the absolute bioavailability (F) was 0.45, F H was 0.64, and Fa was 0.91, giving an F G value of approximately 0.77. It should be noted that when abemaciclib is dosed at 50 mg, the dose-normalized abemaciclib exposure is higher than when it is dosed at 200 mg. The difference is thought to represent a lower gut extraction (and higher F G ) at 50 mg than at 200 mg rather than a difference in the intrinsic clearance, absorption, or any other parameter. This possibility is further considered in the Discussion. Metabolism and Elimination. In order to calculate the fraction of abemaciclib metabolized by CYP3A4 (f m ), equation 6, previously described, 12 was rearranged to equation 7: AUC R = 1 / (1 − A × f m ) (6) f m = (1 − 1/AUC R ) / A (7) where the area under the concentration-time curve (AUC) ratio of inhibited to uninhibited abemaciclib (AUC R ) was 3.37 in the clarithromycin interaction study, and A describes the proportion of CYP3A4 intrinsic clearance in the liver that was inhibited by 500 mg of clarithromycin given orally twice daily for 5 days before the administration of abemaciclib. The parameter A was calculated for clarithromycin using static modeling with IV midazolam as the substrate drug (according to the observed 3.5-fold increase in midazolam AUC extrapolated to infinity 13 ). The percentages of the oral abemaciclib dose recovered in feces as the parent compound and the 6 metabolites are listed in Table S2. M1, M2, M20, and M22 are primary metabolites. M21 is a secondary metabolite formed from M20. M18 is a secondary metabolite that could be formed from either M2 or M20. In order to calculate the fraction of metabolites formed, the percentage of analytes recovered in feces was adjusted to 100, assuming the proportion of each metabolite remains constant (Table S2).
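Before the metabolite fractions are used, note that equations 1-7 form a short numeric chain. A minimal Python sketch reproduces the reported parameter values from the inputs quoted above; the midazolam f m of 0.9 used to back-calculate A is taken from the ketoconazole discussion later in this section, so treating it as the value used here is an assumption:

```python
# Numeric check of equations 1-7, using only values quoted in the text.

pct_parent_feces = 6.76      # % of dose recovered in feces as parent (14C study)
pct_dose_quantified = 75.4   # % of dose quantified in the 14C study
Fa = 1 - pct_parent_feces / pct_dose_quantified             # equation 1

CL_iv = 24.0                 # IV plasma clearance, L/h
B_P = 0.84                   # blood:plasma concentration ratio
CL_B_iv = CL_iv / B_P                                       # equation 2
Q_H = 80.0                   # hepatic blood flow, L/h
E_H = CL_B_iv / Q_H                                         # equation 3
F_H = 1 - E_H                                               # equation 4
F = 0.45                     # absolute oral bioavailability
F_G = F / (Fa * F_H)                                        # equation 5

# A: fraction of hepatic CYP3A4 intrinsic clearance inhibited by clarithromycin,
# back-calculated from the 3.5-fold IV midazolam AUC increase, assuming the
# midazolam fm,CYP3A4 of 0.9 quoted later in the text.
AUC_R_mid = 3.5
fm_mid = 0.9
A = (1 - 1 / AUC_R_mid) / fm_mid

AUC_R = 3.37                 # abemaciclib AUC ratio with clarithromycin
fm = (1 - 1 / AUC_R) / A                                    # equation 7

print(f"Fa={Fa:.2f} F_H={F_H:.2f} F_G={F_G:.2f} A={A:.2f} fm={fm:.2f}")
# -> Fa=0.91 F_H=0.64 F_G=0.77 A=0.79 fm=0.89, matching the reported values.
```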
The fraction of M2 formed from the parent compound was calculated by summing the percentage of M2 and M18 present in feces and dividing by 100. The fraction of M20 formed was calculated by summing the percentages of M20 and M21 recovered in feces and dividing by 100. In the human mass balance study, 4% of the radioactivity was excreted in urine, and for modeling purposes it was assumed to be parent compound (cold profiling of urine suggests parent is the predominant species in urine). Initial estimates of the systemic clearances of M2 and M20 (CL met ) were calculated from equation 8: CL met = f m,p-m × CL parent × (AUC 0-∞,parent / AUC 0-∞,Met ) (8) where f m,p-m was the fraction of the parent forming each of the metabolites (calculated as described above), CL parent was the observed systemic (IV) clearance of the parent compound, AUC 0-∞,Met was the observed area under the plasma concentration-time curve of the metabolite M2 or M20, and AUC 0-∞,parent was the observed area under the plasma concentration-time curve of the parent (from the rifampin interaction study 5 ). M18 can be formed from M2 or M20; however, for modeling purposes it was assumed that it is formed solely from M2 (supported by the observation that M2 is the major route of metabolism). Hence, the systemic clearance of the secondary metabolite M18 (CL M18 ) was calculated according to equation 9: 14 CL M18 = f m,2-18 × CL M2 × (AUC 0-∞,M2 / AUC 0-∞,M18 ) (9) where f m,2-18 is the fraction of M2 forming M18, CL M2 was the systemic clearance of M2 calculated from equation 8, AUC 0-∞,M2 was the observed area under the plasma concentration-time curve of M2 (from the rifampin interaction study), and AUC 0-∞,M18 was the observed area under the plasma concentration-time curve of M18 (from the human mass balance study). The fraction of M20 metabolized via CYP3A4 (0.74) was determined by manual fitting to be within 0.8- and 1.25-fold of the observed AUC 0-∞ in the human mass balance study and the observed AUC 0-∞ ratio for M20 in the rifampin interaction study. 5 The fraction of M2 metabolized via CYP3A4 (0.4) was similarly fitted to match the AUC 0-∞ and the AUC 0-∞ ratios of M2 and M18 in the rifampin interaction study. The fraction of M18 metabolized via CYP3A4 (0.06) was fitted to match the observed AUC 0-∞ and the AUC 0-∞ ratio of M18 in the rifampin interaction study. The CL int values of M2, M18, and M20 were manually optimized within Simcyp to match the observed AUC 0-∞ of each of the metabolites in both arms of the rifampin interaction study. Renal clearance (1 L/h) was estimated as 4% of systemic clearance, based on the recovered radioactivity in urine in the mass balance study. Simulation Design. The simulations were all performed with the Simcyp Healthy Volunteer population aged 40 to 65 years, and 80% female to approximate to the clarithromycin and rifampin clinical interaction study populations. All simulations were performed under fasted conditions with 100 virtual individuals (10 trials of 10 individuals each). Inhibition Simulations. The predictions of the effect of CYP3A4 inhibition by clarithromycin (500 mg twice daily [BID]), diltiazem (120 mg 3 times daily) and its n-desmethyl metabolite, and verapamil (120 mg 3 times daily) were performed using standard Simcyp v14 library files with modifications described below. Inhibitors were dosed orally for 12 days, and on day 7 a dose of 200 mg of abemaciclib was given 2 hours after the first dose of the inhibitor.
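For completeness, equations 8 and 9 can be sketched the same way. The observed metabolite AUC values from the rifampin and mass balance studies are not reproduced in this text, so the AUCs and formation fractions below are hypothetical placeholders, not the fitted values:

```python
# Equations 8 and 9: metabolite systemic clearances from formation fractions and
# parent:metabolite AUC ratios. All AUCs and formation fractions are placeholders.

def cl_primary_metabolite(fm_parent_to_met, cl_parent, auc_parent, auc_met):
    """Equation 8: CL_met = fm(parent->met) * CL_parent * AUC_parent / AUC_met."""
    return fm_parent_to_met * cl_parent * auc_parent / auc_met

def cl_m18(fm_m2_to_m18, cl_m2, auc_m2, auc_m18):
    """Equation 9: M18 assumed to be formed solely from M2."""
    return fm_m2_to_m18 * cl_m2 * auc_m2 / auc_m18

CL_parent = 24.0  # L/h, observed IV clearance of abemaciclib (from the text)
CL_M2 = cl_primary_metabolite(0.5, CL_parent, auc_parent=1000.0, auc_met=800.0)
CL_M18 = cl_m18(0.3, CL_M2, auc_m2=800.0, auc_m18=250.0)
print(f"CL_M2 = {CL_M2:.1f} L/h, CL_M18 = {CL_M18:.1f} L/h")  # placeholder outputs
```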
Modifications to Inhibitor Files. The ketoconazole and itraconazole files available in Simcyp version 14 predicted AUC ratios for abemaciclib of 7- and 3.8-fold, respectively. This is significantly lower than would be expected, given that ketoconazole and itraconazole are known strong inhibitors of CYP3A4, 15,16 and a high proportion (>0.89) of abemaciclib metabolism is CYP3A4 mediated. Other authors have addressed the systematic underprediction of CYP3A4 drug-drug interaction (DDI) with the Simcyp itraconazole and ketoconazole files and described several necessary modifications to the ketoconazole and itraconazole files, [17][18][19] including the need to incorporate uptake and efflux transporters in the liver to increase the unbound concentration of ketoconazole at the site of inhibition and a considerable decrease in the inhibition constant for itraconazole. Our approach is as follows. The observed AUC ratio of midazolam in the presence of ketoconazole was reported to be 19. 20 Assuming an F G and f m of midazolam of 0.5 and 0.9, 19,21 respectively, this ratio suggests that ketoconazole completely inhibits CYP3A4 activity in the gut and liver. In order to simulate this complete inhibition, the CYP3A4-mediated intrinsic clearance of abemaciclib was reduced to 0 in the Simcyp compound file. Similarly, a 90% reduction in CYP3A4 intrinsic clearance by itraconazole was used to reproduce the reported AUC ratio of midazolam in the presence of itraconazole. 11,22,23 Therefore, the CYP3A4-mediated intrinsic clearance was multiplied by 0.1 in the abemaciclib and 3 metabolite files to simulate the itraconazole interaction. The interaction assumptions were verified against midazolam as described in the Model Verification section. Because the clarithromycin CYP3A4-mediated interaction was overpredicted, the competitive inhibition of CYP3A4 by clarithromycin was removed, leaving only the time-dependent inhibition reported in the literature. 24 This resulted in improved predictions compared with observed interactions with abemaciclib. The prediction of the clarithromycin-midazolam interaction was also acceptable and is described in the Model Verification section. Induction Simulations. The predictions of the effect of CYP3A4 induction by rifampin (600 mg once daily [QD]), modafinil (200 mg QD for 7 days and then 400 mg QD), efavirenz (600 mg QD), and bosentan (125 mg BID) were performed using either standard Simcyp library files (efavirenz and rifampin), with a slight modification to the rifampin file as described below, or custom files described below (modafinil and bosentan). [25][26][27] The prediction of the effect of efavirenz (600 mg QD) was performed using the Simcyp v16 library file, implemented in Simcyp v14. The inducers were dosed orally for 12 days, and on day 7, a 200-mg dose of abemaciclib was given concurrently with the dose of the inducer, except for the interaction with modafinil, where modafinil was dosed QD for 40 days, and abemaciclib was given on day 27 to reproduce dosing schedules from several published clinical trials. [28][29][30][31] Modifications to Simcyp Inducer Files. The inhibition of CYP3A4 by rifampin was removed from the model to better replicate the observed interaction with abemaciclib given that rifampin is not a CYP3A4 inhibitor. 32 The prediction was verified against the observed interaction with known CYP3A4 substrates (see the Model Verification section). Inducer Files for Bosentan and Modafinil. The bosentan PBPK model was developed using physicochemical and biological in vitro data and published in vivo data.
The model included hepatic and nonhepatic (target-mediated) clearances as observed at therapeutic bosentan plasma concentrations 33 and autoinduction of CYP3A4-mediated bosentan clearance. These assumptions were verified using bosentan single- and multiple-dose data. 34 Active uptake into the liver via organic anion-transporting polypeptides (OATPs) was considered in the model to reproduce in vivo hepatic clearance of bosentan as well as in the calculation of in vitro induction parameters (maximal induction fold and the unbound concentration producing 50% of maximal induction). 35 Because the model was able to accurately predict the autoinduction of bosentan with regard to CYP3A4 metabolism in both the gut and liver, no induction of OATPs by bosentan was necessary in the model. The modafinil induction file was built based on a published model, with minor changes to the input parameters. 30,31 The input parameters for bosentan and modafinil are shown in Table S3. Abemaciclib Active Species Calculations Calculations for the prediction of the AUC ratio for the active species are shown in equation 10: AUC ratio of active species = (AUC parent,adjusted,In + Σ AUC metabolite,adjusted,In ) / (AUC parent,adjusted + Σ AUC metabolite,adjusted ) (10) where AUC has the units nanomole-hour per liter, and the subscript "In" indicates the AUC of the parent or metabolite when coadministered with the inhibitor or inducer. To account for protein-binding and potency differences among the 4 active analytes, the AUCs of parent and metabolites were adjusted according to equations 11 and 12, respectively: AUC parent,adjusted = AUC parent × fu parent (11) AUC metabolite,adjusted = AUC metabolite × fu metabolite × (IC50 parent / IC50 metabolite ) (12) where fu parent is the fraction unbound of parent abemaciclib in plasma, fu metabolite is the fraction unbound of the metabolite(s) in plasma, and the half-maximal inhibitory concentration (IC50) for parent/metabolite represents the in vitro potency for CDK4/cyclin D1 of each analyte. The mean ± SD CDK4/cyclin D1 half-maximal inhibitory concentration values were measured in vitro and are as follows: 1.57 ± 0.6 nmol/L for abemaciclib, 1.24 ± 0.4 nmol/L for M2, 1.46 ± 0.2 nmol/L for M18, and 1.54 ± 0.2 nmol/L for M20. 36 The adjusted AUC ratios of active species were then calculated as in equation 10. Abemaciclib Simulations The proposed disposition of abemaciclib and active metabolites is shown in Figure 1B. The observed and the model-simulated plasma concentration-time profiles of abemaciclib and its 3 active metabolites M2, M20, and M18 after a 50- and 200-mg dose of abemaciclib are shown in Figure 2 and Table 2. Interaction Simulations The observed and the model-predicted plasma concentration-time profiles of abemaciclib and active metabolites M2, M20, and M18 when a 50-mg dose of abemaciclib was coadministered with clarithromycin are shown in Figure 2. The model-predicted concentration-time profiles are consistent with observed abemaciclib and metabolites in the presence of clarithromycin. The C max and AUC 0-∞ ratios for the interaction are listed in Table 3. The observed and the model-predicted plasma concentration-time profiles of abemaciclib and its metabolites M2, M20, and M18 when a 200-mg dose of abemaciclib was coadministered with rifampin are shown in Figure 2. The models reproduced the observed concentration-time profiles in the presence of rifampin. These C max and AUC 0-∞ ratios for the interaction are also listed in Table 3.
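As a sketch of how equations 10-12 combine in practice, the snippet below computes a potency-adjusted unbound active-species AUC ratio. The parent fu (0.0557) and the IC50 values are from the text, whereas the metabolite fu values and all AUC values are hypothetical placeholders:

```python
# Equations 10-12: potency-adjusted unbound active-species AUC ratio.
# Only fu_parent and the CDK4/cyclin D1 IC50 values come from the text;
# metabolite fu values and the AUCs are illustrative placeholders.

IC50 = {"parent": 1.57, "M2": 1.24, "M18": 1.46, "M20": 1.54}  # nmol/L, in vitro
fu = {"parent": 0.0557, "M2": 0.06, "M18": 0.06, "M20": 0.06}  # metabolite fu: placeholders

def adjusted(analyte, auc_nmol_h_per_l):
    # Equation 11 (parent) and equation 12 (metabolites): scale by fu and,
    # for metabolites, by relative potency versus parent.
    scale = fu[analyte] * (IC50["parent"] / IC50[analyte] if analyte != "parent" else 1.0)
    return auc_nmol_h_per_l * scale

def active_species_auc_ratio(auc_control, auc_with_perpetrator):
    # Equation 10: ratio of summed adjusted AUCs with vs. without the perpetrator.
    num = sum(adjusted(a, v) for a, v in auc_with_perpetrator.items())
    den = sum(adjusted(a, v) for a, v in auc_control.items())
    return num / den

control = {"parent": 2000.0, "M2": 1500.0, "M18": 400.0, "M20": 800.0}     # placeholders
with_inhib = {"parent": 6700.0, "M2": 2200.0, "M18": 500.0, "M20": 900.0}  # placeholders
print(round(active_species_auc_ratio(control, with_inhib), 2))
```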
Using the criteria published by Guest and collaborators, 37 with a value of 1.3 based on the percentage coefficient of variation (CV) of abemaciclib AUC after intravenous dosing, the models were acceptable and able to capture the effect of clarithromycin and rifampin on abemaciclib and active metabolites within the appropriate limits (Figure 3). The predicted C max and AUC 0-∞ ratios for a 200-mg dose of abemaciclib and its active metabolites when coadministered with other CYP3A inhibitors and inducers are listed in Table 4. The model predicted 7.11- and 15.7-fold increases in abemaciclib AUC in the presence of strong CYP3A4 inhibitors itraconazole and ketoconazole, respectively, and 2.27- and 3.90-fold increases with the concomitant use of moderate CYP3A4 inhibitors verapamil and diltiazem, respectively (Table 4). The model predicted abemaciclib AUC ratios of 0.31, 0.32, and 0.54 in the presence of CYP3A4 inducers efavirenz, bosentan, and modafinil, respectively (Table 4). The predicted AUC ratios for the potency-adjusted unbound active species with diltiazem, verapamil, itraconazole, and ketoconazole ranged from 1.62 to 7.15 (Table 4). The predicted AUC ratios for the potency-adjusted unbound active species with efavirenz, bosentan, and modafinil ranged from 0.48 to 0.71 (Table 4). Model Verification The abemaciclib PBPK model was verified against multiple clinical studies, including studies that were not used in model building (Table S4). All assumptions and conditions used in the inhibition simulations were verified using midazolam as a victim drug. Models of various CYP3A4 inhibitors were qualified by comparing the simulated and observed AUC 0-∞ and C max ratios of midazolam in the presence and absence of these inhibitors (Table S5). The model of the strong CYP3A4 inducer rifampin was qualified by Simcyp. 26 The modafinil PBPK model was qualified by comparing the predicted PK parameters after single and multiple doses with the observed values. 30,31,38 Furthermore, the predicted AUC and C max ratios of palbociclib, 39 triazolam, 29 and midazolam 30 in the presence and absence of modafinil are within 0.78- to 1.14-fold of the observed values (Table S7). [Figure 3 legend: observed versus predicted AUC and C max ratios for abemaciclib, M2, M20, and M18 with clarithromycin (open symbols) and rifampin (closed symbols); solid black lines mark unity, solid gray lines the 2-fold limits, and dotted lines the limits of Guest and collaborators 37 using a value of 1.3.] The bosentan model adequately reproduced the reported PK of bosentan after a single 125-mg dose in healthy male volunteers. 34 The observed-to-predicted ratios of AUC 0-τ and C max are within 1.06 and 1.25 (Table S6). The model also adequately reproduced the reported CYP3A4 autoinduction in bosentan PK after multiple dosing (125 mg BID for 10 days). 34 The observed-to-predicted ratios of AUC 0-τ and C max are within 0.97 and 1.12 (Table S6).
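The acceptance criterion of Guest and collaborators 37 referred to above can be sketched as follows; the exact formula is not reproduced in this text, so the commonly cited form below (with the variability term δ = 1.3) is an assumption:

```python
# Acceptance limits in the style of Guest et al. (ref. 37): limits narrow toward
# delta near an observed ratio of 1 and approach the traditional 2-fold limits
# for large ratios. The formula is the commonly cited form, assumed here.

def guest_limits(observed_ratio, delta=1.3):
    r = observed_ratio if observed_ratio >= 1 else 1 / observed_ratio
    limit = (delta + 2 * (r - 1)) / r
    return observed_ratio / limit, observed_ratio * limit

for r_obs in (1.0, 3.37, 0.05):
    lo, hi = guest_limits(r_obs)
    print(f"observed {r_obs}: predicted ratio acceptable in [{lo:.3f}, {hi:.3f}]")
```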
Furthermore, the model accurately predicted the gut-specific CYP3A4 autoinduction of bosentan, which can be calculated with data from the Tracleer (bosentan) clinical pharmacology biopharmaceutics review 40 following 5 days of bosentan 125 mg BID in the absence and presence of ketoconazole 200 mg QD. The observed and predicted F G values for bosentan on day 5 were 0.62 and 0.67. The observed F G value is calculated from a reported C max ratio of 1.62 with and without ketoconazole and assuming an F G of 1.0 in the presence of ketoconazole (calculated using reported data from the Tracleer U.S. Food and Drug Administration clinical pharmacology biopharmaceutics review). 40 The predicted F G is calculated by applying a Qgut model to the PBPK model-predicted CYP3A4 gut intrinsic clearance on day 5. 41 The induction of CYP3A4 by bosentan was additionally verified using midazolam, a well-characterized CYP3A4 substrate. The interactions were simulated using the midazolam model developed and verified by Simcyp V14 (Table S7). Sensitivity Analyses A sensitivity analysis was conducted to determine the effect of individually changing the CYP3A4 f m of abemaciclib (parent), M2, M18, and M20 on the AUC ratio associated with the clarithromycin interaction. A complementary sensitivity analysis was performed to determine the effect of changing individual f m values on the potency-adjusted unbound AUC ratios of the active species. Figure 4 shows the effect of changing CYP3A4 f m on the AUC ratio with clarithromycin of the individual species (abemaciclib, M2, M20, and M18) (Figure 4A). As expected, the AUC ratio of parent drug associated with the clarithromycin interaction was sensitive to changes in the f m of parent drug because of the high dependence on CYP3A4 for elimination. In contrast, the clarithromycin DDI AUC ratios for the active metabolites were insensitive to changes in f m because the metabolites are not highly dependent on CYP3A4 for elimination. A similar trend was seen for the effect of changing individual f m values on the potency-adjusted unbound AUC ratios for parent drug and active metabolites (Figure 4B). The predicted AUC ratio (ratio with clarithromycin coadministration to no inhibitor control) for total active species was sensitive to parent drug f m but not the f m of the individual metabolites for the reasons described previously. Together, these data indicate that the f m for parent drug has been accurately estimated because suboptimal values would not reproduce the observed extent of DDI with clarithromycin. Within the Simcyp framework, the fraction unbound in the gut (fu,gut) is employed to set the desired value of gut wall availability (F G ) at given values of membrane permeability and CYP3A4 intrinsic clearance in the Qgut model. 42 For the abemaciclib PBPK model, the value of F G for the 50-mg dose was 0.98 and the corresponding fu,gut was 0.008, whereas for the 200-mg abemaciclib model, the F G was 0.74 and the corresponding fu,gut was 0.7. The boundaries of possible fu,gut values are therefore established by the observed DDIs with clarithromycin and rifampin. Sensitivity analyses were performed to determine the effect of changing F G and fu,gut on the AUC ratio with clarithromycin of abemaciclib and potency-corrected unbound active species.
As expected, the AUC ratio of abemaciclib is moderately sensitive to changes in fu,gut and F G over the feasible range, whereas the unbound AUC ratio of active species adjusted for potency is insensitive to changes in fu,gut (Figure 4C and 4D). Misspecification of fu,gut is not expected to significantly influence AUC ratios associated with coadministration of CYP3A4 inhibitors. Discussion The use of PBPK modeling to support dosing recommendations for DDI scenarios in regulatory submissions and prescribing labels has increased in recent years as exemplified by recent publications. [43][44][45] [Table 3 footnote: AUC 0-∞ indicates area under the concentration-time curve from 0 to infinity; C max , maximal concentration observed; NC, not calculated. a Concentration values of M18 after administration of clarithromycin were below the limit of quantitation of 1 ng/mL.] Regulatory acceptance of such PBPK-driven recommendations appears to have increased concurrently, as the models become more robust and can simulate ever more complex scenarios. 46,47 However, various regulatory authorities have also indicated the need for thorough model qualification and testing. 48,49 Early in the development cycle, the metabolic profile of abemaciclib was understood to include a significant CYP3A4 component. Hence, clinical DDI studies were conducted with clarithromycin and rifampin. 5 In these studies, abemaciclib AUC ratios of 3.4 and 0.05, respectively, and total active species AUC ratios of 2.5 and 0.23, respectively, were observed, indicating that CYP3A4 did indeed play an important role not just in the metabolism of abemaciclib (f m = 0.89; Figure 1B) but also in that of its active metabolites (f m = 0.1-0.7; Figure 1B). It would therefore be important to include prescribing recommendations for other types of CYP3A4 modulators (ie, strong and moderate) in the drug label. Fortunately, PBPK modeling allows for the efficient use of the vast amount of clinical knowledge gathered over the last 30 years in multiple drug interaction studies and the translation of the in vitro characteristics (ie, induction and inhibition parameters) of several CYP3A4 perpetrators. This is evident from the high number of applications of this type of modeling by academia, regulatory agencies, and industry. 50-55 A PBPK modeling approach was used to predict the effect of CYP3A4 inhibitors and inducers that have not been tested with abemaciclib but have been tested in the clinic with well-characterized sensitive CYP3A4 substrates (eg, midazolam, triazolam), for which the PBPK models used have been thoroughly verified. However, in order to have value in accurately predicting CYP3A4-mediated interactions, the model would need to be unusually complex, incorporating first-pass elimination by the gut and liver, 3 metabolites, and multiple routes of elimination of the parent compound and the metabolites (CYP-mediated and non-CYP-mediated metabolism and renal elimination). It was critical to capture both parent and metabolite exposure in the model because the metabolites have potency at CDK4 and CDK6 similar to that of the parent. Similar to abemaciclib, each of the metabolites was known to undergo CYP3A4-mediated metabolism.
Although inhibition of CYP3A4 would result in a likely increase in exposure to abemaciclib and a decrease in the formation clearance of metabolites, the elimination clearance of the metabolites would also decrease, resulting in a complex disposition scenario whereby the overall effect on the exposure to total active species can only be understood through the use of an integrated model. [Table 4 footnote: The ketoconazole interaction was modeled by reducing the CYP3A4-mediated clearance to 0.] Despite the complexity of this task, which involved integrating and reconciling data from multiple studies, high interindividual variability in observed abemaciclib pharmacokinetics, challenges associated with fitting multiple analytes (abemaciclib, M2, M20, and M18), and fitting different dose levels with apparently different absorption characteristics, it was possible to achieve an acceptable and useful match to the observed data. As shown in Figure 2, most of the predicted concentration-time profiles matched the observed median, and all were within the 90% prediction interval (Figure 2), demonstrating that the current parameter set is well justified and based on sound assumptions. Further, the observed versus predicted ratios ranged from 0.92 for the C max of M18 after a 200-mg dose, up to 1.53 for the AUC of abemaciclib after a 50-mg dose. Although not perfect, the fit of the model predictions to the observed clinical data can certainly be considered acceptable, given the large variability in observed parameters. For example, the AUC of abemaciclib and its metabolites demonstrated observed CV values ranging from 30% to 176%, depending on the analyte and the dose (Table 2). Similarly, observed CV values for C max ranged from 32% to 73%, depending on analyte and dose. Figure 3 displays the observed versus predicted (1) AUC and (2) C max ratios for abemaciclib, M2, M18, and M20 when coadministered with clarithromycin or rifampin. The data are within the predictability limits from 2 different methods used to assess the accuracy of the predictions. The first method is the traditional 2-fold measure, and the second is the method introduced by Guest and collaborators, 37 in which the limits are narrower when the ratios are close to 1 and approach the 2-fold traditional method with larger ratios. For this work, the variability term was taken from the percentage CV in abemaciclib after intravenous administration, as in the approach taken with the variability in midazolam in the aforementioned publication. Figure S1 displays similar plots with a variability term of 1.65, reflective of the larger variability in AUC and C max observed for the active species after oral administration of abemaciclib. As expected, the predicted and observed AUC ratios also fall within these wider limits for a successful prediction. Clarithromycin is designated as a strong inhibitor of CYP3A based on the observed interaction with the sensitive substrate midazolam, 13 which demonstrated an AUC ratio of 6.32 (Table S5). However, both itraconazole and ketoconazole are known to more strongly inhibit CYP3A4 in both the gut and liver. 20,22,23 Hence, it was important to demonstrate the potential "worst case" interaction ratios that would result when abemaciclib is dosed with stronger inhibitors. Following qualification of all the inhibitor models against midazolam data taken from the literature, the interaction with abemaciclib was simulated.
The resulting AUC ratios with various moderate to strong inhibitors ranged from 2.27 to 15.7 for the parent alone and from 1.62 to 7.15 for the potency-adjusted unbound total active species (Table 4). Similar differences in the magnitude of the effect of moderate inducers on parent alone compared with total active species demonstrate the importance of considering the total active species. If a dose reduction recommendation were based on the AUC ratio of abemaciclib alone, it could potentially result in patients being under- or overdosed. This PBPK model encompassed several assumptions. First, in the human absorption, distribution, metabolism, and excretion (ADME) study, 85% of the radioactivity was recovered in feces and urine over 336 hours. Given the low amounts of radioactivity recovered at later time points, only feces samples were pooled (up to 216 hours) and profiled using accelerator mass spectrometry for parent and metabolites. Therefore, in this modeling exercise it was assumed that the percentage of the parent excreted in feces calculated from the profiled samples using equation 1 would have been the same if measured in the total recovered radioactivity. We believe this assumption holds true even for the capsule and tablet formulations, given the high permeability, high solubility, and fast dissolution at stomach pH of abemaciclib. Furthermore, a high-fat and high-calorie meal increased the AUC of abemaciclib and active species by 9%. 3 Another important assumption was that the CYP3A4 f m of abemaciclib and the metabolites could be determined using the data from the clarithromycin and rifampin interaction studies. The f m assumptions in the model could be further verified with other studies, although the current model is able to reproduce the known interactions with these 2 perpetrators, and the sensitivity analyses show that changes in f m do not have a high impact on the AUC of the potency-corrected unbound active species. Another assumption of the model is that the partition of abemaciclib and metabolites into the liver is perfusion rate limited. This holds true because abemaciclib and active metabolites have good permeability and are not substrates of the hepatic uptake transporters organic cation transporter 1 and organic anion-transporting polypeptides 1B1 and 1B3. Therefore, the changes observed in plasma caused by CYP3A4 perpetrators are expected to reflect the changes inside the liver. Given this, we were able to use the plasma AUC and C max values for the parent and the metabolites, in the presence and absence of rifampin and clarithromycin, to calculate the fractions formed and eliminated via CYP3A4. Although abemaciclib in vitro is a substrate of P-glycoprotein and breast cancer resistance protein, these intestinal efflux transporters were not incorporated into the model given the high permeability and solubility and the expected lack of an effect of inhibitors of P-glycoprotein and breast cancer resistance protein on the PK of abemaciclib. 56,57 In addition to facilitating an understanding of optimal prescribing in the presence of CYP3A modulators, the approach taken to building the PBPK model led to a deeper understanding of the mechanisms underlying abemaciclib PK and identified some interesting characteristics. For example, the bioavailability of abemaciclib appears to be different with 50- versus 200-mg doses. This difference is thought to reflect a difference in F G and was further confirmed with population analyses conducted throughout development.
These analyses showed that hepatic intrinsic clearance and absorption were not dependent on dose, whereas F G was shown to change with dose. 4 Further, no difference in the fraction absorbed was expected between dose levels due to the compound having high predicted human effective jejunal permeability (2.46 × 10 −4 cm/s), fast dissolution in the stomach, and a lack of precipitation. A comparison of AUC-based metabolite:parent ratios (Table S1) between the clarithromycin study conducted at 50 mg and 2 studies conducted at 200 mg (rifampin interaction and ABA studies) reveals approximately 2-fold higher ratios in the 200-mg studies, compared with the 50-mg study. Hence, there was a greater extent of metabolism at 200 mg, and the observed difference in exposure between doses is therefore the result of metabolic differences and not absorption changes. In addition, there is no change in half-life between 50 mg and 200 mg (Table S4), which adds further weight to the argument that changes in CL int or F H were not the source of the difference in exposure. Therefore, F G was concluded to be the most likely source of the apparent dose dependency. Unfortunately, there is no crossover study with doses between 50 and 200 mg available to assess different doses in the same individuals with adequate power, so it is not possible to conclude a true dose dependency, only that there is some difference between these 2 studies that leads to a conclusion of a difference in F G . The prediction of the DDIs is determined by f m and F G . Because there is some degree of computational difficulty in identifying the true value of f m and F G , sensitivity analyses were conducted on these parameters. The absolute bioavailability study informs the value of the product of Fa and F G ; however, it does not allow the identification of the individual parameters. Instead, F G was calculated indirectly by assuming Fa is 0.91 (based on permeability, solubility, and parent drug excreted in feces). As would be expected, altering F G has the biggest effect on a predicted AUC ratio for abemaciclib but a minimal effect on the total active species bound and unbound ( Figure 4B). Another sensitivity analysis was conducted to determine the effect of individually changing the CYP3A4 f m of abemaciclib (parent), M2, M18, and M20 on the AUC ratio associated with the clarithromycin interaction. As expected (Figure 4), the AUC ratio of parent drug associated with the clarithromycin interaction was sensitive to changes in the f m of parent drug because of the high dependence on CYP3A4 for elimination. In contrast, the clarithromycin DDI AUC ratios for the active metabolites were insensitive to changes in f m because the metabolites are not highly dependent on CYP3A4 for elimination. Conclusions In conclusion, a complex PBPK model for abemaciclib and active metabolites was developed and verified. The model output demonstrated the importance of considering total active species when using PBPK to determine dose adjustments for molecules with active metabolites. The current PBPK model, which considers changes in unbound potency-adjusted active species, can be used to inform dosing recommendations to support prescribing practices when abemaciclib is coadministered with inhibitors and inducers of CYP3A4. Data Sharing Lilly provides access to all individual participant data collected during the trials, after anonymization, with the exception of pharmacokinetic or genetic data. 
Data are available to request 6 months after the indication studied has been approved in the US and EU and after primary publication acceptance, whichever is later. No expiration date of data requests is currently set once data are made available. Access is provided after a proposal has been approved by an independent review committee identified for this purpose and after receipt of a signed data-sharing agreement. Data and documents, including the study protocol, statistical analysis plan, clinical study report, blank or annotated case report forms, will be provided in a secure data-sharing environment. For details on submitting a request, see the instructions provided at www.vivli.org.
2020-02-22T14:04:02.781Z
2020-02-20T00:00:00.000
{ "year": 2020, "sha1": "c127118b231050c15591e369ba2eca93a33d5ff5", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1002/jcph.1584", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "785f8448765a4d852ddeafc2298543d665f5fc24", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
53777358
pes2o/s2orc
v3-fos-license
Phylogenetic and mutational analyses of human LEUTX, a homeobox gene implicated in embryogenesis Recently, human PAIRED-LIKE homeobox transcription factor (TF) genes were discovered whose expression is limited to the period of embryo genome activation up to the 8-cell stage. One of these TFs is LEUTX, but its importance for human embryogenesis is still subject to debate. We confirmed that human LEUTX acts as a TAATCC-targeting transcriptional activator, like other K50-type PAIRED-LIKE TFs. Phylogenetic comparisons revealed that Leutx proteins are conserved across Placentalia and comprise two conserved domains, the homeodomain, and a Leutx-specific domain containing putative transcriptional activation motifs (9aaTAD). Examination of human genotype resources revealed 116 allelic variants in LEUTX. Twenty-four variants potentially affect function, but they occur only heterozygously at low frequency. One variant affects a DNA-specificity determining residue, mutationally reachable by a one-base transition. In vitro and in silico experiments showed that this LEUTX mutation (alanine to valine at position 54 in the homeodomain) results in a transactivational loss-of-function to a minimal TAATCC-containing promoter and a 36 bp motif enriched in genes involved in embryo genome activation. A compensatory change in residue 47 restores function. The results support the notion that human LEUTX functions as a transcriptional activator important for human embryogenesis. Recent studies by our group and others have shown that many PRD-LIKE genes are transcribed in human reproductive tissues and during preimplantation development; for example, NOBOX in ovary 16 , testis and oocytes 17 ; and ARGFX, CPHX1, CPHX2, DPRX, LEUTX, OTX2, TPRX1, and TPRX2 in oocytes to 8-cell stage blastomeres 16,18,19 . Among them, LEUTX is induced at the 4-cell stage 18 , and our further studies revealed that its expression is restricted to the 4-cell to 8-cell stage of the preimplantation embryo 15 . This period overlaps with human embryonic genome activation (EGA) 20 . Present evidence indicates that LEUTX is not expressed in any other cell types, including human embryonic stem cells (hESCs). Transfection and overexpression of human LEUTX in hESCs was able to activate the transcription of ~25% of the secondary genes induced at the 8-cell stage 15 , although hESCs are later descendants of the 8-cell blastomeres, by about 6-7 cell divisions. Interestingly, 36 bp DNA elements containing a TAATCC sequence motif, referred to as EEA motif (EGA-enriched Alu-motif), are over-represented in the promoters of genes activated in early embryos 18 , and also in genes upregulated by human LEUTX overexpression in hESCs 15 . Co-activation of this EEA-motif together with five TFs known for induction of pluripotent cells improved the reprogramming efficiency of primary skin fibroblasts 21 . Together, these data suggest an important contribution of human LEUTX for human embryogenesis. A number of PRD-LIKE TFs (Argfx, Leutx, Dprx, Tprx), which are expressed in early development 16,18 , have been evolving rapidly by gene duplication and diversification, and are thought to be derived from Crx, an Otx homeobox family member of the PRD-LIKE class 2,18,22 . Gene duplications are an essential source for evolutionary innovation, since one copy is free to diverge 23 , although in many instances the duplicated genes are likely to become pseudogenes 24 ; Leutx genes have been reported to be absent in mouse or rat 15,25 .
Moreover, the other Crx-derived PRD-LIKE TFs are also expressed in the same early embryonic period. Therefore, functional studies of the transcribed products as TFs are needed, and the implied importance of LEUTX for human embryogenesis needs further investigation to confirm its role, though this is a difficult task in human embryos. In the present study, we address the role of LEUTX in human embryogenesis using various comparative genomics approaches. First, the function of LEUTX as a TF activating transcription was compared with other early human PRD-LIKE TFs using reporter assays with a minimal TAATCC-containing promoter; this showed that wild-type human LEUTX is not just a transcribed pseudogene. Then functionally critical regions and residues of Leutx proteins were defined by phylogenetic analyses. This revealed species variation among the specificity determining residues within the Leutx homeodomain. Further, conserved putative nine amino acid transcription activation domain (9aaTAD) motifs were identified in the C-terminus of Leutx. Next, we investigated the genetic variation of human LEUTX using population genetics, and did not find any putative deleterious homozygous mutations. Finally, we demonstrated that experimental point mutations in the homeodomain of human LEUTX lead to the loss of function as a transcriptional activator towards a minimal TAATCC-containing promoter motif and an EEA-motif in vitro. Using homology modeling of the homeodomain structure we determined the likely mechanism of this loss-of-function. Our findings support the notion that human LEUTX acts as a transcriptional activator with an important role in human embryogenesis. Results Human LEUTX activates a reporter containing TAATCC sequences in the promoter. We have previously shown that human LEUTX can activate a reporter construct containing the 36 bp EEA motif in vitro 15 . To focus only on PRD-LIKE TFs and to reduce the possibility of other TFs binding in this motif, we engineered a new luciferase reporter, containing only an 11 bp core region centered around the predicted PRD-LIKE binding site, TAATCC (see Materials and Methods). This consensus DNA-binding site has been defined in previous studies for DPRX, OTX2 and related PRD-LIKE TFs 14,26 . With this core 11 bp reporter we performed luciferase assays using DPRX, OTX2, TPRX1, TPRX2, ARGFX, CPHX1, CPHX2, and LEUTX (Fig. 1). LEUTX, OTX2, and TPRX1 activated the reporter strongly with statistical significance (p < 0.05 by t-test; Fig. 1A), while ARGFX, CPHX1, and CPHX2 induced no significant activation. The activation level of LEUTX is equivalent to that of OTX2, suggesting that it is not simply a transcribed pseudogene. DPRX and TPRX2 show weak, though statistically significant (p < 0.05 by t-test), activation when compared to the control lacking a TF vector (marked "ref.", Fig. 1A,B). However, when the transcriptional effect of DPRX is compared to the empty promoter vector without the 11 bp motif, similar to our previous publication, 15 then DPRX shows a downregulation (Fig. 1A,C). The strength of the downregulation depends on the basal activity of the empty promoter vector (Fig. 1A). DPRX acting as a repressor is supported by the fact that a large set of genes in hESCs is downregulated when DPRX is overexpressed 15,16 . A comparison of the homeodomain sequences of the TFs explains some of the observed transactivation differences.
DPRX, OTX2, TPRX1, TPRX2, and LEUTX are all PRD-LIKE TFs with a homeodomain of the K50 type, while ARGFX has R50, and CPHX1 and CPHX2 have Q50 (Fig. 1D). Thus, the latter three, because of the difference in the crucial specificity-determining residue 50, are not expected to bind to the TAATCC motif. The K50 type factors are all expected to bind to the motif, though only LEUTX, OTX2, and TPRX1 show strong activation. Previously we have shown that DPRX and TPRX2 act primarily as repressors when over-expressed in hESCs 15,16 . As we show below, LEUTX has conserved predicted transactivation domains in its C-terminus that are also present in ARGFX. These are not found in DPRX, which could explain why DPRX functions differently, i.e., as a repressor. How the different transactivation potentials between the related factors TPRX1 and TPRX2 arise needs further investigation. Thus, while the homeodomain binding-specificity determines which promoters are targeted, the transcriptional outcome as activator or repressor likely depends also on interactions with cofactors; these interactions are possibly mediated via motifs in N- or C-terminal flanking sequences. Phylogenetic distribution of Leutx genes and characterization of the Leutx domain. To examine the phylogenetic distribution and natural variation of residues and to identify evolutionarily conserved functional elements, we retrieved Leutx genes from GenBank, using blastp and tblastn (see Materials and Methods). Over 70% of the sequences retrieved and examined in depth required manual correction of their ORFs to optimally match the conserved domains as well as the canonical Leutx gene structure, which comprises three exons. One intron is located just upstream of the homeobox, while the second intron is located between codons 46 and 47 in the homeobox, a canonical splice site in PRD-LIKE TFs. Leutx sequences were retrieved from all four branches of Placentalia (Xenarthra, Afrotheria, Laurasiatheria, Euarchontoglires), but were not found in other animals (Additional file 1: Tables S1-S2), in agreement with research by Maeso et al. 22 . This suggested that Leutx has arisen in early Placentalia. A multiple sequence alignment (MSA) showed that Leutx proteins comprise two distinct domains: the homeodomain and a conserved C-terminal region of about 110 amino acids, which we refer to as the Leutx domain (Additional File 1: Figs S1-S3). In Glires, i.e., rodents and lagomorphs, numerous evolutionary changes are observed. On the one hand, several independent Leutx gene duplication events have given rise to tandem duplicated gene loci in several species (e.g., O. cuniculus, C. porcellus, C. lanigera) (Additional File 1: Table S1, Figs S1-S3 and S8-S11). On the other hand, intragenic repeats are seen between the homeodomain and the Leutx domain in some species. In hamsters (C. griseus and M. auratus) and desert woodrat (N. lepida) Leutx has a long repeat of 11 amino acids that separates the homeodomain from the Leutx domain, confirming the bipartite nature of Leutx. Two putative Leutx genes in prairie deer mouse (P. maniculatus) also encode repeats upstream of the Leutx domain, though the contigs are fragmentary and no homeobox was recovered (Additional file 1: Fig. S2). Rat and mouse, although reportedly lacking Leutx 22 , also have the 11 amino-acid repeat as well as the Leutx domain.
However, no homeobox was recovered in the upstream region, although some repeats of the residues WFNQ, which are similar to the WFQN of the homeodomain, were found (Additional File 1: Fig. S2). Further, exhaustive tblastn searches with several diverse Leutx homeodomains also failed to detect any Leutx-like homeodomains in mouse and rat. So far there is no evidence that either of these Muridae genes is transcribed, but the Leutx domain has been conserved for at least the 7-12 million years since the separation of rats and mice 27. In Lagomorpha, the first part of the Leutx domain has apparently been lost: in rabbits the homeodomains are linked to the remainder of the Leutx domain via short polyproline repeats. In the American pika, five repeats of about 40 residues in length link the homeodomain and the Leutx domain; a conserved PWAS sequence element of the Leutx domain is part of this repeat (Additional File 1: Fig. S2). Overall, the Leutx genomic region seems to be subject to instability in Glires, given the various gene duplications and intragenic repeats observed.

[Figure 1 legend: (A) RNL for each promoter/TF combination; "-" on the "Promoter" line denotes the plain firefly reporter (pGL4.25), while "4 × 11 bp" denotes four copies of the 11 bp core motif (containing TAATCC) inserted into the promoter of the firefly reporter (pGL4.25-4 × 11 bp); "-" on the TF axis denotes no co-transfected pFastBac-based TF over-expression vector; asterisks indicate statistically significant (p < 0.05, t-test) differences in RNL relative to the no-TF control ("ref.") for each promoter. (B) RNL fold change with versus without TF, calculated from (A) using only experiments in which pGL4.25-4 × 11 bp was co-transfected. (C) RNL fold change with versus without the 4 × 11 bp core promoter, calculated from (A) using only experiments in which TF over-expression vectors were co-transfected. (D) Homeodomain amino acid sequences of the PRD-LIKE TFs, numbered relative to full-length human LEUTX (top) and to the start of the homeodomain (bottom); period, colon, and asterisk indicate the degree of conservation (scores ≤0.5, >0.5 and <1, and =1 in the Gonnet PAM 250 matrix, respectively), and "#" marks homeodomain positions 47 and 54.]

Few Leutx sequences have so far been retrieved from Afrotheria. The sequence from the Cape elephant shrew (E. edwardii) is a pseudogene (Additional file 1: Table S1). In the elephant, there are two Leutx genes, as also noted earlier 22. We note that one of these homeodomains is unusual, having an asparagine residue at position 50 of the homeodomain instead of a lysine (Additional File 1: Fig. S1).

Functional elements in the Leutx domain. The C-terminal region of the Leutx domain is its most conserved section (Additional File 1: Fig. S3). Secondary structure prediction on the multiply aligned Leutx domains showed only two to three short regions predicted to be helical, while the remainder is mostly unstructured and solvent exposed (Additional File 1: Fig. S4). This suggests that the Leutx domain is overall a flexible domain with limited globular structure. We noticed numerous proline, serine, threonine, and acidic residues in the Leutx domain. Motifs enriched in these residues are referred to as "PEST" sequences and are involved in rapid protein degradation 28,29.
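As a rough illustration of this kind of sequence screen, the toy scan below flags candidate PEST-like segments, i.e., stretches between positively charged flanking residues that are rich in P, E, S, T, and D; it is a simplification for illustration, not the epestfind algorithm used for the analyses reported below, and the demo sequence is a placeholder.

```python
# Toy PEST screen (a simplification of the epestfind idea, not its algorithm):
# split a protein at positively charged residues (K/R/H) and flag segments of
# >= 12 residues that are rich in P, E, S, T, and D. The sequence is a made-up
# placeholder, not the real Leutx-domain sequence.
import re

def pest_like_segments(seq, min_len=12, min_fraction=0.4):
    segments = []
    for m in re.finditer(r"[^KRH]+", seq):          # stretches between K/R/H flanks
        segment = m.group()
        if len(segment) < min_len:
            continue
        fraction = sum(segment.count(aa) for aa in "PESTD") / len(segment)
        if fraction >= min_fraction:
            segments.append((m.start(), m.end(), round(fraction, 2)))
    return segments

demo = "MKTPSSESTPDPESAAKRLLPSTTSEEDSPPHRVVWFQNRR"  # placeholder sequence
print(pest_like_segments(demo))   # start, end, PEST-residue fraction per hit
```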
Analysis of PEST sequences has shown that they tend to be enriched in disordered protein regions lacking secondary structure 30,31, which agrees with our secondary structure predictions. We examined several LEUTX protein sequences using the computer program epestfind and found that there are usually three to four low-scoring PEST regions in the Leutx domain, e.g., in human LEUTX (Additional File 1: Fig. S5). Motifs containing acidic and hydrophobic residues have been shown to function as transcription activation domains. In particular, the 9aaTAD sequence has been well studied 32,33. We tested a number of different Leutx protein sequences with the 9aaTAD web server (Additional File 1: Fig. S6). Two regions in the C-terminus of Leutx were consistently predicted to be potential 9aaTAD motifs. These regions lie in the most conserved part, where the two highly conserved tyrosine residues are located, and they overlap with the regions predicted to be alpha-helical. Some weak similarity within the C-terminal region has also been observed with Argfx and Crx 22. We independently noticed that Argfx proteins also have a conserved C-terminal region that shares sequence similarity with the Leutx domain (Additional File 1: Fig. S7). This similarity is highest in the region where the 9aaTAD motifs are predicted. Analysis of human ARGFX also reveals putative 9aaTAD motifs in this area (Additional File 1: Fig. S7). Furthermore, the ARGFX C-terminal region is also rich in PEST residues (data not shown). We conclude that the C-terminal regions of Leutx, Argfx, and possibly other protein families with similar features (such as Crx, which we have not examined further) have been subject to evolutionary selection, most likely due to the putative transactivation domains. Furthermore, the presence of putative PEST motifs suggests that these proteins are rapidly degraded, which would be consistent with the early, transient expression observed in the human embryo.

Phylogenetic analysis of Leutx sequences. Phylogenetic analysis reveals that the Leutx homeodomain and the Leutx domain co-evolve, and that the cladogram reflects placental systematics reasonably well (Fig. 2 and Additional File 1: Figs S8-S10). However, we noted that the branch lengths are much longer for Leutx homeodomains than, for example, for Crx homeodomains. This higher level of divergence is also exemplified by the fact that Leutx homeodomains can be quite divergent from each other (Additional File 1: Table S3). For example, even within primates, the homeodomain of human LEUTX is only 67% identical to that of the Bolivian squirrel monkey (S. boliviensis), while the Crx homeodomain of Latimeria is 87% identical to that of human CRX. Within Laurasiatheria, homeodomain sequence identity can be as low as 55%, and within Glires even less than 40%. This indicates that Leutx genes are evolving much faster than most other homeobox genes, which are well conserved in evolution 3. In Laurasiatheria, the phylogenetic tree generated from the homeodomain shows a large overlap with recent phylogenetic studies (Fig. 2 and Additional File 1: Fig. S8) 34. The even-toed (Artiodactyla) and odd-toed ungulates (Perissodactyla), as well as Carnivora, are well resolved with good bootstrap support. Furthermore, bats (P. alecto, black flying fox) are placed among the Laurasiatheria, as more recent phylogenetic studies have shown 34,35.
This suggests, despite the limited information content of the 60 residues of a homeodomain, that Leutx evolution coincides with the phylogenetic evolution of Placentalia. In Euarchontoglires the situation is more complex. In primates, we observe well-separated clades for the New World monkeys (Platyrrhini), Old World monkeys (Cercopithecidae), and Hominoidea (Fig. 2 and Additional file 1: Fig. S8). Recent results for the Chinese tree shrew suggested that it is more closely related to primates than to rodents 36, but our results do not clearly resolve this issue (Fig. 2 and Additional File 1: Figs S9 and S10). Within the Glires clade we notice a more dramatic sequence divergence. While there is still overall agreement with rodent and lagomorph evolution 27,37, branch lengths are longer (Fig. 2 and Additional File 1: Fig. S10). In Ord's kangaroo rat (D. ordii), the spliced copy of Leutx shows noticeable changes in the MSA within the Leutx domain and long branch lengths within the phylogenetic tree (Additional File 1: Figs S1, S2, and S10). In Caviidae, we found multiple tandem gene duplications containing introns; thus, one might expect most of them to be transcribed. It is also noteworthy that the gene duplications in guinea pig (C. porcellus) and chinchilla (C. lanigera) occurred independently in the two lineages (Additional File 1: Fig. S11). In the common ancestor of Cricetidae and Muridae 27, the 11 amino acid repeat located between the homeodomain and the Leutx domain seems to have arisen for the first time. In the Muridae family, the homeodomain subsequently seems to have been lost. In the Cricetidae family, too, the homeodomain changed, losing the basic arginine residues at positions 2/3 and 5 of the homeodomain that contact the minor groove of the DNA. In the Cricetinae subfamily (hamsters), the homeodomain underwent a further critical change, with K50 changing to R50. Thus, in the Cricetidae family, the Leutx homeodomain has undergone substantial changes, suggesting a dramatic shift in specificity. In summary, the evolutionary conservation of Leutx in many Placentalia, combined with its early expression, suggests a critical role in embryonic development. Furthermore, the presence of independent putative reverse-transcribed pseudogenes in several species (although we did not conduct an exhaustive search) also suggests that Leutx genes are active in the germ line or early embryo. While Leutx does evolve faster than other homeobox genes, it is well conserved in Laurasiatheria and primates; the phylogeny of the latter is similar to the expansion of Alu elements in primate genomes 38,39.

Neither common nor homozygous missense mutations are found in the recognition helix of human LEUTX. Based on the 1000 Genomes samples 40, one in every 10 individuals carries one heterozygous mutation within the coding region of LEUTX; homozygous mutations are even less frequent in humans as a species (Fig. 3). Moreover, the mutation frequencies in LEUTX are lower than the average of all human protein-coding genes, even though the coding region is short, suggesting that LEUTX is relatively constrained in human individuals.
In contrast, the higher mutation rates of LEUTX compared to TFs conserved across the bilaterian divide, such as OTX2 or SOX2, may be due to lower mutational pressure in somatic cells: because LEUTX is expressed only during a short developmental time window, unlike OTX2 and SOX2, only germ-line mutations in LEUTX can cause any changes in phenotype. For further intra-species comparisons, we investigated seven available human genotype resources: phase 3 of the 1000 Genomes Project 40, ExAC release 0.3 41, the NHLBI Exome Sequencing Project, the Genome of the Netherlands 42, the Human Genetic Variation Database (Japanese) 43, gnomAD 41, and the deCODE Icelanders database (made available to us by Dr. Kári Stefánsson and his colleagues). Altogether we found 116 variants in LEUTX (Additional file 1: Table S4). Four of the 116 variants (p.R9H, p.S93P, p.A116H, and p.T177P) are common (maximum allele frequency >1% in at least one cohort) missense variants (Additional File 1, Fig. S1, blue arrows). Two of the 116 variants (p.R9H and p.T177P) are also encountered as homozygotes in the gnomAD cohort. However, all of these mutated amino acids are evolutionarily accepted in other species (Additional File 1, Fig. S1), suggesting permissive changes. The other variants are neither common, nor are any rare alleles found as homozygotes. We examined the 116 alleles considering phylogenetic conservation and possible effects on the homeodomain and the 9aaTAD. We expect that 24 of these variants (missense, splice donor, frameshift, and nonsense mutations) could impair the function of LEUTX (Additional file 1: Table S4, marked in yellow) and hence would not be expected to propagate, possibly explaining their low allelic frequency. For one of these 24 mutations, an individual in the 1000 Genomes data carried a heterozygous missense mutation at a DNA specificity-determining residue within the recognition helix (p.A61V, which is position 54 of the homeodomain; sample ID HG02597). As we show below experimentally, this change from alanine to valine leads to a loss of function. This rare allele has not been transmitted to the son (sample ID HG02599). Therefore, the father's mutation might be somatic, or might not have been inherited because it is deleterious. We also examined the Neanderthal and Denisovan genome sequences 44 and found one change in the Leutx domain of the Denisovan genome that results in an amino acid change (G149V; marked in green in Additional File 1, Fig. S1). That position is not highly conserved, but no valine has been found there so far, so the functional consequences are unclear.
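The triage logic applied to these cohorts can be sketched as follows; the variant records below are illustrative stand-ins (only the p.R9H, p.T177P, and p.A61V identifiers come from the text, and the frequencies are invented), with "common" defined as a maximum allele frequency above 1% in any cohort.

```python
# Sketch of the variant triage described above (illustrative records; the
# allele frequencies and homozygote counts are made up for the example).
variants = [
    {"id": "p.R9H",   "afs": {"1000G": 0.03, "gnomAD": 0.05}, "hom": 2},
    {"id": "p.T177P", "afs": {"1000G": 0.02, "gnomAD": 0.04}, "hom": 1},
    {"id": "p.A61V",  "afs": {"1000G": 0.0002},               "hom": 0},
]

for v in variants:
    common = max(v["afs"].values()) > 0.01     # common if AF > 1% in any cohort
    print(v["id"],
          "common" if common else "rare",
          "homozygotes observed" if v["hom"] else "heterozygous only")
```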
To identify the critical residues for protein-DNA interactions, we wanted to examine the molecular structure of human LEUTX. Since no experimental structures are available yet, we made a 3D structural model of its homeodomain using the engrailed homeodomain of Drosophila melanogaster (PDB ID: 2HDD, chain A 13) as template (Fig. 4A). Because the template was missing coordinates for the first four residues of the homeodomain, the N-terminal region of the LEUTX model was built using the X-ray structure of the human DLX5 homeodomain (PDB ID: 4RDU, chain D; Fig. 4B,C; see also Materials and Methods). The high sequence similarity with the binding residues present in the template structure allowed us to evaluate the possible interactions that may take place in the LEUTX homeodomain-TAATCC complex and that are likely to determine the promoter sequence specificity (Fig. 4B). In the modeled homeodomain of human LEUTX (Fig. 4B,D,F), the residues I47, K50, N51, and A54 in the third alpha helix (residues 42-58) are conserved relative to the template X-ray structure of the engrailed homeodomain, and residue R58 shares similar physicochemical properties with K58 of the template structure; all of these residues bind the TAATCC motif. In human LEUTX, K50 would be hydrogen-bonded to guanines G5* and G6*, complementary to the cytosines of the TAATCC motif. We recently reported 16 that the K50A mutation (K57A in full-length human LEUTX) abolished reporter activity in a luciferase reporter assay, in agreement with the loss of hydrogen bonds mediated by the K50 residue of the native protein (Fig. 4E). The conserved N-terminal arginines at positions 2/3 and 5 recognize the DNA minor groove. Although the third helix contains several residues conferring specificity to DNA recognition, structural data also suggest that other residues play a role in DNA binding. The guanidinium group of R5 is required for the recognition of thymine 5, and in the human LEUTX homeodomain R5 is positioned to form hydrogen bonds with the first thymine base of the TAATCC motif and, via a water molecule, with the adenine on the opposite strand that base-pairs with this thymine (Fig. 4B). Another nearby arginine residue, R2 of LEUTX, matching R138 of the 4RDU template structure (chain D), would form hydrogen bonds, possibly via water molecules as seen in the 4RDU template structure, to the two adenine bases in the TAATCC motif, and would interact with the thymine of the opposite strand that base-pairs with the first adenine base of the motif (Fig. 4B). The arginine residues at positions 2, 3, and 5 are well conserved (84%, 77%, and 93%, respectively; Additional File 1: Figs S1, S3), as in homeodomains in general 3. In the case of the Drosophila Hox protein Sex combs reduced (Scr), two arginine residues make contacts in the minor groove, and the shape of the DNA, i.e., the width of the minor groove, plays a role 45. More specifically, only two arginines (either R2 or R3, and always R5) can simultaneously be involved in contacts with the minor groove of DNA. For example, in the human PAIRED homeodomain protein PAX3 (PDB ID: 3CMY, chain A 46) and the D. melanogaster paired protein (PDB ID: 1FJL, chain A 47), the residues in contact with the minor groove are R2 and R5, whereas in the D. melanogaster aristaless homeodomain (PDB ID: 3A01, chain B; PDB ID: 3LNQ, chain A 48) and the even-skipped homeodomain (PDB ID: 1JGG, chain A 49), R3 and R5 are the reported contact residues. Consistently, the RefSeq LEUTX sequence (GenBank: NP_001137304.1), which lacks the N-terminal part of the homeodomain, had less transcriptional activity than full-length LEUTX 15.

Mutation of A54 to V in human LEUTX abolishes TF function, while a second compensatory mutation at I47 restores function. Detailed inspection of the recognition helix residues in the Leutx phylogeny revealed that two of the five specificity-determining residues in the third helix exhibited differences, especially between primates and laurasiatherians. One key residue is the isoleucine at position 47 (Fig. 2); while most examined primates have I47, in several instances (orangutan, gibbon, night monkey, and squirrel monkey) a threonine is found at that position. In laurasiatherians, the residue is threonine instead of I47. The second variant key residue is at position 54; it is conserved as alanine among primates, whereas the residue is predominantly valine in laurasiatherians (Fig. 2).
By making intra-species and inter-species comparisons of the LEUTX recognition helix, we noticed that the I47-V54 residue combination never occurs (Figs 2 and 4F-I), although each of these residues can easily be changed by a single transition mutation; position 47 shows more variability than position 54 among primates, but both are highly constrained in Laurasiatheria (Fig. 4J,K). Why does the I47-V54 combination not appear, although it is readily reachable evolutionarily? We hypothesized that an I47-V54 combination might be lethal or deleterious because DNA binding would be compromised. To test this hypothesis, we studied the functionality of the four variants (two residues at two positions) using the luciferase reporter assay (Fig. 4L-N). As expected, the single mutation A54V in the presence of I47, a combination not encountered evolutionarily, strongly reduced the activating function on the TAATCC-containing 4 × 11 bp promoter (Fig. 4M; p < 0.05 by t-test, RNL fold change compared to wild type). Although the A54V single mutant also activated the 36 bp EEA-motif promoter, its activity was lower than that of the other mutants and lower than on the 4 × 11 bp promoter. The single mutation I47T and the double mutation I47T-A54V showed significant activity on the 4 × 11 bp promoter, indicating that primate Leutx with I47T and laurasiatherian Leutx can still bind TAATCC. The increased activity of the I47T and I47T-A54V variants on the EEA-motif promoter might be due to increased contributions from other TFs expressed in the HEK293 host cells. We then investigated the alterations based on the structural models. The mutation of I47 (found in some non-human primates, Fig. 4G) to the more compact but polar T47 provides extra potential for making stabilizing hydrogen bonds. I47 makes two weak hydrophobic interactions with nucleotides T4 and A3. In the model of human LEUTX with the I47T mutation introduced, the methyl group of T47 would still be able to interact with the methyl group of T4, maintaining one of the hydrophobic interactions seen in the wild-type complex. The side-chain hydroxyl oxygen of T47 is ideally positioned to form a strong hydrogen bond to the amide group of the N51 side chain, and thus helps to stabilize and rigidify this region, its interactions with the TAATCC motif, and the two water molecules that are key to base recognition, as seen in the 2HDD structure. The A54V mutation (Fig. 4H) introduces a bulkier hydrophobic side chain that significantly reduces binding to the DNA motif. In the modelled complex with V54, the bulkier valine side chain would likely interfere with the location of the water molecules that form the network of hydrogen bonds linking N51 to R58 and that are critical for binding the adenine base A4* of the antisense strand of the TAATCC motif. Interference with this water-mediated hydrogen-bonding network is the most obvious cause of the reduced binding of the A54V mutant. In the double mutant (I47T-A54V; Fig. 4I; found in laurasiatherians), experimental binding to the motif is restored and is similar to that seen for the I47T mutation alone. This may be explained by the key role of the N51 side chain coupled with its enhanced stabilization by the hydrogen bond from T47. This stabilization of N51 may be sufficient in itself to compensate for any disruption of the water-mediated hydrogen-bonding network caused by the A54V mutation. It is likely that this stabilization of N51 by T47 also allows the water-mediated network to persist despite the disruptive influence of the A54V mutation, maintaining the indirect interactions between the homeodomain and the adenine base A4*.
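The combination census underlying Fig. 4J,K can be sketched as a simple count over aligned homeodomains; in the toy example below the three sequences are placeholders in which only positions 47 and 54 are meaningful, chosen to mirror the observed I47-A54, T47-A54, and T47-V54 classes.

```python
# Census of the residue pair at homeodomain positions 47 and 54 across aligned
# homeodomains (a sketch; the three example sequences are stand-ins, with only
# positions 47 and 54 meaningful here). Positions are 1-based in the alignment.
from collections import Counter

aligned_homeodomains = {
    "human":     "X" * 46 + "I" + "X" * 6 + "A" + "X" * 6,   # I47-A54
    "orangutan": "X" * 46 + "T" + "X" * 6 + "A" + "X" * 6,   # T47-A54
    "cow":       "X" * 46 + "T" + "X" * 6 + "V" + "X" * 6,   # T47-V54
}

pairs = Counter((seq[46], seq[53]) for seq in aligned_homeodomains.values())
print(pairs)                 # the ('I', 'V') pair never appears in the survey
print(("I", "V") in pairs)   # -> False
```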
Discussion

We applied several comparative genomics and experimental approaches to assess the importance of human LEUTX for developmental processes. To examine the evolution of the Leutx gene family, we retrieved Leutx sequences from GenBank. The presence of Leutx in Xenarthra (armadillo), Afrotheria (elephant, and pseudogenes in the Cape elephant shrew), Laurasiatheria, and Euarchontoglires, but its absence from Marsupialia and Monotremata, indicates that Leutx originated after the divergence of Placentalia from Marsupialia, as also suggested by 22. We observed that Leutx proteins are conserved among most placental clades but exhibit evolutionary variation in the rodent and lagomorph lineages, including loss of the homeodomain in rats and mice, which clearly represents a secondary loss. Within human individuals (as an intra-species comparison), LEUTX is relatively uniformly conserved, and deleterious homozygous mutations, in particular missense mutations within the recognition helix of the homeodomain, were completely lacking. Potentially deleterious alleles are found only heterozygously and at low frequency. The evolutionary and structural comparisons allowed us to identify potentially important DNA-binding residue combinations. Although the importance of positions 47 and 54 for DNA binding and activity had already been established by mutational analysis 6-10, those studies were not conducted with PRD-LIKE TFs. We found that the mutation I47T in the homeodomain of human LEUTX has little effect on LEUTX activity (T47 is also found in several primate sequences), while the mutation A54V abolishes activity. In contrast, V54 is commonly found in Laurasiatheria, indicating that it is functional, but there it is rigidly accompanied by a threonine at position 47.

[Figure 4 legend, panels D-N: (D) close-up view around residue K50, which can form many possible interactions and assumes two conformations in the 2HDD X-ray structure, with non-favourable interactions shown as dashed red lines; (E) structural model of the K50A mutation, in which alanine is unable to interact with DNA like K50; (F) wild-type LEUTX homeodomain showing the interactions of the specificity-determining residues I47, N51, A54, and R58; (G) model of the I47T mutation; (H) model of the loss-of-function mutation A54V; (I) the I47T-A54V double mutation, which restores function; (J,K) residue and codon usage in primates versus Laurasiatheria at homeodomain positions 47 (J) and 54 (K), with dotted lines representing possible transition paths (transitions being more frequent than transversions) between two amino acids; (L-N) RNL for the different promoter reporter constructs ("-", "4 × 11 bp", or "4 × 36 bp" on the "Promoter" line denoting the plain pGL4.25 firefly reporter or the corresponding motif reporters) and the different LEUTX expression vectors ("-": no TF co-transfection; "Wild type": wild-type human LEUTX; I47T, A54V, and I47T + A54V: the mutant LEUTX proteins), with asterisks indicating statistically significant (p < 0.05, t-test) RNL differences relative to the no-TF control ("-", also labelled "ref.") for each promoter construct, and batches 1 to 3 representing the three experimental replicates; (M) RNL fold change with versus without LEUTX expression construct, calculated from (L) for the 4 × 11 bp and 4 × 36 bp reporters; (N) RNL fold change with versus without the promoter motif, calculated from (L) for experiments with co-transfected TF vector.]

Our experiments here show that a compensatory change in human LEUTX (I47T in combination with A54V) restores activity in a luciferase assay (Fig. 4L-N). Chu et al. 6 had already shown a correlation of residues 47 and 54 with respect to binding a specific base. Our experiments and structural models confirm this important interplay between the two positions for function. Interdependence and interference between residues 50 and 54 in DNA binding have also been observed; for example, the A54M mutation in combination with K50 in Goosecoid does not bind DNA 50. However, we note that mole rats and one guinea pig sequence (Fig. 2), as well as the third homeodomain of the C. elegans CEH-91 51, do have this combination. Although the effects on DNA affinity and specificity are not known, the evolutionary conservation of K50-M54 in mole rats at least suggests that it is functional. Quite a number of amino acid changes have occurred in the homeodomain of primate Leutx during evolution, but the residues critical for structure and DNA binding have been conserved. For example, A54V has been avoided, despite the fact that this position can easily be mutated by a single transition. If another gene could complement the missing function of LEUTX, or if LEUTX did not have an important role in preimplantation development, then such an A54V loss-of-function variant might have arisen during the evolution of primate species or in human populations. However, no such variant is presently observed, supporting the notion that LEUTX plays an important role in the developmental processes of the early embryo, as previously suggested 15. It may seem counterintuitive that an apparently essential gene can be lost in some species. However, empirical observation indicates that while the mid-embryonic phylotypic period is the most conserved in evolution, early embryogenesis and late developmental stages are subject to more evolutionary flexibility [52-54]. A prime example of early divergence in evolution is the fly gene bicoid, which was derived from a Hox cluster gene and "inserted" itself into the regulatory network of the zygote, where it sets up the fundamental antero-posterior axis [55-57]. More dramatic losses of "essential" genes have been observed in C. elegans, where the Hox cluster has degenerated 58 and where the hedgehog signalling gene has been lost 59,60. Embryonic implantation differs among mammals, being either superficial or interstitial. The basic mode of implantation is superficial, and the interstitial mode developed multiple times independently in mammalian lineages, including Muridae and hominoids 61. Hence, it does not correlate with our observed pattern of Leutx sequence conservation in primates and laurasiatherians versus the dramatic evolutionary changes of Leutx in Glires.
Nevertheless, given that interstitial implantation has developed several times de novo, it cannot be excluded that the part of the regulatory cascade involved in the implantation mode involved a change or loss of Leutx in the Glires lineage. A much more obvious correlate of the phylogenetic pattern of Leutx sequence conservation is gestation time. It is noteworthy that the species in which the homeodomain has undergone substantial changes, or was completely lost, have very short gestation times, often less than 25 days (Additional file 1: Table S2). These species have apparently been under strong evolutionary selection to develop short gestation times. This may have resulted in necessary adaptive changes in the molecular regulatory networks acting during early embryonic development, including Leutx. Other early PRD-LIKE homeobox genes, such as Argfx, Dprx, and Pargfx, have also been lost in the mouse lineage 22, indicating that not just a single gene but a network was modified or lost. Indeed, we previously observed that human LEUTX was able to activate only about 25% of human EGA genes 15, an indication that other TFs play a role in activating the complete set of EGA genes. Support for such a shift in regulatory networks comes from a recent analysis showing that the Obox and Crxos homeobox genes have substantially expanded in mouse and seem to have replaced the function of the missing Argfx 62. Further studies are needed, though, to understand how combinations of PRD-LIKE and other TFs, as well as their cofactors, regulate embryonic events, including gestation periods. One worst-case scenario from the evolutionary perspective is infertile interspecies offspring (such as the mule), which wastes parental resources on offspring that produce no next generation. The earlier in development an illegitimate embryo is eliminated, the better for reproductive fitness. The early elimination of cross-species embryos would be advantageous for both species, promoting rapid evolution of critical EGA mechanisms. Leutx is evolving much faster than other homeobox genes, most of which are well conserved in evolution across Bilateria 3. Intriguingly, the Leutx phylogeny correlates with the expansion of Alu elements in primate genomes 38,39, and the 36 bp DNA element containing the TAATCC motif displays similarity to Alu elements 18. It is tempting to speculate that any gene essential for EGA might be rapidly evolving in order to build strong inter-species barriers between closely related species. In this context, we note that another rapidly evolving PRD-LIKE homeobox gene, odysseus, has been shown to be involved in speciation in Drosophila 63. Taken together, we have applied a combination of inter-species comparative analysis, intra-species human genome resources, structural predictions, and experimental testing of critical amino acid substitutions to address the hypothesis that LEUTX plays a role in human embryogenesis, as suggested earlier 15. All the evidence supports this notion; among the vast number of human genomes assessed, only one individual with a heterozygous recognition-helix variant was noted. Further, Leutx is in general well conserved in Placentalia; only in Glires do we observe rapid changes and degeneration of the homeodomain, which could be explained by a special evolutionary pressure for short gestation times. We conclude that human LEUTX is highly constrained and is likely to play an important role in human embryogenesis, but further studies are needed to confirm this notion.
Materials and Methods

Construction of expression vectors and mutated LEUTX expression constructs. In order to overexpress the ARGFX, CPHX1, CPHX2, DPRX, LEUTX, OTX2, TPRX1, and TPRX2 proteins in human cells, the respective ORFs were cloned into a modified pFastBac expression vector, CMVe.EF1α.eGFP-WPRE, as described in 15,16,18. Accession numbers of the sequences are given in Additional file 1: Table S5. To mutate two key amino acids (I47T and A54V) of human LEUTX in the pFastBac vector, a QuikChange II site-directed mutagenesis kit (Agilent, Santa Clara, CA) was used according to the manufacturer's instructions with the primers described in Additional file 1: Table S6. To mutate both residues simultaneously (I47T and A54V), the construct carrying the I47T mutation was further mutated using the primers for the A54V mutation. All constructs were verified by Sanger sequencing.

Construction of luciferase reporter vectors and luciferase reporter assay. In order to study the effect of the PRD-LIKE TFs on the PRD-LIKE binding site contained in the 36 bp EEA motif 18, two reporter constructs were designed: a 216 bp construct comprising four EEA motifs [CAGCCTCCCAAAGTGCTGGGATTACAGGCATGAGCC] in tandem with intervening restriction sites, and a PCR-amplified 131 bp construct comprising four repeats of a shorter 11 bp core motif [CTGGGATTACA] in tandem with intervening restriction sites. The "36 bp" construct was fully synthesized (Eurofins, Ebersberg, Germany) as described in 15, and the "11 bp" construct (Additional File 1: Fig. S12) was generated by PCR amplification using primers 11_bp_Fw and 11_bp_Rv (Additional file 1: Table S6) as follows: each primer contained two copies of the 11 bp motif with intervening restriction sites, an SfiI restriction site at one end enabling ligation to the vector backbone, and a BsaI restriction site at the other end enabling ligation of the two amplified fragments. The two fragments were amplified using primers 11_bp_Fw and pGL4_F, or 11_bp_Rv and pGL4_Rv, respectively (Additional file 1: Table S6), and digested; the pGL4.25 vector was correspondingly digested with SfiI. The digested fragments were purified from an agarose gel using the NucleoSpin Gel & PCR Clean-up kit (Macherey-Nagel) and ligated using T4 DNA ligase (Thermo Scientific) according to the manufacturer's protocol. The reporter constructs, together with the Renilla luciferase vector pGL4.74, were co-transfected with the expression vectors for each TF, one at a time, into HEK293 human embryonic kidney cells (ATCC, Middlesex, UK). The cells were seeded in 48-well plates in Dulbecco's modified Eagle medium containing 4.5 g/l glucose, sodium pyruvate, and sodium bicarbonate (Sigma-Aldrich) and supplemented with 10% FBS and 1× GlutaMAX (both from Gibco). Cells were grown overnight at 37 °C in 5% CO2 and subsequently transfected with different combinations of luciferase reporter constructs, pFastBac vector constructs, and the Renilla luciferase vector pGL4.74. The amounts of the individual constructs were as follows: luciferase reporter vector, 100 ng per well; pFastBac vector, 100 ng per well; and Renilla luciferase vector, 5 ng per well. To estimate the optimal pFastBac concentration for TF expression, a titration was performed (see Additional File 1: Fig. S13). Transfections were performed using FuGENE HD Transfection Reagent (Promega), 1 µl per well, according to the manufacturer's instructions.
Cells were incubated at 37 °C in 5% CO2, harvested 24 h after transfection, and subjected to the Dual-Luciferase assay (Promega) according to the manufacturer's protocol; the luciferase reporter assays were performed in three biological replicates. Luciferase signals were measured using a FLUOstar Omega microplate reader (BMG Labtech, Ortenberg, Germany). Renilla-normalized luminescence (RNL) is the firefly luminescence level divided by the Renilla luminescence level.

Bioinformatic and phylogenetic analyses. LEUTX sequences were retrieved from NCBI using blastp against the non-redundant databases and, for more selective searches, using tblastn against specific WGS, EST, or RefSeq databases 64. Retrieved records were stored in a database and then manually curated. The retrieved sequences with their accession numbers are shown in Additional file 1: Table S1. Gene identifiers were automatically generated from the extracted database information. Species abbreviations and prefixes used in the figures are shown in Additional file 1: Table S2. MSAs were carried out using MUSCLE 65 and SEAVIEW (Clustalo option) 66. Phylogenetic analyses were carried out using the neighbor-joining variant BioNJ 67 and the maximum likelihood program PhyML 68, as implemented in SEAVIEW (BioNJ option, and PhyML with the default LG model) 66. Clustal_X was also used for MSA handling 69. Based on the MSAs, numerous sequences contained obvious errors in their ORF predictions. Corrections were applied using sequence similarity (the homeodomain and the Leutx domain) as well as genomic synteny between orthologs. The results show that the canonical gene structure of Leutx follows that found in humans: a short first exon, often encoding only 2 residues, with a splice donor site in phase 1; an intron of several kb; a second exon that starts 14 nucleotides upstream of the homeobox and encodes the majority of the homeodomain; a second, shorter intron (often around 1 kb) between the codons for residues 46 and 47 of the homeobox, in phase 0, which is a typical splice position for PRD-LIKE homeobox genes; and a third exon encoding the remainder of the homeodomain and the Leutx domain up to the C-terminus 15. Additional tools used to aid correct ORF identification were the Human Splicing Finder 70, Fgenesh+ (http://www.softberry.com) 71, and the Java software "Sequence Analysis" (http://informagen.com/SA/). In some instances, closely related sequences were compared at the genomic sequence level to identify orthologous exons, using the dot matrix utilities dotlet (http://myhits.isb-sib.ch/cgi-bin/dotlet) 72 and Gepard 73. While best efforts were made to obtain the most likely ORFs, sequencing errors, assembly errors, the fragmentary nature of contigs, and missing sequence information obviously affect the predictions. Pseudogene predictions were based either on the lack of introns (a retrotransposed gene) or on multiple errors (stop codons and/or frameshifts) affecting the most likely ORF, the most likely ORF being defined as the one having the best sequence similarity to orthologous sequences in other species. Predictions of the short N-terminal exons are subject to greater uncertainty.
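A minimal stand-in for the distance and tree-building step is sketched below using Biopython's identity distances and neighbor joining; the published analysis used MUSCLE alignments with BioNJ and PhyML, so this is only a workflow illustration, and the three short aligned sequences are placeholders rather than real Leutx homeodomains.

```python
# Minimal stand-in for the distance/NJ step (the paper used MUSCLE plus BioNJ
# and PhyML; Biopython's identity distance + NJ is used here only to sketch
# the workflow). The aligned sequences are short placeholders of equal length.
from Bio import Phylo
from Bio.Align import MultipleSeqAlignment
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

aln = MultipleSeqAlignment([
    SeqRecord(Seq("RKKRTAITIAQLEALEKVFAE"), id="speciesA"),
    SeqRecord(Seq("RKKRTSITIAQLDALEKVFAE"), id="speciesB"),
    SeqRecord(Seq("RQKRTAISVAQLDALERVFAE"), id="speciesC"),
])

dm = DistanceCalculator("identity").get_distance(aln)   # pairwise identity distances
tree = DistanceTreeConstructor().nj(dm)                 # neighbor-joining tree
Phylo.draw_ascii(tree)
```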
Homology modelling of the LEUTX-DNA interaction. Homeodomain-DNA complexes were identified in the Nucleic Acid Database (NDB) 78. The X-ray structures of relevant complexes were downloaded from the Protein Data Bank (PDB) 79; the complexes were filtered for the specific TAATCC homeodomain binding motif recognized by LEUTX. The sequence of the LEUTX homeodomain was searched against the PDB using blastp at NCBI 64 in order to identify the best-matching PDB entries. The engrailed homeodomain of Drosophila melanogaster at 1.9 Å resolution (PDB ID: 2HDD 13; the quality of the initial model is shown in Additional File 1: Fig. S14) shares 40% sequence identity with LEUTX over the homeodomain and was chosen as the template for modelling LEUTX using the Homodge package in BODIL 80 and MODELLER 81. The structure-based sequence alignment and the optimization of the matching of the query and template protein sequences were done using Vertaa and Malign in BODIL 80. The four N-terminal residues not seen in the template structure were modelled in LEUTX using the 1.85 Å resolution structure of human DLX5 (PDB ID: 4RDU, chain D; Joint Center for Structural Genomics (JCSG), Partnership for Stem Cell Biology (STEMCELL)). The model structures were visualized, and side-chain rotamers were checked for clashes and altered where necessary using the rotamer utility in BODIL 80. Next, the models were energy-minimized using the OPLS-2005 force field in the Maestro protein preparation wizard panel (Maestro version 10.3.015, Schrödinger suite). Structural features of the models were examined for standard acceptable values (e.g., acceptable torsion angles and atom-atom contacts) using the MolProbity web server 82. The model was then examined in detail, mutations of specific residues were introduced, and figures were prepared using PyMOL (The PyMOL Molecular Graphics System, Version 1.6, Schrödinger, LLC) and Inkscape (www.inkscape.org).

Availability of Data and Material. Sequence data are provided in the Supplementary Material; 3D models are available upon request.
Transthyretin Ala 71: A new transthyretin variant in a Spanish family with familial amyloidotic polyneuropathy

Maria do Rosario Almeida, Francisco Lopez-Andreu, Miguel Munar-Qués, Pedro P. Costa, and Maria João Saraiva*

Centro de Estudos de Paramiloidose (M.d.R.A., P.P.C., M.J.S.); Instituto de Ciências Biomédicas, 4000 Porto, Portugal (M.d.R.A., M.J.S.); Hospital General Universitario, Murcia, Spain (F.L.-A.); Grupo de Estudio PAF I, Hosp. General Mallorca, Spain (M.M.Q.); Fax: 351-2606-6106

Familial amyloidotic polyneuropathy (FAP) is an autosomal dominant disease characterized by the systemic deposition of amyloid, with a particular involvement of the peripheral nerves. In FAP the amyloid deposits are mainly composed of a mutated transthyretin (TTR). Transthyretin, a plasma protein carrier of thyroxine and retinol-binding protein, is a tetramer of identical subunits, each containing 127 amino acid residues. Different TTR variants are known to be associated with FAP. The most common is a variant with a substitution of a methionine for a valine at position 30 (TTR Met 30), which has been found in different populations (Saraiva et al., 1988). Several Spanish families have already been reported as carriers of the Met 30 mutation associated with FAP, as demonstrated by an immunoblotting technique (Munar-Qués et al., 1990). This method is based on the detection, by an anti-TTR antibody, of the fragments originated by cleavage of the protein with cyanogen bromide (Saraiva et al., 1985). We report the detection of a new TTR variant in a Spanish individual with FAP. The propositus was a 32-year-old man with a history of hereditary amyloidosis. He presented a sensory-motor neuropathy of the lower limbs, loss of weight, and severe constipation. Amyloid was demonstrated in a nerve biopsy. Immunoblotting of transthyretin isolated from the serum showed that TTR Met 30 was not present. The patient's serum, however, cross-reacted with an antiserum that recognizes distinct neuropathic TTR mutations. This led us to suspect a different TTR variant. In order to search for a different variant, we sequenced the TTR gene. DNA from the propositus was isolated from a paraffin-embedded liver biopsy. We used several sections, 5-10 µm thick, and extracted the paraffin with xylene and ethanol. The tissue was then digested with proteinase K, and the lysate was used for DNA amplification. To sequence the TTR gene, we first symmetrically amplified exons 2, 3, and 4 of the TTR gene with the appropriate pairs of primers, as previously described (Almeida et al., 1992). We then performed an asymmetric PCR with 1 µl of the symmetric PCR product and 50 pmol of one of the primers for each exon (McCabe, 1990). The asymmetric product was precipitated with ethanol and sequenced with Sequenase version 2.0 (USB). The analysis of the entire sequence revealed both a cytosine and a thymine at the second base of the codon for amino acid residue 71 of the polypeptide chain (Fig. 1A). This was the only mutation found and corresponds to a substitution of an alanine residue (mutated allele) for a valine (normal allele): TTR Ala 71. In order to make possible a simple and rapid diagnosis of other at-risk individuals, we searched for an alteration of a restriction site originated by this mutation and found that one restriction site for AciI is created. One-fourth of the symmetric PCR-amplified exon 3 was digested with the enzyme AciI (NEB), and the samples were analysed by electrophoresis in a 4% NuSieve agarose gel.
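The expected gel readout can be sketched computationally: assuming the AciI recognition sequence CCGC (cut after the first C), the toy digest below reproduces the diagnostic 112 bp + 136 bp pattern from a 248 bp amplicon; the amplicon sequence is an artificial placeholder carrying a single site, not the real TTR exon 3.

```python
# Sketch of the RFLP readout: simulate an AciI digest of a 248 bp amplicon and
# report fragment sizes. The amplicon is an A/T placeholder carrying a single
# assumed AciI site (CCGC) positioned to yield the diagnostic 112 bp fragment;
# it is not the real TTR sequence.
import random

random.seed(0)
SITE, CUT_OFFSET = "CCGC", 1                       # assumed AciI site: C^CGC

bases = [random.choice("AT") for _ in range(248)]  # A/T filler, no stray CCGC
amplicon = "".join(bases[:111]) + SITE + "".join(bases[115:])

def digest(seq, site=SITE, offset=CUT_OFFSET):
    cuts, start = [], 0
    while (i := seq.find(site, start)) != -1:
        cuts.append(i + offset)
        start = i + 1
    edges = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(edges, edges[1:])]

print(digest(amplicon))   # -> [112, 136] for the mutant allele
```

For a heterozygous carrier, both the undigested 248 bp band (normal allele) and the 112 and 136 bp fragments (mutant allele) would be expected on the gel.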
We then analysed DNA isolated from the leucocytes of three brothers of the propositus by RFLP analysis. Digestion of the amplified exon 3 of the TTR gene with AciI yields two fragments of 112 and 136 bp when the mutation is present; the fragment of 248 bp corresponds to the undigested amplified DNA. As can be seen in Figure 1B, all the siblings of the propositus analysed were carriers of the mutation. One of these siblings has also developed clinical symptoms of the disease. This substitution represents a mutation rather than a polymorphism, since it was not found in screenings of the population, i.e., in 4,000 samples assayed by high-resolution hybrid isoelectric focusing (Altland et al., 1987). Furthermore, it was the only mutation in the TTR gene associated with clinical disease in two of the members of the kindred analysed here. This new TTR variant, TTR Ala 71, makes this the first Spanish FAP family with a TTR variant different from TTR Met 30, and the variant may exist in other kindreds of different origins, which raises the need for precise and careful molecular genetic diagnosis of each FAP family. Although for some variants the clinical expression of the disease is quite similar, as for this variant and TTR Met 30, it is important to investigate differences in their function, since this could help in the choice of the therapeutic approach to be used.
Sustainable Development of Underground Coal Resources in Shallow Groundwater Areas for Environment and Socio-Economic Considerations: A Case Study of Zhangji Coal Mine in China

Coal resources in China are developed in several regions with shallow groundwater, and large mining-related surface subsidence can have negative impacts on agriculture, land, and water resources as well as existing and future socio-economic resources. All of these are important for sustainable resource development. Dynamic subsidence reclamation (DSR) planning concepts are evaluated here for another case study, with analyses over an 11-year period. In DSR, topsoil, subsoil, farming, and water resources management are dynamically synergized concurrently with mining, ahead of and behind the projected dynamic subsidence trough. The study area involved mining five longwall faces (and post-mining reclamation) to assess whether DSR could have improved both the environment and socio-economic conditions for post-mining land use as compared to using traditional reclamation (TR) and TR-modified (TR(MOD)) approaches. The results show that: (1) Upon final reclamation, farmland area and water resources in DSR and TR(MOD) will have increased by 5.6% and 30.2% as compared to TR. Removing soils ahead of mining, before they submerge into water, is important for farmland reclamation and long-term economic development. (2) Due to topsoil and subsoil separation and storage in the DSR plan, reclaimed farmland productivity should recover quickly, and agricultural production should be larger than under the TR and TR(MOD) plans. (3) For a simplified economic model, the total revenue in the DSR plan should be 2.8 times larger than in the TR plan and 1.2 times larger than in the TR(MOD) plan. (4) The total net revenue of the TR(MOD) plan should be increased by 8.1% as compared with the TR plan. The benefits will be much greater for analyses over longer periods. Overall, the DSR plan will allow for an improved socio-economic environment for new businesses to support disrupted workforces during and after mining.

Introduction and Problem Statement

Mineral extractive industries are an important global industry and form the foundations of our lives [1]. World Mining Data indicates that the industry extracted over 17 billion tons of raw materials with a value of about USD 2.03 trillion, or about 2% of the global GDP, in 2022 [2]. The industry is expected to grow consistent with societal needs [3]. During minerals-related production activities, our land [4,5], water [6,7], and air and ecosystem resources [8] are disturbed short-term and can also be negatively impacted long-term unless disturbed areas are appropriately reclaimed. Society supports mineral extraction activities since they have the advantages of significant economic, social, and other benefits. [...] Feng et al. [39] developed optimum mining plans for the Guqiao coal mine. Li et al. [40] considered coal production and aboveground development or protection to optimize the layout of underground coal mining in Jining city of China. Previous research mainly focused on reclamation time, modifying mining planning, multiple coal seams, and the soil reconstruction procedure. The DSR planning discussed here advances previous DSR tools to mitigate both environmental and socio-economic negative impacts.
In the DSR technology, reclamation unit operations (topsoil and subsoil removal and replacement, farming and crop harvesting, and management of water resources) are dynamically implemented ahead of and behind the current mining areas to minimize negative impacts on land and water resources and to nurture new and/or old socio-economic enterprises in reclaimed areas.

Hypothesis and Goals

Several authors [14,31] have assessed the benefits of DSR for a few mines in China. An opportunity developed to perform similar studies at another case study mine, presented here. The authors extended the earlier planning concepts to include creative ideas, such as: (1) the development of super-farmland areas that should have higher agricultural productivity compared to pre-mined lands, to offset the loss of some farmland to water. These areas would have thicker topsoil and subsoil replacement and better farm management practices in marginally impacted subsided areas, and they would create business and employment opportunities for displaced workers, both in the short term and the long term; (2) intentionally creating strategically located large water resources areas that could be used to support economic development through new towns, recreational sports, and water resources management facilities for multiple towns within the region; and (3) supporting community development efforts to positively impact socio-economic development during and after the mining ceases in the area. Items (2) and (3) above are considered very important now for planning the closure of mines after active mining. Since the case study area had already been mined and reclaimed using the TR plan, the goal of this paper is to assess what benefits could have been achieved if the mining and reclamation had been performed using the authors' proposed TR-modified (TR(MOD)) approaches and DSR concepts. In the TR(MOD) concept, the soils are stripped ahead of mining and used to increase the amount of agricultural land. The research includes analyses and discussions with community leaders in the region to document future needs for improved ecosystems and socio-economic development for the community. Even though scientific economic comparisons could not be undertaken, it was thought that even simplistic subjective comparisons here should lead to meaningful DSR concept implementation projects. It should also help assess the relative importance of, and difficulties in implementing, the proposed DSR concepts.

Surface Description of the Case Study Area

The case study area is located in the northwest part of the Zhangji coal mine in Anhui Province, China (Figure 1). The relatively flat land represents the alluvial plains of the Huaihe River, with surface elevations varying from +17.3 m to +26.5 m above the mean sea level (MSL) (Figure 1a). Additionally, surface slopes are no more than 5°. The topsoil and subsoil thicknesses in the area average about 0.5 m and 1.0 m, respectively. The ground water level (GWL) is about 1.5 m below the ground surface. The area is in a semi-humid monsoon/warm climate zone with four different seasons, an average annual temperature of 15.1 °C, and 926 mm of rainfall occurring mostly in summer (from June to August). The average wind speed is 3.18 m/s, with southeast and east winds in spring and summer, southeast and northeast winds in fall, and northeast and northwest winds in winter.
Farmland accounts for about 69.6% of pre-mining land use, cultivated with rice and wheat (Figure 1b and Table 1) at a multiple cropping index of 200%. Rice usually grows from early June to late September, while wheat is planted from October to June. The estimated production rates of rice and wheat are 7500 kg and 6750 kg per hectare (ha) per year, respectively. In addition, cucumber and tomato can be planted in the local area, with production rates of 15,000 and 22,500 kg per ha per year, respectively. The Huainan government [41] has indicated the sale prices of rice, wheat, cucumber, and tomato to be about 2.68 Renminbi (RMB)/kg, 3.08 RMB/kg, 9.66 RMB/kg, and 7.62 RMB/kg, respectively. The revenue for water resources was about 2.5 RMB/m3 in 2022. Most people in the study area are farmers. Some businesses breed fish in small ponds; carp, grass carp, and crucian carp are commonly produced twice a year, with a total production rate of about 4500 kg/ha. Huainan Agricultural Products indicated the prices of carp, grass carp, and crucian carp to be 12, 16, and 18.5 RMB/kg, respectively, in 2022 [41]. Several people work in the case study coal mine and in two other factories: the Xueyao building material factory in the northeast and the Guanyin rotary kiln factory for firing red brick in the southwest.

Mining Practices and Subsidence Analysis for the Case Study Area

Five single-seam longwall faces (P1-P5) were mined in the area during the period 2015-2020 (Figure 2), with an average mining thickness of 6.0 m and a seam dip of about 6°, as shown. The mining depth varied from 480 m to 575 m. The underground mining area within the boundary was about 127.9 ha. The underground mining areas for Panels 1-5 are about 28.6 ha, 27.2 ha, 26.4 ha, 29.7 ha, and 16.0 ha, respectively. The underground mining area is about 38% of the case study area (333.4 ha). The probability integration approach is used to project surface subsidence after the mining of each panel, with consideration of the original terrain. The subsidence projection parameters are shown in Table 2 and Figure 3.
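For readers unfamiliar with the method, a minimal sketch of the probability integration calculation for a semi-infinite extraction is given below; the subsidence factor and the tangent of the main influence angle are assumed placeholder values, not the calibrated parameters of Table 2.

```python
# Sketch of the probability integration method used for subsidence projection
# (illustrative parameters; the fitted values for this mine are in Table 2).
# For a semi-infinite extraction, W(x) = (Wmax/2) * (erf(sqrt(pi)*x/r) + 1),
# with Wmax = m*q*cos(alpha) and main influence radius r = H/tan(beta).
import math

m, q, alpha_deg = 6.0, 0.75, 6.0   # mining thickness (m), subsidence factor*, dip
H, tan_beta = 520.0, 2.0           # mean mining depth (m), tangent of angle*
# * q and tan_beta are assumed placeholders, not the mine's fitted values.

w_max = m * q * math.cos(math.radians(alpha_deg))
r = H / tan_beta

def subsidence(x):
    """Surface subsidence (m) at horizontal distance x from the panel edge."""
    return 0.5 * w_max * (math.erf(math.sqrt(math.pi) * x / r) + 1.0)

for x in (-200.0, 0.0, 200.0, 600.0):
    print(f"x = {x:6.0f} m -> W = {subsidence(x):.2f} m")
```

With these placeholder values, the maximum subsidence evaluates to about 4.5 m, of the same order as the roughly 4.4 m projected for the case study.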
Several people work in the case study coal mine and two other factories-the Xueyao building material factory in the northeast and the Guanyin rotary kiln factory for firing red brick in the southwest. Mining Practices and Subsidence Analysis for the Case Study Area Five single-seam longwall faces (P1-P5) were mined in the area during the period 2015-2020 ( Figure 2), with an average mining thickness of 6.0 m and a seam dip of about 6°, as shown. The mining depth varied from 480 m to 575 m. The underground mining area within the boundary was about 127.9 ha. The underground mining areas for Panels 1-5 are about 28.6 ha, 27.2 ha, 26.4 ha, 29.7 ha, and 16.0 ha, respectively. The underground mining area is about 38% of the case study area (333.4 ha). The probability integration approach is used to project surface subsidence after the mining of each panel with consideration of the original terrain. The subsidence projection parameters are shown in Table 2 and Figure 3. With mining progress, the subsidence-influenced land and waterlogged areas will increase gradually ( Figure 4 and Table 3). However, farmland area will continue to decrease due to mining subsidence. The maximum projected subsidence after all mining is about 4.4 m, with the final surface area influenced by mining as 333.4 ha (Table 3), which is the case study area. With mining progress, the subsidence-influenced land and waterlogged areas will increase gradually ( Figure 4 and Table 3). However, farmland area will continue to decrease due to mining subsidence. The maximum projected subsidence after all mining is about 4.4 m, with the final surface area influenced by mining as 333.4 ha (Table 3), which is the case study area. With mining progress, the subsidence-influenced land and waterlogged areas will increase gradually ( Figure 4 and Table 3). However, farmland area will continue to decrease due to mining subsidence. The maximum projected subsidence after all mining is about 4.4 m, with the final surface area influenced by mining as 333.4 ha (Table 3), which is the case study area. It is projected that where the post-mining surface elevation is less than +17.7 m, the land would get submerged under water due to mining subsidence. During panel 1 mining, some regions would submerge into water and land use patterns would begin to change. Mining and water-submerged land areas continue to grow with additional mining, as shown in Figure 4. After the mining of panel 5, subsidence waterlogged areas will account for 36.0%, while the farmland area will decrease from 69.6% pre-mining to 43.0% postmining ( Table 3). The mining of all five panels will result in: (1) the loss of about 26.6% of farmland for cultivation; and (2) increasing waterlogged areas to about 36.0% of the case study area ( Figure 4 and Table 3). These impacts will also have negative impacts on the socio-economic health of the communities in the region, both during active mining and after mining ceases. Furthermore, erosion, sedimentation, and acid mine drainage potential are likely to develop in some areas due to coal composition. Water resources would likely have high dissolved and suspended solids, and the utilization of water resources would involve additional treatment costs as well. TR Reclamation Planning This would involve grading the land surface appropriately after mining panel 5, replacing the available soils around farmland areas, and vegetating the areas after surface movements have stabilized. 
TR Reclamation Planning

This plan would involve grading the land surface appropriately after mining panel 5, replacing the available soils around farmland areas, and vegetating the areas after surface movements have stabilized. In this plan, the topsoil and subsoil are not removed during active mining; stripping and backfilling occur only after all mining is completed. The proposed reclaimed farmland areas are designed to have a surface elevation of +21.2 m MSL based on the case study area characteristics. Soils in and around shallow water-submerged areas (cut areas with surface elevations higher than +17.2 m and less than +17.7 m) will be stripped down to an elevation of +17.2 m, and these soils will be backfilled into areas with relatively small subsidence to reclaim them as farmland (filling areas). Areas with small subsidence and surface elevations higher than +21.2 m (stripping areas) will be stripped down to +21.2 m and then reclaimed as farmland. Using appropriate post-subsidence topographic maps and the above design values, it is estimated that about 0.04 million cubic meters (m cu. m) and 3.05 m cu. m of soils will be obtained from the cut and stripping areas, respectively. Thus, about 3.09 m cu. m of soils will be backfilled in filling areas to reclaim them as farmland. Upon completion of all reclamation, it is projected (Figure 5) that: (1) there will be about 240.7 ha of farmland, or about 72.2% of the case study area; and (2) a 92.7 ha water reservoir in the center holding about 3.0 m cu. m of water. Upon reclamation, revenue sources will include rice and wheat planted on reclaimed farmland, water resources in the reclaimed water reservoir, and small businesses that might develop after reclamation is completed.

TR(MOD) Reclamation Planning

In order to modify the TR plan into a TR(MOD) plan, it was planned to strip soils ahead of mining and store them before they would submerge into water. However, these soils will not be used for reclamation until year 6, after all mining is completed. This allows agricultural production on the soils until just before they are projected to be submerged. A brief description of the plans while mining each panel is given below.

Mining panel 1: Subsidence projections show that the central red area (Figure 6, panel 1) will subside below the groundwater table. Therefore, prior to subsidence, soils in the red area (cut area C) will be stripped in advance to achieve an elevation of +17.2 m. About 0.59 m cu. m of soils will be stripped from this area to form a water reservoir. Area C will eventually form a deep-water reservoir after the mining of panel 1. The stripped soils will be stored in the brown area (B) on both ends of the proposed water reservoir. Appropriate ditches will be constructed to direct the surrounding surface water into area C to protect the current farmland.

Mining panel 2: During this mining, soils in area C adjacent to the blue water reservoir will be stripped. About 0.13 m cu. m of soils are projected to be available here and will be stored in area B. The water reservoir area will be expanded toward the southwest.
Some additional ditches will be needed to direct the surrounding surface water into area C and to protect the current farmland.

Mining panel 3: This panel is adjacent to panel 1 in the northwest mining area. Therefore, about 0.15 m cu. m of soils in area C will be stripped ahead of mining to extend the water reservoir toward the northwest. The stripped soils will be stored in area B and, as before, some ditches may be needed to channel surface waters into area C and to protect the current farmland.

Mining panel 4: Waterlogged areas due to mining subsidence are not projected to increase much during this mining, since panel 4 is around the center of panels 1 and 2. Only a small area C in the southeast corner is expected to get submerged. Before that, about 0.02 m cu. m of soils will be stripped and stored in area B, and some ditches will channel surface waters into area C to protect current farmland.

Mining panel 5: During this panel mining, waterlogged areas will extend northwest. About 0.05 m cu. m of soils will be stripped from area C and stored in area B, and ditches will channel surface water into area C to protect current farmland.

After mining panel 5: After the mining of all five panels, reclamation activities involving the grading and replacement of soils will begin. The green areas with a relatively small amount of subsidence will be backfilled with the soils stored in area B over the past five years to achieve an elevation of +21.2 m for cultivation as farmland (filling area F). Since the purple area has a higher elevation than the designed farmland elevation, it will be stripped down to the +21.2 m level (stripping area S), with about 3.05 m cu. m of subsoil obtained from this area. The stripped soils from areas B and S will be spread out over area F to reclaim it as farmland. During the entire reclamation process, about 3.99 m cu. m of soil will be stripped and backfilled.

In the TR(MOD) plan, the entire mining area will be reclaimed to 259.5 ha of farmland (77.9% of the case study area), and a 73.9 ha water reservoir will form around the center of the mining area, with a volume capacity of 3.85 m cu. m of water. This amounts to about 5.6% more farmland than in the TR plan. The water reservoir will, however, be reduced from 92.7 ha in the TR plan to 73.9 ha in the TR(MOD) plan. As in the TR plan, revenue sources will include rice and wheat on reclaimed farmland, water resources in the reclaimed water reservoir, and any small businesses that may develop after reclamation.

DSR Reclamation Planning: Concepts and Implementation

Here, reclamation unit operations (topsoil and subsoil removal and replacement, farming and crop harvesting, and harnessing water resources) are dynamically implemented concurrently with mining, ahead of and behind the projected subsidence areas.
This is done to minimize land and water resource impacts and to nurture the construction of new and/or existing socio-economic enterprises as mining progresses. Since topsoil and subsoil are relatively rich in organic matter and are critical for plant growth, they are stripped separately before the land submerges into water during the mining of each panel (Figure 7). The soils are backfilled in planned farming areas concurrently with mining. In planning DSR, two novel concepts are also considered by the authors.

(1) The development of super-farmland of high agricultural productivity to offset the potential loss of land area and productivity in farming areas. In these areas, the separately stripped topsoil and subsoil will be spread out to give a much larger thickness of soils than typically found on farmland. These areas will be further augmented by productivity-enhancing agricultural practices. The desired result is to restore and/or increase agricultural production to a higher quality and volume by practicing DSR on small reclaimed areas. These areas may also be used for planting vegetables and fruits for distribution in the region through cooperative Farmers Markets programs and for developing small businesses to enhance socio-economic development activities.

(2) The development of planned water accumulation reservoir(s) to serve regional water supplies for different uses such as drinking water, irrigation, recreational sports, fish hatcheries, and other needed small businesses. These will be developed in concert with regional community leaders to support socio-economic planning post-mining.

DSR plans are illustrated below using Figure 7.

Mining panel 1: Subsidence projections show that the central red area will subside below the groundwater table (Figure 7, panel 1). Therefore, prior to subsidence, soils in the red area (cut area C) will be stripped in advance to achieve an elevation of +17.2 m. About 0.22 m cu. m of topsoil and 0.37 m cu. m of subsoil will be stripped from area C to form a water reservoir.
Concurrently, the green area on both ends of the proposed water reservoir will be backfilled to reclaim it as farmland (filling area F). The subsoil from area C will be moved to backfill area F first. Then, the stripped topsoil from area C will be placed on top to develop a highly productive super-farmland (SR), which will have about twice the typical topsoil thickness found on farmland. Area C will form a deep-water reservoir (reclaimed water area W) after the mining of panel 1.

Mining panel 2: During this panel mining, soils in area C adjacent to the blue water reservoir W will be stripped. About 0.06 m cu. m of topsoil and 0.07 m cu. m of subsoil should be available here. The water reservoir area will expand southwest. Simultaneously, area F adjacent to area W will be backfilled to reclaim it as super-farmland: the stripped subsoil from area C will be spread out in area F first, and the stripped topsoil from area C will then be placed on top, again giving about twice the typical topsoil thickness.

Mining panel 3: This panel is adjacent to panel 1 in the northwest mining area. Therefore, soils in area C will be stripped ahead of mining. About 0.07 m cu. m of topsoil and 0.08 m cu. m of subsoil will be stripped to expand the water reservoir toward the northwest. Simultaneously, area F adjacent to area C will be backfilled to reclaim it as super-farmland: the stripped subsoil from area C will be backfilled in area F first, and the stripped topsoil from area C will then be spread out over it, creating a super-farmland with about three times the usual topsoil thickness.

Mining panel 4: During this mining, waterlogged areas due to mining subsidence are not projected to increase much, since panel 4 is around the center of panels 1 and 2. Only a small area C in the southeast corner is expected to get submerged in water. Before that occurs, about 0.01 m cu. m of topsoil and 0.01 m cu. m of subsoil will be stripped separately. Simultaneously, the adjacent area F will be backfilled with these soils to reclaim it as super-farmland, with the subsoil spread out first and the topsoil on top, giving about 1.1 times the usual topsoil thickness.

Mining panel 5: During this mining, waterlogged areas will spread toward the northwest. Therefore, 0.02 m cu. m of topsoil and 0.03 m cu. m of subsoil will be stripped separately from area C. Simultaneously, area F will be backfilled with these soils to achieve an elevation of +21.2 m for farmland production. Since the purple area has a higher elevation than the designed farmland elevation, it will be stripped down to +21.2 m; this area is designated as stripping area S. To achieve this, about 0.56 and 0.59 m cu. m of topsoil will be stripped first from areas F and S, respectively. About 3.05 m cu. m of subsoil will then be obtained from area S. The stripped subsoils from areas C and S will be spread out over area F. Finally, the stripped topsoil from areas C and F will be backfilled over area F to reclaim it as usual farmland, and the stripped topsoil from area S will be backfilled into area S to reclaim it as usual farmland.

Using the above DSR approach, the entire mining area will be reclaimed to 259.5 ha of farmland (77.9% of the case study area), and a 73.9 ha water reservoir will form around the center, with a volume capacity of 3.8 m cu. m of water.
In the entire reclamation process, about 4.34 m cu. m of topsoil and subsoil will be stripped and backfilled. To support local socio-economic development and create more job opportunities, cucumber and tomato vegetables will be planted on reclaimed super-farmland, and rice and wheat on reclaimed usual farmland. Simultaneously, based on input from local community organizations, other businesses can be developed to breed fish or offer water sports around the developed water reservoir.

Land Resources

In the DSR plan, the total reclaimed farmland area is always larger during the entire mining period than in the TR and TR(MOD) plans, since reclamation operations are conducted concurrently ahead of and behind the mining face and in adjoining areas. These increases persist until year 5, after which the differences between the DSR and TR/TR(MOD) plans start to decrease because reclamation is initiated in the TR/TR(MOD) plans in year 6 (Table 4). Upon completion of all land reclamation in the three plans, the farmland area is still 5.6% larger in DSR than in the TR plan. In TR(MOD), however, because soils are stripped before they submerge into water, the farmland area is the same as in the DSR plan. It is noted that stripping soils before they submerge into water is very important for farmland reclamation and long-term development.

Water and Fishery Resources

In the DSR and TR(MOD) plans, waterlogged areas due to mining subsidence are smaller than in the TR plan during the five-year mining period, since soils are stripped to form a deeper water reservoir. However, the water resource volume is much larger than in the TR plan because of the greater water depth in the waterlogged areas. In the DSR plan, since stripped soils are backfilled concurrently with mining to create farmland, the water resource volume is larger than in the TR(MOD) plan until year 5. Starting in year 6, soil backfilling is performed in the TR(MOD) plan, and its water resource volume becomes the same as in the DSR plan. Upon the completion of reclamation in the three plans, there is about 0.8 m cu. m more water in the DSR and TR(MOD) plans, or about 30.2% more than in the TR plan (Table 5). In the DSR plan, businesses will be developed to breed fish and introduce water sports in the developed water reservoir; such businesses were not considered in previous DSR research [31]. Based on the local situation, carp, grass carp, and crucian carp can be bred together in a ratio of 3:1:1. The expected production of the three fish species from the reservoir is shown in Table 6. An estimate of the revenues from fish production is included later.

Agricultural Resources

In this discussion of agricultural production in the TR, TR(MOD), and DSR plans, it is assumed that only rice and wheat are planted on reclaimed farmland. In the DSR plan, topsoil and subsoil are stripped and backfilled separately before they submerge into subsided waterlogged areas; such separation was not considered in previous research [10,13,14,30,31,34]. Therefore, the productivity of reclaimed farmland should recover quickly after reclamation and approach about 100% of the pre-mining values during the first year and beyond. In the TR(MOD) plan, productivity will recover more slowly, since topsoil and subsoil are mixed during the reclamation process, with projected productivity values of about 60%, 70%, 80%, 90%, and 100% over the five-year period after reclamation is initiated in year 6.
In the TR plan, the productivity of the reclaimed farmland is expected to recover even more slowly, since topsoil and subsoil are mixed and some soils submerge into water during the process. The productivity of reclaimed farmland is projected to be 50%, 60%, 70%, 80%, 90%, and 100% over the six-year period after reclamation is initiated in year 6. Therefore, the differences in rice and wheat production between the DSR and TR/TR(MOD) plans will continue to increase until year 6 (Table 7) but should then decrease slowly over the following five years because of reclamation in the TR/TR(MOD) plans and their recovering agricultural productivity. However, because of the separation of topsoil and subsoil and the larger reclaimed farmland area in the DSR plan, rice and wheat production is always larger than in the TR and TR(MOD) plans. Thus, separating topsoil and subsoil can significantly affect farm productivity.

In the DSR plan, cucumber and tomato are grown in super-farmland areas, which was never considered in previous research [10,13,14,28,30-40]. In addition, rice and wheat are planted on the reclaimed usual farmland. Because super-farmland is created along with mining during the first four years, the production of cucumber and tomato crops will steadily increase (Table 8) and then level off. However, rice and wheat production will decrease in the first four years due to the reduced amount of farmland after mining, but will then increase substantially after reclamation is initiated in year 5 and level off thereafter.

Socio-Economic Impacts

Costs are primarily related to soil handling and grading for reclamation; stripping and filling one cubic meter of soil is estimated to cost 20 RMB. In the TR plan, since reclamation is initiated in year 6, there is no reclamation cost during the first five years. In the TR(MOD) plan, soils will be stripped during years 1-5 and then used to reclaim farmland in year 6, so some reclamation costs will occur during years 1-6. Similarly, in the DSR plan, since reclamation costs are primarily associated with soil stripping and backfilling, all the reclamation costs will be incurred by the end of year 5. In the TR and TR(MOD) plans, benefits accrue from rice and wheat production on farming areas and from water resources in the reservoir. In the DSR plan, however, revenues come not only from rice and wheat production on reclaimed usual farmland, but also from cucumber and tomato production on super-farmland; in addition, revenues are obtained from fish breeding in the water reservoir. The estimates of the costs and benefits are shown in Table 9. These estimates do not account for the time value of money or price inflation over later years and are therefore conservative and simple; they thereby favor TR reclamation in the comparison. Over the 11-year period, the total net revenue in the DSR plan is 2.8 times and 1.2 times more than that in the TR and TR(MOD) plans, respectively. The total net revenue of the TR(MOD) plan should increase by 8.1% as compared with the TR plan. The benefits will be much greater for analyses over longer periods. The net revenue in the DSR plan is larger than in the TR and TR(MOD) plans in every year except year 5, since many reclamation and soil-handling activities are completed during that year. Similarly, the difference in net revenue between DSR and TR/TR(MOD) is greatest in year 6, when most reclamation work is initiated in the TR and TR(MOD) plans. Thus, one would expect much larger economic benefits with the DSR plan when analyzing performance over a longer period of time, which was not considered in previous research [10,13,30,32-40].
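To make the productivity schedules above and their revenue effect concrete, the following Python sketch compares the crop revenue produced by one hectare of reclaimed farmland over years 6-11 in the three plans, using the recovery percentages stated above and the per-hectare rice-plus-wheat revenue computed earlier. It deliberately ignores differences in reclaimed area, vegetable and fish revenues, and reclamation costs, so it illustrates only the effect of the recovery schedules, not the full Table 9 accounting.

    # Productivity recovery of reclaimed farmland over years 6-11, as
    # stated above: DSR ~100% from the first year after reclamation,
    # TR(MOD) ramps 60-100%, TR ramps 50-100%.
    RECOVERY = {
        "DSR":     [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        "TR(MOD)": [0.6, 0.7, 0.8, 0.9, 1.0, 1.0],
        "TR":      [0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    }
    REV_PER_HA = 40890  # rice + wheat revenue, RMB/ha/yr (computed earlier)

    for plan, fractions in RECOVERY.items():
        total = sum(f * REV_PER_HA for f in fractions)
        print(f"{plan:8s} revenue per reclaimed hectare, years 6-11: "
              f"{total:,.0f} RMB")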
Table 9. Net revenue in three plans.

Discussion

With significant coal production expected from underground mining in China over the next few decades and global pressures for sustainable development, there have been significant advances in research on reclaiming unstable subsided lands. Underground mining environments similar to those in China are not common globally; therefore, most research on the topic has been conducted in China, and progress has been slow and evolutionary. As discussed earlier, Zhao [32] and Xiao [33] analyzed the timing of reclamation initiation without considering the economic benefits. Zhang [37] extended the concepts to mining multiple coal seams. Li et al. [10,13] and Feng et al. [39] focused on underground mine plans to minimize subsidence impacts without considering the original surface topography, and Chen and Hang [38] focused on soil reconstruction procedures. Hu et al. [31] developed DSR concepts for mining areas with shallow groundwater resources without considering topsoil and subsoil separation, the socio-economic development concepts of super-farmland, large water storage, or associated small- and large-scale business development. The authors here have therefore significantly advanced DSR planning concepts to include: (1) creating super-farmland areas of high agricultural productivity to offset the loss of farmland submerged under water, and planting fruits and vegetables to diversify and enhance revenue streams; (2) creating large areas of planned, quality water resources to serve existing and new or improved socio-economic ventures to support long-term community needs at the local and regional levels, including new water sports businesses and fish breeding areas; (3) performing simple cost-benefit analyses for the modified DSR concepts; (4) developing a TR(MOD) plan to analyze the importance of stripping soils before they submerge into water and of separating topsoil and subsoil; and (5) considering pre-mining surface topography to predict subsidence and developing the DSR, TR, and TR(MOD) reclamation plans based on dynamic post-mining topography.

To the best of the authors' knowledge, such research is not being conducted elsewhere in the world, since such mining conditions do not exist there. An invited oral presentation was made by Dr. Chugh to World Mining Congress professionals in India in 2019 (no publication) and received significant interest. Previous limited field implementations of DSR concepts in China [30,31,34] led to the following observations: (1) the soundness of the technical concepts was well received; (2) farmers were hesitant to strip topsoil and subsoil prior to the soil being submerged in water because of the reduced agricultural production in the interim; (3) mining companies allocated more resources to production than to DSR planning, with less than optimal efficiencies in the reclamation processes; and (4) socio-economic considerations and community involvement were not addressed. The above was to be expected considering the newness of the concepts. The authors propose to widely disseminate these research findings to mining professionals and to government agencies responsible for regional development in order to identify project opportunities for implementing these concepts in single-seam and multiple-seam mining areas.
Such projects should also consider optimizing mine and DSR planning together to minimize the production cost to the consumer. The authors hope that this study will encourage mine operators to implement DSR concepts in planning new and ongoing mineral development projects. DSR is a powerful engineering and planning concept that can minimize the land, water [14,31], and air impacts of mining and enhance socio-economic conditions under a variety of mining conditions. Furthermore, it should not be limited to reclamation planning alone: mining and DSR planning should be integrated to develop jointly optimal mining and reclamation plans [10,13,39] that enhance the profitability of the mining venture and its sustainability. That integration should in turn generate new ideas for both mining and reclamation and improve the sustainability of mineral projects. Toward this goal, the authors recommend that every mining project, irrespective of its stage of development, should have a steering committee consisting of mining, business, and regional development professionals to review alternative opportunities for mining and reclamation and to improve project sustainability while maximizing the profit potential. That would also ensure the development of sound mine closure plans [42] and the gradual adoption of the idea that "mine closure must begin the day mining starts" while maximizing profit potential.

Conclusions

This paper has attempted a conceptual implementation of DSR planning concepts at a large coal mine involving five single-seam longwall faces in a mining area with a shallow groundwater table about 1.5 m below the ground surface and 4.4 m of maximum surface subsidence due to mining. The study involved mining and post-mining reclamation to assess whether DSR could improve both the environment and the socio-economic conditions for post-mining land use as compared to the TR approaches used in China. In DSR, topsoil and subsoil handling, farming, and water resources management were dynamically synergized concurrently with mining. Within TR, two approaches were considered: (1) TR proper, in which all reclamation activities are initiated only after all mining has been completed in the area, and land is allowed to submerge into water in the subsiding areas; and (2) TR(MOD), in which soils likely to be submerged are stripped ahead of mining and stored, without separation into topsoil and subsoil, for reclamation after all mining is completed in the area. The authors undertook the current analyses for this hypothetical case as soon as the data became available.

A comparison of the three analyzed plans shows that: (1) upon final reclamation, the farmland area and water resources in the DSR and TR(MOD) plans are increased by 5.6% and 30.2%, respectively, as compared to the TR plan; stripping soils before they submerge into water is important for farmland reclamation and long-term economic development; (2) due to topsoil and subsoil separation in the DSR plan, reclaimed farmland productivity should recover quickly, and agricultural production would be larger than in the TR and TR(MOD) plans; (3) the total estimated net revenue in the DSR plan should be 2.8 times more than in the TR plan and 1.2 times more than in the TR(MOD) plan; and (4) the total net revenue of the TR(MOD) plan should increase by 8.1% as compared with the TR plan. The above net revenue benefits should be even greater for analyses over longer periods.
Furthermore, in DSR, farmland and water resources remain available for much longer throughout the active mining period. The mined and reclaimed areas in DSR can therefore provide an improved socio-economic environment through community business development and more stable labor markets, both during mining and after mining ceases in the area. Although the solutions and sample calculations presented are simple, the paper clearly demonstrates the usefulness of the DSR concepts for minimizing negative impacts to the environment while enhancing the long-term socio-economic environment.

Author Contributions: Conceptualization, R.Z. and Y.P.C.; methodology, R.Z. and Y.P.C.; validation, R.Z.; formal analysis, R.Z.; data curation, R.Z.; writing - review and editing, R.Z. and Y.P.C.; supervision, Y.P.C. All authors have read and agreed to the published version of the manuscript.

Funding: This study was financially supported by the startup fund for scientific research provided by Shijiazhuang Tiedao University (Grant No. 3010090).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data used to support the findings of this study are available from the corresponding author.
An Experimental Study of Algorithms for Geodesic Shortest Paths in the Constant-Workspace Model

We perform an experimental evaluation of algorithms for finding geodesic shortest paths between two points inside a simple polygon in the constant-workspace model. In this model, the input resides in a read-only array that can be accessed at random. In addition, the algorithm may use a constant number of words for reading and for writing. The constant-workspace model has been studied extensively in recent years, and algorithms for geodesic shortest paths have received particular attention. We have implemented three such algorithms in Python, and we compare them to the classic algorithm by Lee and Preparata that uses linear time and linear space. We also clarify a few implementation details that were missing in the original description of the algorithms. Our experiments show that all algorithms perform as advertised in the original works and according to the theoretical guarantees. However, the constant factors in the running times turn out to be rather large for the algorithms to be fully useful in practice.

Introduction

In recent years, the constant-workspace model has enjoyed growing popularity in the computational geometry community [6]. Motivated by the increasing deployment of small devices with limited memory capacities, the goal is to develop simple and efficient algorithms for the situation where little workspace is available. The model posits that the input resides in a read-only array that can be accessed at random. In addition, the algorithm may use a constant number of memory words for reading and for writing. The output must be written to a write-only memory that cannot be accessed again for reading. Following the initial work by Asano et al. from 2011 [2], numerous results have been published for this model, leading to a solid theoretical foundation for dealing with geometric problems when working memory is scarce. The recent survey by Banyassady et al. [6] gives an overview of the problems that have been considered and of the results that are available for them.

But how do these theoretical results measure up in practice, particularly in view of the original motivation? To investigate this question, we have implemented three different constant-workspace algorithms for computing geodesic shortest paths in simple polygons. This is one of the first problems to have been studied in the constant-workspace model [2,3]. Given that the general shortest path problem is unlikely to be amenable to constant-workspace algorithms (it is NL-complete [18]), it may come as a surprise that a solution for the geodesic case exists at all. By now, several algorithms are known, both for constant workspace and in the time-space trade-off regime, where the number of available cells of working memory may range from constant to linear [1,12]. Due to the wide variety of approaches and the fundamental nature of the problem, geodesic shortest paths are a natural candidate for a deeper experimental study. Our experiments show that all three constant-workspace algorithms work well in practice and live up to their theoretical guarantees. However, the large running times make them ill-suited for very large input sizes. During our implementation, we also noticed some missing details in the original publications, and we explain below how we have dealt with them. As far as we know, our study constitutes the first large-scale comparative evaluation of geometric algorithms in the constant-workspace model.
A previous implementation study, by Baffier et al. [5], focused on time-space trade-offs for stack-based algorithms and was centered on different applications of a powerful algorithmic technique. Given the practical motivation and wide applicability of constant-workspace algorithms for geometric problems, we hope that our work will lead to further experimental studies in this direction.

The Four Shortest-Path Algorithms

We provide a brief summary of each of the four algorithms in our implementation; further details can be found in the original papers [3,2,14]. In each case, we use P to denote a simple input polygon in the plane with n vertices. We consider P to be a closed, connected subset of the plane. Given two points s, t ∈ P, our goal is to compute a shortest path from s to t (with respect to the Euclidean length) that lies completely inside P.

The Classic Algorithm by Lee and Preparata

This is the classic linear-space algorithm for the geodesic shortest path problem that can be found in textbooks [14,11]. It works as follows: we triangulate P, and we find the triangle that contains s and the triangle that contains t. Next, we determine the unique path between these two triangles in the dual graph of the triangulation. The path is unique since the dual graph of a triangulation of a simple polygon is a tree [7]. We obtain a sequence e1, ..., em of diagonals (incident to pairs of consecutive triangles on the dual path) crossed by the geodesic shortest path between s and t, in that order. The algorithm walks along these diagonals while maintaining a funnel. The funnel consists of a cusp p, initialized to be s, and two concave chains from p to the two endpoints of the current diagonal ei. An example of these funnels can be found in Fig. 1. In each step i of the algorithm, i = 1, ..., m-1, we update the funnel for ei to the funnel for ei+1. There are two cases: (i) if ei+1 remains visible from the cusp p, we update the appropriate concave chain, using a variant of Graham's scan; (ii) if ei+1 is no longer visible from p, we proceed along the appropriate chain until we find the cusp of the next funnel. We output the vertices encountered along the way as part of the shortest path. Implemented in the right way, this procedure takes linear time and space.

Using Constrained Delaunay Triangulations

The first constant-workspace algorithm for geodesic shortest paths in simple polygons was presented by Asano et al. [3] in 2011. It is called Delaunay, and it constitutes a relatively direct adaptation of the method of Lee and Preparata to the constant-workspace model. In the constant-workspace model, we cannot explicitly compute and store a triangulation of P. Instead, we use a uniquely defined implicit triangulation of P, namely the constrained Delaunay triangulation of P [9]. In this variant of the classic Delaunay triangulation, we prescribe the edges of P to be part of the desired triangulation. Then, the additional triangulation edges cannot cross the prescribed edges. Thus, unlike in the original Delaunay triangulation, the circumcircle of a triangle may contain other vertices of P, as long as the line segment from a triangle endpoint to the vertex crosses a prescribed polygon edge; see Fig. 2 for an example. The constrained Delaunay triangulation of P can be navigated efficiently using constant workspace: given a diagonal or a polygon edge, we can find the two incident triangles in O(n²) time [3].
Using an O(n)-time constant-workspace algorithm for finding shortest paths in trees, also given by Asano et al. [3], we can thus enumerate all triangles on the dual path between the constrained Delaunay triangle that contains s and the constrained Delaunay triangle that contains t in O(n³) time. As in the algorithm by Lee and Preparata, we need to maintain the visibility funnel while walking along the dual path of the constrained Delaunay triangulation. Instead of the complete chains, we store only the two line segments that define the current visibility cone (essentially the cusp together with the first vertex of each chain). We recompute the two chains whenever it becomes necessary. The total running time of the algorithm is O(n³). More details can be found in the paper by Asano et al. [3].

Using Trapezoidal Decompositions

This algorithm was also proposed by Asano et al. [3], as a faster alternative to the algorithm that uses constrained Delaunay triangulations. It is based on the same principle as Delaunay, but it uses the trapezoidal decomposition of P instead of the Delaunay triangulation [7]. See Fig. 3 for a depiction of the decomposition and of the symbolic perturbation method used to avoid a general position assumption. In the algorithm, we compute a trapezoidal decomposition of P, and we follow the dual path between the trapezoid that contains s and the trapezoid that contains t, while maintaining a funnel and outputting the new vertices of the geodesic shortest path as they are discovered. Assuming general position, we can find all incident trapezoids of the current trapezoid and determine how to continue on the way to t in O(n) time (instead of O(n²) time in the case of the Delaunay algorithm). Since there are still O(n) steps, the running time improves to O(n²).

[Figure 3: (a) The trapezoidal decomposition is obtained by shooting rays up and down at every vertex. (b) Shifting all points to the right by yε makes sure no two share the same x-coordinate.]

The Makestep Algorithm

This algorithm was presented by Asano et al. [2]. It uses a direct approach to the geodesic shortest path problem, and unlike the two previous algorithms, it does not try to mimic the algorithm by Lee and Preparata. In the traditional model, this approach would be deemed too inefficient, but in the constant-workspace world, its simplicity turns out to be beneficial. The main idea is as follows: we maintain a current vertex p of the geodesic shortest path, together with a visibility cone, defined by two points q1 and q2 on the boundary of P. The segments pq1 and pq2 cut off a subpolygon P′ ⊆ P. We maintain the invariant that the target t lies in P′. In each step, we gradually shrink P′ by advancing q1 and q2, sometimes also relocating p and outputting a new vertex of the geodesic shortest path. These steps are illustrated in Fig. 4. It is possible to realize the shrinking steps in such a way that there are only O(n) of them. Each shrinking step takes O(n) time, so the total running time of the Makestep algorithm is O(n²).

[Figure 4: P′ is the subset of P cut off by the three points p, q1, and q2; both points are convex, and one is advanced.]

Our Implementation

We have implemented the four algorithms from Section 2 in Python [15]. For graphical output and for plots, we use the matplotlib library [13]. Even though there are some packages for Python that provide geometric objects such as line segments, circles, etc., none of them seemed suitable for our needs. Thus, we decided to implement all geometric primitives on our own. The source code of the implementation is available online in a Git repository.
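As an example of such a hand-rolled primitive, the following minimal Python sketch shows the three-point orientation test that the implementation relies on (the determinant test described under General Implementation Details below); the function name and the tuple-based point representation are illustrative and not necessarily those used in the repository.

    def orientation(a, b, c):
        # Sign of the determinant of (b - a, c - a): returns 1 if c lies
        # to the left of the directed line a -> b, -1 if to the right,
        # and 0 if the three points are collinear.
        d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (d > 0) - (d < 0)

    # Example: the point (0, 1) lies to the left of the positive x-axis.
    assert orientation((0, 0), (1, 0), (0, 1)) == 1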
In order to apply the algorithm Lee-Preparata, we must be able to triangulate the simple input polygon P efficiently. Since implementing an efficient polygon triangulation algorithm can be challenging and since this is not the main objective of our study, we relied for this on the Python Triangle library by Rufat [16], a Python wrapper for Shewchuk's Triangle, which was written in C [17]. We note that Triangle does not provide a linear-time triangulation algorithm, which would be needed to achieve the theoretically possible linear running time for the shortest path algorithm. Instead, it contains three different implementations, namely Fortune's sweep line algorithm, a randomized incremental construction, and a divide-and-conquer method. All three implementations give a running time of O(n log n). For our study, we used the divide-and-conquer algorithm, the default choice. In the evaluation, we did not include the triangulation phase in the time and memory measurement for running the algorithm by Lee and Preparata.

General Implementation Details

All three constant-workspace algorithms have been presented with a general position assumption: Delaunay and Makestep assume that no three vertices lie on a line, while Trapezoid assumes that no two vertices have the same x-coordinate. Our implementations of Delaunay and Makestep also assume general position, but they throw exceptions if a non-recoverable general position violation is encountered. Most violations, however, can be dealt with easily in our code; e.g., when trying to find the constrained Delaunay triangle(s) for a diagonal, we can simply ignore points collinear to this diagonal. For the case of Trapezoid, Asano et al. [3] described how to enforce the general position assumption by changing the x-coordinate of every vertex to x + εy for some small enough ε > 0 such that the x-order of all vertices is maintained. In our implementation, we apply this method to every polygon in which two vertices share the same x-coordinate.

The coordinates are stored as 64-bit IEEE 754 floats. In order to prevent problems with floating point precision or rounding, we take the following steps: first, we never explicitly calculate angles, but we rely on the usual three-point orientation test, i.e., the computation of a determinant to find the position of a point c relative to the directed line through the points a and b [7]. Second, if an algorithm needs to place a point somewhere in the relative interior of a polygon edge, we store an additional edge reference to account for inaccuracies when calculating the new point's coordinates.

Implementing the Algorithm by Lee and Preparata

The algorithm by Lee and Preparata can be implemented easily, in a straightforward fashion. There are no particular edge cases or details that we need to take care of. Disregarding the code for the geometric primitives, the algorithm needs less than half as many lines of code as the other algorithms.

Implementing Delaunay and Trapezoid

In both constant-workspace adaptations of the algorithm by Lee and Preparata, we encounter the following problem: whenever the cusp of the current funnel changes, we need to find the cusp of the new funnel, and we need to find the piece of the geodesic shortest path that connects the former cusp to the new cusp. In their description of the algorithm, Asano et al.
[3] only say that this should be done with an application of gift wrapping (Jarvis' march) [7]. While implementing these two algorithms, we noticed that a naive gift-wrapping step that considers all the vertices of P between the cusp of the current funnel and the next diagonal might include vertices that are not visible inside the polygon. Figure 5 shows an example: here b is the next diagonal, and naively we would look at all vertices along the polygon boundary between v and w. Hence, u would be considered as a gift-wrapping candidate, and since it forms the largest angle with the cusp and v (in particular, an angle that is larger than the angle formed by w), it would be chosen as the next point, even though w should be the cusp of the next funnel. A simple fix for this problem would be an explicit check for visibility in each gift-wrapping step. Unfortunately, the resulting increase in the running time would be too expensive for a realistic implementation of the algorithms. Our solution for Trapezoid is to consider only vertices whose x-coordinate lies between the cusp of the current funnel and the point where the current visibility cone crosses the boundary of P for the first time. For ease of implementation, one can also limit it to the x-coordinate of the last trapezoid boundary visible from the cusp. Figure 5 shows this as the dotted green region. For Delaunay, a similar approach can be used. The only difference is that the triangle boundaries in general are not vertical lines.
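Within the restricted candidate range, the cusp-finding step is then plain gift wrapping. The following minimal Python sketch shows one such step, reusing the orientation function from the earlier sketch; the helper name and list-based interface are ours and stand in for the repository's actual code.

    def gift_wrap_step(cusp, candidates):
        # One Jarvis-march step: among the (pre-filtered) candidate
        # vertices, pick the counterclockwise-extreme one, i.e., the
        # vertex v such that no other candidate lies to the left of the
        # directed line cusp -> v (assumes general position).
        best = candidates[0]
        for v in candidates[1:]:
            if orientation(cusp, best, v) > 0:
                best = v
        return best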
Implementing Makestep

Our implementation of the Makestep algorithm is also relatively straightforward. Nonetheless, we would like to point out one interesting detail; see Fig. 6. The description by Asano et al. [2] says that to advance the visibility cone, we should check if "t lies in the subpolygon from q′ to q1." If so, the visibility cone should be shrunk to q′pq1, otherwise to q2pq′. However, the "subpolygon from q′ to q1" is not clearly defined for the case that the line segment q′q1 is not contained in P. To avoid this difficulty, we instead consider the line segment pq′. This line segment is always contained in P, and it divides the cutoff region P′ into two parts, a "subpolygon" between q′ and q1 and a "subpolygon" between q2 and q′. Now we can easily choose the one containing t.

[Figure 6: Asano et al. [2] state that one should check whether "t lies in the subpolygon from q′ to q1." This subpolygon, however, is not clearly defined, as the line segment q′q1 does not lie inside P. Considering pq′ instead and using q1pq′ to shrink the cutoff region gives the correct result on the right.]

Experimental Setup

We now describe how we conducted the experimental evaluation of our four implementations of geodesic shortest path algorithms.

Generating the Test Instances

Our experimental approach is as follows: given a desired number of vertices n, we generate 4-10 (pseudo)random polygons with n vertices. For this, we use a tool developed in a software project carried out under the supervision of Günter Rote at the Institute of Computer Science at Freie Universität Berlin [10]. Among others, the tool provides an implementation of the Space Partitioning algorithm for generating random simple polygons presented by Auer and Held [4]. Next, we generate the set S of desired endpoints for the geodesic shortest paths. This is done as follows: for each edge e of each generated polygon, we find the incident triangle t_e of e in the constrained Delaunay triangulation of the polygon. We add the barycenter of t_e to S. In the end, the set S will have between n/2 and n-2 points. We will compute the geodesic shortest path for each pair of distinct points in S.

Executing the Tests

For each pair of points s, t ∈ S, we find the geodesic shortest path between s and t using each of the four implemented algorithms. Since the number of pairs grows quadratically in n, we restrict the tests to 1500 random pairs for all n ≥ 200. First, we run each algorithm once in order to assess the memory consumption. This is done by using the get_traced_memory function of the built-in tracemalloc module, which returns the current and peak memory consumption; the difference tells us how much memory was used by the algorithm. Starting the memory tracing just before running the algorithm gives the correct values for the peak memory consumption. In order to obtain reproducible numbers, we also disable Python's garbage collection using the built-in gc.disable and gc.enable functions. After that, we run the algorithm between 5 and 20 times, depending on how long it takes. We measure the processor time for each run with the process_time function of the time module, which gives the time during which the process was active on the processor in user and in system mode. We then take the median of the times as a representative running time for this point pair.

Test Environment

Since we have a quadratic number of test cases for each instance, our experiments take a lot of time. Thus, the tests were distributed over multiple machines and multiple cores. We had six computing machines at our disposal, each with two quad-core CPUs. Three machines had Intel Xeon E5430 CPUs with 2.67 GHz; the other three had AMD Opteron 2376 CPUs with 2.3 GHz. All machines had 32 GB RAM, even though, as can be seen in the next section, memory was never an issue. The operating system was Debian 8, and we used version 3.5 of the Python interpreter to implement the algorithms and to execute the tests.

Experimental Results

The results of the experiments can be seen in the following plots. The plot in Figure 7 shows the median and maximum memory consumption as solid shapes and transparent crosses, respectively, for each algorithm and for each input size. More precisely, the plot shows the median and the maximum over all polygons with a given size and over all pairs of points in each such polygon. We observe that the memory consumption for Trapezoid and for Makestep is always smaller than a certain constant. At first glance, the shape of the median values might suggest logarithmic growth. However, a smaller number of vertices leads to a higher probability that s and t are directly visible to each other. In this case, many geometric functions and subroutines, each of which requires an additional constant amount of memory, are not called. A large number of point pairs with only small memory consumption naturally entails a smaller median value. We can observe a very similar effect in the memory consumption of the Lee-Preparata algorithm for small values of n. However, as n grows, we can see that the memory requirement begins to grow linearly with n.

The second plot, in Figure 8, shows the median and the maximum running time in the same way as Figure 7. Not only does Delaunay have a cubic running time, but it also seems to exhibit a quite large constant factor: it grows much faster than the other algorithms. In the lower part of Figure 8, we see the same x-domain, but with a much smaller y-domain.
Here, we observe that Trapezoid and Makestep both have a quadratic running time; Trapezoid needs about two thirds of the time required by Makestep. Finally, the linear-time behavior of Lee-Preparata can clearly be discerned. Additionally, we observed that the tests ran at approximately 85% speed on the AMD machines compared to the Intel servers. This reflects the difference between the clock speeds of 2.3 GHz and 2.67 GHz. Since the tests were distributed equally over the machines, this does not change the overall qualitative results or the comparison between the algorithms.

Conclusion

We have implemented and experimented with three different constant-workspace algorithms for geodesic shortest paths in simple polygons. Not only did we observe the cubic worst-case running time of Delaunay, but we also noticed that the constant factor is rather large. This renders the algorithm virtually useless already for polygons with a few hundred vertices, where the shortest path computation might, in the worst case, take several minutes. As predicted by the theory, Makestep and Trapezoid exhibit the same asymptotic running time and space consumption. Trapezoid has an advantage in the constant factor of the running time, while Makestep needs only about half as much memory. Since in both cases the memory requirement is bounded by a constant, Trapezoid would be our preferred algorithm.

We chose Python for the implementation mostly due to our previous programming experience, good debugging facilities, fast prototyping possibilities, and the availability of numerous libraries. In hindsight, it might have been better to choose another programming language that allows for more low-level control of the underlying hardware. Python's memory profiling and tracking abilities are limited, so that we cannot easily get a detailed view of the memory used by all the variables. Furthermore, finer control of the memory management would be useful for performing more detailed experiments.
A topic to rack our clever brains on: Premodification in Hungarian and English body-part idioms

The aim of this paper is to examine the adjectival premodification tendencies in English and Hungarian V + NP idioms that involve body-part terms. Investigating the modified variations of twenty expressions in corpora, five major types of premodifiers have been found to occur with this particular class of verbal idioms: conjunction, expressive, external, intermediate, and internal. Conjunction adnominals modify only the literal referent of the body-part term; expressives provide some additional emotional content, while external modifiers have the whole VP in their scope and function as adverbials. The intermediate type operates only at the figurative level, modifying the abstract meaning of the head noun. Within internal modification, two subclasses should be distinguished: (i) the Stathian (2007) literal-only, and (ii) the literal-and-figurative proposed by Cserép (2010). Intermediate-level as well as literal-and-figurative modifiers require the noun to be semantically autonomous; therefore, in principle, these two types can occur only with transparent idioms. It turned out, however, that this is not always true.

Introduction

One of the most contentious topics receiving the broadest attention and discussion within idiom research concerns the lexical and syntactic variability of idiomatic expressions. In addition to alterations such as passivization or lexical substitution, adding various modifiers into idiomatic strings belongs to the scope of this issue as well. Many previous studies (e.g. Ernst 1980; Nunberg et al. 1994; Langlotz 2006a, 2006b; Stathi 2007; Cserép 2010) have pointed out that premodification in V + NP idioms is a frequently employed phenomenon in actual language use. The adjectives inserted into idioms, however, seem to differ in their function and, thus, can be classified into several categories (Ernst 1980; Stathi 2007).

The corpus-based study presented in this paper focuses on a specific subclass of verbal idioms, examining the adnominal premodification patterns of expressions that contain body-part terms in their direct object positions. This investigation, which has been conducted in both the Hungarian and the English languages, is mainly concerned with the various categories of adjectival modifiers that can occur with these idioms. However, it also touches upon the semantic autonomy of the body-part constituents in relation to the premodification types. The theoretical background on which this study is based involves Ernst's (1980) three-way distinction and Stathi's (2007) five-level taxonomy of modification. Therefore, before a detailed discussion of the results, these two studies will be summarized in section 2.

2 Previous studies

2.1 Ernst (1980)

The first of the most influential contributions to this particular area of idiom research is Ernst's (1980) taxonomy which, based on the results of an investigation of naturally occurring data in fiction, journalism, television, and radio, distinguishes three main types of premodifiers: external, internal, and conjunction (Ernst 1980: 51-53).
The basic differences between these three groups can be captured along two dimensions: the semantic scope of the modifier on the one hand, and the referentiality of the modified noun on the other. The first dimension addresses the question whether a particular adjective has only its head noun in its scope (which is true for the internal and conjunction types) or modifies the whole VP expressed by the idiom (which is true for external modifiers). In contrast, the referentiality of the noun is relevant for distinguishing internal from conjunction modifiers, as this issue concerns the question whether the head noun, which is semantically under the scope of the adjective, has a referent or not. To illustrate how this works in practice, consider the following sentences (Ernst 1980: 51-53):

(1) a. Carter doesn't have an economic leg to stand on.
b. Economically, Carter doesn't have a leg to stand on.

(2) In spite of its conservatism, many people were eager to jump on the horse-drawn Reagan bandwagon.

(3) Malvolio deserves almost everything he gets, but ... there is that little stab of shame we feel at the end for having had such fun pulling his cross-gartered leg for so long.

Ernst (1980) argues that the adjective economic in (1a) belongs to the external category because, despite being attached to the noun leg syntactically, it semantically modifies the whole idiomatic string in the same way as the corresponding adverb economically does in (1b). Since external modifiers like economic play the same role as adverbials in a sentence, there is wide agreement among researchers that the main function of this class of adjectives is to contextualize the idiom and to determine how it should be interpreted in the discourse (see also Moon 1998; Burger 2015; Dobrovol'skij 2000; Sabban 2000; and Minugh 2007, 2014). For this reason, Ernst (1980: 52) uses the term "domain delimiter" to refer to them. In contrast, the modifier horse-drawn in (2) is considered to be internal, as it is attached to the head noun bandwagon not only syntactically but also semantically. Bandwagon corresponds to the idiomatic interpretation 'movement'; therefore, the phrase the horse-drawn Reagan bandwagon can idiomatically be understood as "Reagan's political movement is old-fashioned and behind the times" (Ernst 1980: 52).

Although the adjective cross-gartered in (3) has only the noun leg in its scope, it does not belong to the class of internal modifiers. Given the fact that leg in pull someone's leg has no referent in the idiomatic sense of the expression, cross-gartered can only modify its literal referent, i.e. Malvolio's real body part, which is independent of the idiomatic meaning 'make fun of, fool someone'. Since the interpretation of (3) involves both the idiomatic meaning of the expression and the literal meaning of the idiom constituent leg, Ernst (1980) classifies adjectives of this kind as conjunction modifiers.

2.2 Stathi (2007)

The Stathian (2007) term of internal modification is applied to cases where the modified noun is interpretable at the literal level. It is claimed that adjectives of this type typically occur with transparent metaphorical idioms, and their main function is to activate the source domain, i.e. the underlying image of the expression, on the basis of which the modifier-noun sequence is mapped onto the target domain as a whole.

(4) [If the ratios of the budget - [...] - are proof of restrained spending behaviour, too, then the opposition is weakened significantly (lit. 'the strongest wind is taken out of the sails').] Frankfurter Allgemeine Zeitung, 08.07.1998, p. 17.
In (4), for instance, the adjective stärkste 'strongest' evokes the image of sailing and receives its interpretation at the literal level. It is known that the strength of the wind influences the speed of a sailing ship: if the wind is strong, the ship moves fast, whereas a weak wind slows the ship down, causing it to be unable to go forward. The same concept of intensity applies to the target domain, where someone is making very fast progress in an activity.

Regarding external modification, Stathi (2007) points out that one of the main properties of such modifiers is their semantic incompatibility with the head noun. This semantic clash may be due to the fact that external modifiers usually denote abstract concepts. Although these adjectives tend to function as domain delimiters, there are also cases where they are used to express location, time, or cause. For instance, nach-saisonale 'after-season' in (5) denotes the time when Matthäus brought shame on himself.

The newly introduced intermediate level of modification is illustrated in (6):

(6) Es zeugt von einer gewissen Naivität, wenn Montazeri heute den maroden Karren der islamischen Revolution mit dem Hinweis auf deren ursprüngliche Ziele aus dem Dreck ziehen will.
[It testifies to a certain naivety if Montazeri today wants to pull the ailing cart of the Islamic Revolution out of the mud by pointing to its original goals.]

Karren 'cart' in (6) - being specified by a genitive attribution - obviously stands for the Islamic Revolution at the figurative level of interpretation, and the adjective maroden modifies this particular abstract meaning of the noun. Intermediate modification, therefore, implies that the object nominal has some semantic autonomy within the expression. It is important to note, however, that this abstract sense of the noun is available only in the idiomatic string and is not likely to occur outside it.

Conjunction modification is also a part of Stathi's (2007) typology. Interestingly, this is the only category where the modified noun receives the property of referentiality (see Table 1). That is, as also pointed out in section 2.1, nouns occurring with conjunction premodifiers play a double role: (i) they are used non-referentially as part of the idiom, but (ii) refer to real entities at the literal level of meaning. Conjunctive premodifiers are thereby seen as context-embedding tools for idioms, which are used to provide as much background information about the referent of the noun as possible in the most economic way.

According to Stathi (2007), this category of modification is typical for (but not restricted to) idioms containing body-part terms. In such cases, the premodifier characterizes the body part of the person mentioned in the discourse; however, it is also possible that the adjective-noun sequence, by means of a metonymical shift, refers to the person rather than to his/her body part. In (7), for example, the adjective unverschämtes 'brazen' does not supply additional information about the mouth literally, but metonymically describes its bearer (der Herr 'man') as brazen.

In addition to those discussed so far, adjectival modifiers also have a fifth type, which is referred to as metalinguistic modification. Adjectives of this class (such as proverbial, metaphorical, and literal) are inserted into idiomatic strings in order to serve two closely related functions: (i) "to highlight and draw attention to the use of the idiom in the text", and (ii) "to signal that the expression, which could also be understood literally, is to be interpreted idiomatically" (Stathi 2007: 102):

(8) Ständig verletzt er die Anstandsregeln, übernimmt sich, fällt selber auf die sprichwörtliche Schnauze.
[He constantly violates the etiquette, he overreacts, he fails (lit. 'falls on the proverbial snout').] (die tageszeitung, 11.08.1989, p. 16)
Stathi's (2007) five classes of adjectival modification can be arranged hierarchically, in a similar fashion to Fraser's (1970) seven-level hierarchy of idiom transformation. The order of the levels is presented in Table 1 and should be read from top to bottom: if an idiom permits a certain level of modification, it is also open to all modifications that are below it in the hierarchy. That is, if an expression allows external modification, it also allows the conjunction and metalinguistic types, but not intermediate-level or internal modification.

3 Adjectival modification in idioms with body-part terms

The study presented in this paper examines the adjectival premodification patterns of V + NP idioms containing body-part terms, in both Hungarian and English corpora. Moreover, it also concerns the semantic autonomy of the body-part constituents and its potential relationship with the various modification types. It is primarily based on the above-discussed five-class typology introduced by Stathi (2007), who found that all three of the German body-part idioms selected for her research occurred only with the conjunction and the hierarchically lower-level metalinguistic types of modification. Therefore, this investigation addresses the following three research questions:

Question 1: Can body-part constituents in idioms be modified by internal, intermediate-level, and external adjectives as well, or do they only allow the conjunction and metalinguistic types of modification?

Question 2: Can the body-part constituents be assigned semantic autonomy in any of the expressions?

Question 3: Does semantic autonomy have any influence on the modificational potential of the expressions?

3.1 Research data and methods

For both the English and the Hungarian parts of the study, 10 V + NP idioms with body-part constituents have been selected from various idiom dictionaries (Spears 2000; Siefring 2004; O. Nagy 1985; Bárdosi 2003). Both the Hungarian and the English groups of expressions contained 5 idioms that could be found only in that particular language, while the other 5 idioms occurred in both languages.6 The adjectival premodification properties of the 20 idioms have been studied in large corpora.

For the English part, the Corpus of Global Web-Based English (GloWbE) has been used, which contains 1.9 billion words of text from 20 different countries. The texts in this corpus consist of web-based materials such as personal blogs, company websites, magazines, and newspapers.

The premodified alterations of the Hungarian idioms have been checked in the Hungarian National Corpus (HNC) (Oravecz et al. 2014). It is the largest freely available Hungarian corpus, containing more than 1 billion words from texts of five genres: official, press, spoken, personal, and academic. However, since only a small number of occurrences of premodified body-part idioms were found in this corpus, some searches in Google have also been done in order to collect more data for my study. Sentences that occurred multiple times have been filtered out in both parts of the investigation.
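The duplicate-filtering step can be automated with a short script. The sketch below is only an illustration of the procedure described above, not the study's actual tooling; the function name and the one-hit-per-line input format are hypothetical:

```python
import re

def dedupe_concordance(lines):
    """Keep the first occurrence of each sentence, comparing a
    whitespace- and case-normalized form so that trivial reposts
    of the same web text are filtered out."""
    seen, kept = set(), []
    for line in lines:
        key = re.sub(r"\s+", " ", line.strip().lower())
        if key not in seen:
            seen.add(key)
            kept.append(line)
    return kept
```

Normalizing before comparison matters for web data such as GloWbE or Google hits, where the same sentence often recurs with different spacing or capitalization.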
3.2 Results

The results of this corpus-based study are presented and discussed in two subsections. Section 3.2.1 focuses on the first research question and hence deals with the types of adjectival premodifiers that have occurred with the selected idiomatic expressions in the corpora, whereas section 3.2.2 aims to answer Questions 2 and 3, accounting for semantic autonomy and its role in premodification. Before a detailed look at these particular issues, however, two important facts should be noted with respect to the results:

(i) All of the Hungarian idioms studied here have been found to occur with adjectives inserted in front of their body-part constituents, while no occurrence of such alterations could be detected for the English expression lose one's head.

(ii) There were some ambiguous cases where it was quite difficult to make a decision about the type of the modifier. For this reason, no frequency statistics will be provided in this paper.

3.2.1 Types of adjectival modification

Consistent with Stathi's (2007) assumptions, conjunction modification showed the highest rate of frequency with the selected body-part idioms in both languages. Except for one English idiom (lose one's head) and two Hungarian ones (elveszti a fejét [lit. 'lose one's head'; fig. 'lose self-control'] and otthagyja a fogát [lit. 'leave one's tooth somewhere'; fig. 'to die']), all expressions occurred with this particular type of adnominals.7

It has been pointed out in section 2.2 that conjunction modifiers can serve two functions when they modify body-part constituents; i.e., they can (i) describe the body part only, or (ii) metonymically supply information about the person whose body part is referred to by the head noun. Both possibilities were represented in my research data:

(9) a. These disease carrying European Christian filth poke their long noses everywhere.
b. A reggeli futás után, ha nem volt kedvem bemenni a céghez, akkor a hotel napfényes teraszán lógattam a csokibarnára sült lábam.
[After the morning jog, if I didn't feel like going in to the company, I was idling on the sunny terrace of the hotel (lit. 'hanged my chocolate-brown-tanned leg').]

(10) a. Alexis Sanchez had an excellent season at Udinese, Humberto Suazo took Zaragoza by storm and Matias Fernandez finally found his talented feet in Europe at Benfica after a disappointing period at Villareal.
b. Ja persze, mert én meresztem a városi seggem a 2 munkahelyemen.
[Oh, yes, because I am fucking about at my 2 workplaces (lit. 'stiffen my urban ass').]

The sentences in (9) illustrate the case when the adjective (long and csokibarnára sült 'chocolate-brown-tanned' in these particular instances) modifies the literal body-part referent of its head noun (nose and láb 'leg', respectively). As opposed to this, a metonymic shift takes place in both (10a) and (10b), and not the body parts feet and segg 'ass' but their bearers are characterized as talented and városi 'urban' by the inserted adjectives. The role of conjunctive adnominals, however, is not always as clear as in these four examples. A not so unequivocal occurrence of this class of premodification is illustrated in (11), where a three-way ambiguity can be detected with respect to the possible functions and meanings of the adjective fat.

(11) And they precisely know that poor countries have nothing they can do except to kiss their fat asses.

That is, similarly to the sentences in (9) and (10), fat can refer to the physical property of either the body part or the people (bankers) who have been mentioned previously. A third - perhaps the most likely - interpretation, however, is implied by the larger context, in which various financial institutions and their unfair policies against poor countries are discussed. In this sense, the adjective fat might metaphorically be understood as 'rich', emphasizing the huge opulence of the banks and the distance between their and the poor countries' power and financial situation. Therefore, the phrase kiss their fat asses in (11) may metaphtonymically be interpreted as 'flatter the rich financial institutions/banks'.

In addition to conjunction modification, metalinguistic adjectives like proverbial or metaphorical were found by Stathi (2007) to be the other category that can potentially be inserted into idiomatic strings with body-part terms. Interestingly, no occurrences of such premodifiers could be identified in the present study for either the English or the Hungarian expressions. Nevertheless, conjunction modification was not the only type available for the idioms in our research data: the other three categories, i.e. external, intermediate-level, and internal, were shown to be compatible with them as well.

(12) [In this new situation, one prime minister, a certain Primakov, has already died in the political sense/politically (lit. 'taken in the political sense/politically, left his tooth there').]
(13) a. When marketing was beginning to find its academic feet 100 years ago, these ideas had an immediate appeal.
b. When marketing was beginning to find its feet academically 100 years ago, these ideas had an immediate appeal.

Due to limitations of space, it is not possible to include all instances of external modification in this paper, but some of them are listed below in the form of concordance lines. The adjective in (19) can be paraphrased as the locative adverbial a központban 'in the centre'. In (18), the adjective unwelcome is somewhat ambiguous: it can function as an external modifier expressing the meaning 'it was not gladly received that the Anglo-Saxons and the Normans interfered', or as a conjunction modifier metonymically characterizing these two peoples as unwanted.

The next level in Stathi's (2007) hierarchy is intermediate modification, which is claimed to be semantically compatible only with the abstract meaning of the noun. Only the following instance has been found in our data to which this function may be assigned:

(20) Esze ágában sincsen Pest környékén harcolni, még kevésbé otthagyni becses fogát.
[He doesn't want to fight near Pest, even less to die there (lit. 'to leave his precious tooth there').]

The Hungarian adjective becses 'precious/respectable' is generally used to characterize entities and abstract concepts that have an outstandingly high value (either in the material or in the emotional sense), as in the literal phrases becses drágakő 'precious gem' or becses hagyomány 'precious tradition'. It is, however, not a regular collocation of the noun fog 'tooth'; therefore, it can function neither as a conjunction nor as an internal modifier in (20).10 Moreover, external modification can also be ruled out in this case, as becses does not modify the idiom holistically. Following from the figurative meaning 'to die' of the expression otthagyja a fogát (lit. 'leave his tooth there'; fig. 'to die'), the most likely explanation for the use of this adjective in (20) is that the noun fog is assigned a meaning that corresponds to one of our greatest values, i.e.
life. Hence, the string otthagyni a becses fogát (lit. 'to leave his precious tooth there') becomes interpretable in a similar way to the expression lose his precious life. This assumption is also supported by the fact that the phrase becses élet 'precious life', as well as co-occurrences of the noun élet 'life' with the synonyms of becses (drága 'precious', értékes 'valuable', tiszteletre méltó 'honorable'), are used relatively frequently in the Hungarian language.11

In contrast, consistent with Stathi's (2007) definition of internal modification, the adjectives in (21)-(23) modify the noun literally, and they are mapped onto the target domain with the noun as a unit. For both the idiom felnyitja valaki szemét valamire and its English equivalent open someone's eyes to something, the premodifiers leragadt 'stuck' and blind in (21) and (22) evoke the same underlying mental image. That is, based on our general background knowledge, we know that if someone's ability to see is impaired, that person is unable to perceive the world in its full details. If he gets his eyesight (back), information that can only be accessed via vision also becomes available to him. This image relates to the target domain, where someone is provided with previously unknown facts by which he is able to approach something from a new perspective. Enyves 'gluey' behaves according to the same principle in (23), where the activated image involves a gluey hand to which every single item that the person touches sticks. This is in accordance with the often-used meaning of the idiom, 'to steal something'.

(21) [This movie may make us realize that we are all diverse (lit. 'open our stuck eyes').]

(22) However, my heart is not to condemn them but to pray to God to open their blind eyes.

(23) [Many people helped the deporters, then stole the robbed Jews' wealth (lit. 'put their lazy, gluey hands on').]

Cserép (2010: 107) points out that there are cases in which both the adjective and the noun have their own figurative senses and contribute to the overall meaning separately rather than as a unit. This assumption is supported by (24) and (25), where the adjectives mocskos 'filthy' and dirty can be interpreted as 'unethical/unfair', while the noun hand (kéz in Hungarian) may refer to 'influence/possession'. Since the unethical nature of the act is conveyed even by the canonical form of the idiom, these adjectives can be seen as tools that intensify this particular feature of the event.

(24) [The Fidesz Party expanded its influence to the [Hungarian] Red Cross as well (lit. 'put its filthy hands on').]

(25) That way, we don't have to worry about our leaders being corrupt or not because there will be nothing for them to lay their dirty hands on.
The research data collected for the current study also contained some instances that did not really fit into any of Stathi's (2007) five categories. These adjectives did not give any information about the head noun or the VP at either the idiomatic or the literal level of interpretation. Consider the following sentences:

(26) I am quite sure we are capable of looking after ourselves, we don't need the Americans poking their bloody noses in.

All the adjectives in (26)-(30) can be considered semantically "empty" in the sense that they do not contribute to the propositional meanings of the sentences. Instead, their role is to provide some extra emotional content, expressing the speaker's feelings and attitude about the event denoted by the VP. A closer look at the dictionary meanings and the general uses of the adjectives supplies evidence in favour of such an interpretation. According to the Oxford Advanced Learner's Dictionary (2005: 154, 627), both bloody and fucking can be defined as "a swear word that [...] is used to emphasize a comment or an angry statement". To the Hungarian word átkozott 'cursed', the online version of A Magyar Nyelv Értelmező Szótára [The Explanatory Dictionary of the Hungarian Language] assigns the function of referring to things that cause someone an extreme extent of annoyance. By analogy with these dictionary senses, bloody, fucking, and átkozott in the above examples can all be treated as intensifiers that express the speaker's anger about the American interference in (26), the lack of his own knowledge in (27), and the addressee's act of losing his self-control in both (28) and (29).

Although the intensifier role is not listed for redvás 'carious' in the dictionary, its occurrences in ordinary phrases prove the existence of this function. That is, all of its 41 instances in the Hungarian National Corpus behaved in the same way as the above-mentioned three adjectives. Adapting McClure's (2011) term, I will use the name expressive to refer to this type of modification.

3.2.2 Semantic autonomy of the noun

In addition to investigating the function of the adjectives that occurred with the 20 idioms, this study also concerns the semantic autonomy of the body-part constituents in these expressions. As Langlotz (2006b) points out, the concept of semantic autonomy refers to the phenomenon whereby an idiom component develops a lexicalized figurative meaning that is available not only within that particular idiom but also outside it. For example, both the verb swallow and the noun phrase bitter pill in the expression swallow the bitter pill can be used with the meanings 'accept' and 'unpleasant fact' in contexts other than the idiomatic phrase.

This should be distinguished from the term "relative semantic autonomy", which applies to cases where the constituents acquire abstract senses that can be accessed only when they occur in the idiom. The components of rock the boat, for instance, have such phrase-induced figurative senses, since neither rock nor boat can be interpreted as 'spoil' and 'comfortable situation', respectively, when they are not part of the idiomatic string.
In our study, there were only two idioms whose nominal constituents had a lexicalized figurative sense: lay one's hands on something and its Hungarian equivalent ráteszi a kezét valamire. In these expressions, the noun hand (kéz in Hungarian) can figuratively be interpreted as 'influence/occupancy'. This meaning of the noun, however, is not restricted to this idiom but can be detected in some other phrases as well; for example, in the English have someone/something in hand or in the Hungarian rossz kezekbe kerül (lit. 'wrong hands-into gets'; fig. 'get into wrong hands'). In contrast, the nouns in the pair of lose one's head and elveszti a fejét, as well as palm in grease someone's palm, have phrase-induced figurative senses; i.e., they can metaphorically and metonymically be identified as 'self-control' and 'person', respectively, only in these particular expressions.

These were the only idioms in our investigation whose body-part constituents could be considered as showing some degree of semantic autonomy. In the case of the other 15 strings, no correspondences could be established between the individual idiom components and (parts of) the overall figurative meanings. These expressions, therefore, should be treated as semantically opaque or non-decomposable idioms.

With respect to the relationship between semantic autonomy and adjectival premodification, we saw earlier that only two types, intermediate-level and internal modification, are able to indicate that the nominal constituent of the idiom has its own (either lexicalized or phrase-induced) figurative sense. Intermediate-level modification has been claimed to operate only at the abstract level, which obviously requires the head noun to have an independent meaning that contributes to the overall idiomatic interpretation. I agree with Stathi (2007: 104) that her internal modification is not necessarily a signal of the semantic autonomy of the noun, since it constitutes the mapping of the modifier-noun sequence onto the target domain as a whole. Nevertheless, the literal-and-figurative type of internal modification proposed by Cserép (2010: 107), which refers to cases where both the adjective and the noun individually contribute to the meaning of the idiom, does presuppose that the nominal head is semantically autonomous.

In light of this, my prediction was that only those five idioms whose nouns could be assigned an independent abstract meaning would be compatible with the intermediate-level and literal-and-figurative types of adjectival modification. Two expressions, lay one's hands on something and ráteszi a kezét valamire, behaved according to the expectations. As could be seen in (24) and (25), both of them occurred with the adjectives dirty and mocskos 'filthy', which internally modified their head nouns in such a way that they also contributed their own figurative senses to the meaning of the string. Although the other three transparent expressions had no instances in the corpora with either the literal-and-figurative or the intermediate class of premodifiers, this does not necessarily mean that they are not open to such adnominals.

A much more interesting and surprising finding of the current study is that the Hungarian idiom otthagyja a fogát 'to die' occurred with the intermediate-level adjective becses 'precious', as has been shown in (20). Since the image evoked by its literal meaning (i.e.
leaving your tooth somewhere) seems to have nothing to do with the idiomatic interpretation 'to die', and no correspondences can be detected between the meaning and the idiom components, this expression - at least in principle - should be considered as semantically opaque and hence unable to combine with intermediate premodifiers. In this particular case, however, the speaker managed to remotivate the idiom, as a result of which the constituent fog 'tooth' received some degree of semantic autonomy. That is, possibly on the basis of the meaning 'to die', he analyzed the individual idiom parts in accordance with another death-related expression, életét veszti ('lose one's life'). Consequently, the verb otthagy 'leave something there' corresponds to veszt 'lose', while the noun fog 'tooth' is assigned the abstract meaning élet 'life', as illustrated in (34). Since the adjective becses 'precious' is a regular collocation of élet 'life', it operates in (20) only at the figurative level, modifying the ad hoc abstract meaning 'life' of the constituent fog.

(34) otthagyja a fogát (lit. 'leave one's tooth there') → LOSE LIFE

Although it is not possible to draw general conclusions on the basis of one example, it should be noted that the above-discussed case supports Stathi's (2007: 105) assumption that "idioms of the kick-the-bucket type [i.e. opaque/non-decomposable idioms] may be treated as analytical sequences by the speaker". Following from this, a very interesting question arises: do opaque idioms really show no degree of decomposability? I agree with Cserép (2010: 111) that, in the absence of sufficient psycholinguistic testing, this possibility should not be excluded yet. Some research focusing on this particular issue (also in light of other syntactic processes) may bring us closer to the answer.

4 Conclusion

To sum up, premodification seems to be a very common phenomenon in verbal idioms with body-part constituents. Although the majority of the adjectives belonged to the conjunction category, all of the higher-level types of Stathi's (2007) hierarchy were represented in our research data. In addition to the five classes of adjectival modifiers proposed by Stathi (2007), however, we also argued for two other categories. On the one hand, there were adjectives that did not contribute to the proposition at either level of interpretation but functioned as tools to express the speaker's attitude and feelings about the event. Following McClure (2011), this class of premodifiers has been labeled expressives. On the other hand, some evidence has been found in favour of Cserép's (2010) literal-and-figurative type of modification as well, which applies to cases where both the adjective and its head noun can be assigned their own figurative senses. These two categories are missing from Stathi's (2007) taxonomy. With respect to semantic autonomy, it has been found that it may be possible in certain cases to assign ad hoc abstract meanings to the constituents of opaque idioms.

Table 1: Stathi's (2007) hierarchy of adjectival modification
2018-12-15T06:44:47.346Z
2018-04-12T00:00:00.000
{ "year": 2018, "sha1": "e60e0532e0a9b385e60185d035b67d62a9b5c17c", "oa_license": "CCBY", "oa_url": "https://bop.unibe.ch/linguistik-online/article/download/4270/6383", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e60e0532e0a9b385e60185d035b67d62a9b5c17c", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
119215885
pes2o/s2orc
v3-fos-license
Time Evolution of Coronal Magnetic Helicity in the Flaring Active Region NOAA 10930

To study the three-dimensional (3D) magnetic field topology and its long-term evolution associated with the X3.4 flare of 2006 December 13, we investigate the coronal relative magnetic helicity in the flaring active region (AR) NOAA 10930 during the time period of December 8-14. The coronal helicity is calculated based on the 3D nonlinear force-free magnetic fields reconstructed by the weighted optimization method of Wiegelmann, and is compared with the amount of helicity injected through the photospheric surface of the AR. The helicity injection is determined from the magnetic helicity flux density proposed by Pariat et al. using Solar and Heliospheric Observatory/Michelson Doppler Imager magnetograms. The major findings of this study are the following. (1) The time profile of the coronal helicity shows a good correlation with that of the helicity accumulation by injection through the surface. (2) The coronal helicity of the AR is estimated to be -4.3×10^43 Mx^2 just before the X3.4 flare. (3) This flare is preceded not only by a large increase of negative helicity, -3.2×10^43 Mx^2, in the corona over ~1.5 days but also by noticeable injections of positive helicity through the photospheric surface around the flaring magnetic polarity inversion line during the time period of the channel structure development. We conjecture that the occurrence of the X3.4 flare is involved with the positive helicity injection into an existing system of negative helicity.

INTRODUCTION

The photospheric magnetic fields in the active region (AR) NOAA 10930 have been observed comprehensively by the Michelson Doppler Imager (MDI; Scherrer et al. 1995) on board the Solar and Heliospheric Observatory (SOHO) spacecraft and the Solar Optical Telescope (SOT; Tsuneta et al. 2008) on board the Hinode satellite. In recent years, following the observations, considerable attention has been paid to the investigation of the structure of magnetic field lines and its evolution in AR 10930 related to the occurrence of the X3.4 flare on 2006 December 13. There have been studies of sunspot rotation associated with the flare, such as the remarkable counterclockwise rotation of the positive-polarity sunspot (Yan et al. 2009), the interaction between the fast-rotating positive sunspot and ephemeral regions near the sunspot (Zhang et al. 2007), and nonpotential magnetic stress (Su et al. 2008). AR 10930 was also investigated for changes of the magnetic field lines at the flaring site before and after the flare, e.g., in the azimuth angle (Kubo et al. 2007). Moreover, time variations of the magnetic helicity injection rate (Magara & Tsuneta 2008) and intermittency (Abramenko et al. 2008) were examined over a time span of several days around the time of the flare.

To overcome the limitations of using photospheric magnetic field data alone, some studies of the X3.4 flare have been carried out with the three-dimensional (3D) coronal magnetic fields derived from nonlinear force-free (NLFF) extrapolation methods. Jing et al. (2008) reported that the magnetic shear around the flaring magnetic polarity inversion line decreased after the flare at coronal heights in the range of 8-70 Mm. By calculating the 3D electric current in AR 10930, Schrijver et al. (2008) showed that there are long fibrils of strong current slightly above the photosphere that almost completely disappear after the flare. Later on, Wang et al.
(2008) found that the strong current-carrying fibrils are associated with the magnetic channel structure of AR 10930 and that the flare occurred during the period in which the channels rapidly developed. In addition, the free energy of the NLFF fields was studied to understand the energy buildup, storage, and release processes in the corona during the flare. A free energy release of 2.4×10^31 erg during the flare was measured by Guo et al. (2008), and Jing et al. (2010) found that a significant amount of free energy continuously built up in the 2 days prior to the flare.

Encouraged by the interesting results of previous studies with NLFF fields, in this study we investigate the variation of the coronal relative magnetic helicity in AR 10930 over a span of several days to determine its relationship with the flare. Magnetic helicity is a measure of how much the magnetic field lines in a flux tube are twisted around the tube axis, how much the tube axis is kinked, and how much the flux tubes are interlinked with each other in a magnetic field system. It has been studied in order to understand the energy buildup process and the trigger mechanism for flare occurrence. We anticipate that the coronal magnetic helicity study will bring a better understanding of the long-term evolution of the large-scale magnetic field geometry in the corona related to the X3.4 flare, despite a critical assessment of NLFF extrapolation (e.g., De Rosa et al. 2009) holding that existing NLFF extrapolation models are not able to accurately reproduce coronal fields and physical quantities of interest in the AR corona, owing to problematic issues such as the non-force-free nature of the photospheric magnetic field, the limited field of view (FOV), and the noise level of vector magnetograms. The coronal helicity will also be compared with the helicity injection through the photospheric surface to check their relationship and consistency.

CALCULATION OF MAGNETIC HELICITY

The relative magnetic helicity, H_r, derived by Finn & Antonsen (1985) is used to calculate a topologically meaningful and gauge-invariant measure of helicity inside a volume, V:

H_r = \int_V (\mathbf{A} + \mathbf{A}_p) \cdot (\mathbf{B} - \mathbf{P}) \, dV,  (1)

where P is the potential field having the same normal component as the magnetic field, B, on the boundary surface enclosing V, and A and A_p are the vector potentials for B and P, respectively. H_r represents the amount of helicity of B measured relative to that of the corresponding potential field P. In our calculation of H_r in a coronal volume of AR 10930, we adopt the code of Fan (2009) for the determination of the specific vector potentials, A and A_p, proposed by DeVore (2000), treating the photosphere as an infinite plane (z = 0) in a Cartesian coordinate system.

The unsigned magnetic flux, Φ, through the photospheric surface, S, of AR 10930 is defined by

\Phi = \int_S |B_z| \, dS,  (2)

where B_z is the z-component of the magnetic field and the integration is over the entire photospheric area (z = 0) of the computational domain of the 3D NLFF field data. Note that outside of the computational domain of AR 10930 the magnetic field is assumed to be negligible, even though, on average, ~30% of Φ passed through the domain of the actual 3D NLFF fields. Our helicity calculation, therefore, gives an approximate value of H_r in the coronal volume above the photospheric surface of AR 10930. Throughout this paper, by magnetic helicity we mean the relative magnetic helicity.
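As a rough illustration of how equation (1) is evaluated in practice, the volume integral reduces to a sum over grid cells once A, A_p, B, and P are available on the extrapolation grid. The Python sketch below assumes uniform grid spacing and hypothetical array names; the computation in this work itself relies on the code of Fan (2009):

```python
import numpy as np

def relative_helicity(A, Ap, B, P, dx, dy, dz):
    """Discrete estimate of equation (1),
    H_r = int_V (A + A_p) . (B - P) dV,
    for fields sampled on a uniform Cartesian grid.

    A, Ap, B, P : arrays of shape (3, nx, ny, nz); field components in G
    and vector potentials in G cm, with dx, dy, dz in cm, so that the
    result comes out in Mx^2.
    """
    integrand = np.sum((A + Ap) * (B - P), axis=0)  # pointwise dot product
    return float(integrand.sum()) * dx * dy * dz    # multiply by cell volume
```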
We estimate the rate of magnetic helicity injection, Ḣ_r, into the coronal volume through S of AR 10930 using the method developed by Chae (2007):

\dot{H}_r = \int_S G_\theta \, dS,  (3)

where G_θ is the helicity flux density proposed by Pariat et al. (2005), which can be obtained from the normal component of the magnetic field and the apparent horizontal velocity, u, of the photospheric field-line footpoints. We determine u by applying the normal component of the magnetic induction equation and the differential affine velocity estimator (DAVE) method developed by Schuck (2006). Please refer to the procedure described in Chae (2007) for the details of the Ḣ_r calculation. After Ḣ_r is determined as a function of time, we integrate it with respect to time to determine the amount of helicity accumulation, ΔH_r:

\Delta H_r = \int_{t_0}^{t} \dot{H}_r \, dt',  (4)

where t_0 and t are the start and end times of the data set under investigation, respectively.

DATA PREPARATION AND PROCESSING

For the calculation of H_r in a 3D coronal volume, the three components of the magnetic field in that volume need to be obtained. We therefore follow the same method described in Jing et al. (2010) in deriving the coronal NLFF fields in AR 10930 from the Stokes profiles taken by the Hinode/SOT Spectro-Polarimeter (SP). We first derived the high-resolution vector magnetic fields in the photosphere from the Stokes profiles using an Unno-Rachkovsky inversion based on the Milne-Eddington atmosphere (e.g., Lites & Skumanich 1990; Klimchuk et al. 1992). In addition, the removal of the 180° ambiguity in the transverse magnetic fields was accomplished using the minimum energy algorithm (Metcalf et al. 2006), and the photospheric vector magnetograms were projected onto the tangent plane at the heliographic location of the center of the magnetograms.

To reduce the inaccuracy of the NLFF field extrapolation, it is important to derive suitable boundary fields for the NLFF field modeling from the photospheric magnetograms. Therefore, using a preprocessing method developed by Wiegelmann et al. (2006), we minimized the effect of the Lorentz force acting in the photosphere and prepared the NLFF boundary fields to match the condition of the low plasma-β, force-free chromosphere. We then used the weighted optimization method (Wiegelmann 2004) to extrapolate the NLFF coronal fields from the photospheric magnetograms. This method has been recognized as an outstanding algorithm in several model tests of NLFF fields (e.g., Schrijver et al. 2006; Metcalf et al. 2008).

AR 10930 appeared on the east limb of the solar disk on 2006 December 6 and was observed successfully and continuously during its entire disk passage by Hinode/SOT and SOHO/MDI. In this study, we determine H_r in AR 10930 during the time span of 2006 December 8, 21:20 UT through 2006 December 14, 05:00 UT using 27 Hinode/SOT-SP vector magnetograms as the boundary fields for the NLFF field extrapolation. The computational dimensions of the 3D NLFF field data were 240×132×180 pixel^3, corresponding to 288×158×216 Mm^3. To check the influence of preprocessing on the magnetogram data, we calculated L_1 and L_2 of the original data and of the preprocessed data, which Wiegelmann et al. (2006) proposed to investigate in order to determine how well a photospheric magnetic field agrees with Aly's criteria: L_1 and L_2 are related to the force-balance condition and the torque-free condition, respectively. Refer to Wiegelmann et al. (2006) for the details of the preprocessing method and the definitions of L_1 and L_2.
As shown in Table 1, the preprocessed data satisfy the Aly criteria much better than the original data. It has been reported that this preprocessing procedure significantly improves the boundary fields toward a force-free condition (e.g., Wiegelmann et al. 2006, 2008). Recently, Jing et al. (2010) also demonstrated the capability of the preprocessing method by comparing the unpreprocessed/preprocessed photospheric line-of-sight (LOS) magnetogram of AR 10930 with the co-aligned chromospheric LOS magnetogram. To evaluate the performance of the NLFF extrapolation, we also calculated the current-weighted sine metric (CWsin) and the |f_i| metric proposed by Wheatland et al. (2000) for each extrapolated field. CWsin and |f_i| measure the degree of convergence to a force-free and a divergence-free field, respectively. For the 27 NLFF fields under investigation, the average CWsin was estimated as ~0.39 and the average |f_i| as ~0.0014, indicating that residual forces and divergences exist in the NLFF fields.

In addition, the error estimation of H_r is carried out with a Monte Carlo method, taking into account only the sensitivity of the SP measurement, as follows (e.g., see Guo et al. 2008): first, we add three sets of artificial noise to B_x, B_y, and B_z of the original SP vector magnetogram at 20:30 UT on 2006 December 12. Each noise set consists of pseudorandom numbers in a normal distribution with a standard deviation of 5 G for B_z and 50 G for B_x and B_y. Note that these values of 5 G and 50 G are estimated as the maximum values of the SP sensitivity in the LOS direction and the transverse direction, respectively. Then, we extrapolate the 3D NLFF fields from the noise-imposed vector magnetogram following the procedure described in the above paragraph, calculate H_r, and repeat the same process 10 times. Finally, we take the standard deviation of the 10 values of H_r as the uncertainty of the H_r calculation. The uncertainty was found to be 8×10^41 Mx^2, corresponding to 2%-4% of |H_r| during the measurement period.

In order to calculate Ḣ_r, we used a data set consisting of 63 full-disk MDI magnetograms at the 96 minute cadence in the time span of 2006 December 8, 20:51 UT through 2006 December 13, 16:03 UT. Note that the MDI magnetograms in the data set show Zeeman saturation in the central part of the negative sunspot umbral region, which means that our calculation of Ḣ_r might be underestimated. The window function of DAVE used in the Ḣ_r calculation is the top-hat profile, which puts the same weight of unity on every pixel inside the window (e.g., Schuck 2006), and the window size is selected to be 10 arcsec. We also applied DAVE to pairs of MDI images, with the spatial derivatives calculated from the average of the two images (e.g., Welsch et al. 2007; Chae 2007; Chae & Sakurai 2008). The uncertainty of Ḣ_r corresponding to the measurement uncertainty (~20 G) of MDI magnetograms was also estimated using the same Monte Carlo method used in the error estimation of H_r. It is found that the uncertainty of Ḣ_r is 8.4×10^39 Mx^2 hr^-1, which is equivalent to ~3% of the average Ḣ_r during the measurement time. The uncertainty therefore does not significantly affect our study of Ḣ_r and ΔH_r.
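The noise-seeding step of this Monte Carlo estimate can be sketched as follows. This is only a schematic of the procedure just described, not the actual pipeline; extrapolate_and_measure_hr is a hypothetical stand-in for the Wiegelmann extrapolation followed by equation (1):

```python
import numpy as np

rng = np.random.default_rng()

def perturb(bx, by, bz):
    """Add Gaussian noise at the quoted SP sensitivity limits:
    50 G on the transverse components, 5 G on the LOS component."""
    return (bx + rng.normal(0.0, 50.0, bx.shape),
            by + rng.normal(0.0, 50.0, by.shape),
            bz + rng.normal(0.0, 5.0, bz.shape))

# Repeat the full extrapolation + helicity measurement 10 times and
# take the scatter of the results as the uncertainty:
# samples = [extrapolate_and_measure_hr(*perturb(bx, by, bz))
#            for _ in range(10)]
# sigma_hr = np.std(samples)   # quoted above as ~8e41 Mx^2
```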
RESULT AND DISCUSSION

Our main objective in this study is to examine how well H_r and ΔH_r are correlated with each other, and whether our H_r calculation using the NLFF coronal fields is verified through a comparison of the H_r derived from the Hinode/SOT-SP data with the ΔH_r derived from the SOHO/MDI data. In Figure 1, therefore, we plot the temporal variations of H_r (black solid line) and ΔH_r (gray solid line). The estimated error in H_r is marked with error bars. The initial value of ΔH_r is set the same as that of H_r. |H_r|, the absolute value of H_r, is also shown by a dotted line for convenience. We also investigate the day-to-day variations of H_r in AR 10930 for a better understanding of the pre-flare conditions and a trigger mechanism of the X3.4 flare. For this, H_r (black solid line) is plotted with the total unsigned magnetic flux (dashed line) and the GOES soft X-ray light curve (dotted line) in Figure 2.

Note that Lim et al. (2007) have done a similar study, in which they compared the coronal helicity in AR 10696 with the helicity injection through the photosphere. In their study, the coronal helicity was estimated as a probable range using a linear force-free (LFF) assumption with a force-free constant that gives the best fit with each of the individual coronal loops, even though the real coronal field is not LFF. The photospheric helicity injection was calculated by inferring the velocity of the apparent horizontal motion of the field lines determined by the technique of local correlation tracking (LCT), as originally proposed by Chae (2001), instead of using u determined by the DAVE technique. They found that the temporal variation of the coronal helicity is similar to that of the photospheric helicity injection, with a discrepancy of ~15%.

During the first day of the helicity measurement, H_r showed little change from its initial value, -2.8×10^43 Mx^2, though there were small fluctuations in the range of 2%-15%. Then, |H_r| decreased by 28%, from 2.9×10^43 Mx^2 to 2.1×10^43 Mx^2, over 14 hr from December 10. Note that the decrease of |H_r| could be due to (1) a pre-existing negative helicity being expelled from the volume of the NLFF field extrapolation, e.g., via coronal mass ejections (CMEs), and/or (2) a new magnetic flux with positive helicity being injected from outside into the volume, or a positive helicity being produced by the shearing motions of pre-existing field lines. We found that there are three time periods (I, II_b, and III) over which |H_r| decreases consistently for more than nine hours, and they are shown as shaded areas in Figure 1. Between periods I and III, there was a consistently large increase of negative helicity, -3.2×10^43 Mx^2, in the corona over ~1.5 days (marked as period II_a in Figure 1). After period III, negative helicity kept on increasing for ~1 day with the flux increase. The detailed information on the characteristic periods is given in Table 2.

We compare the overall pattern of the temporal evolution of the H_r calculated using the NLFF fields with that of the ΔH_r measured using the MDI magnetograms. In general, the time profile of H_r matches well that of ΔH_r. Moreover, in both cases, the absolute amount of negative helicity accumulation during the entire measurement period of December 9-14 was similar (2.1×10^43 Mx^2 and 1.7×10^43 Mx^2, respectively). This gives us confidence that the NLFF extrapolation and the H_r calculation are reasonably well established.
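For reference, the ΔH_r curve in Figure 1 is simply the running time integral of the measured injection rate, equation (4). A minimal sketch of that accumulation step, with hypothetical variable names and the initial value tied to H_r as in Figure 1:

```python
import numpy as np

def accumulate_helicity(t_hr, hdot, h0=0.0):
    """Running trapezoidal integral of the injection rate (equation (4)).

    t_hr : sample times in hours (here, the 96 minute MDI cadence)
    hdot : injection rate at each sample, in Mx^2 per hour
    h0   : starting value (set to the initial H_r, as in Figure 1)
    """
    steps = 0.5 * (hdot[1:] + hdot[:-1]) * np.diff(t_hr)
    return h0 + np.concatenate(([0.0], np.cumsum(steps)))
```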
However, some detailed patterns of the helicity evolution show differences between H_r and ΔH_r. For example, the temporal variation of H_r shows a rapid and large increase of negative helicity, with increasing flux, at the time of the fast rotation of the southern positive sunspot measured by Min & Chae (2009) and Yan et al. (2009). In addition, |H_r| displays decreasing phases, such as periods I, II_b, and III, while |ΔH_r| increases monotonically during the entire period. Note that H_r need not be exactly the same as ΔH_r: e.g., the ejection of magnetic helicity via the launch of a CME would not be detected in ΔH_r, while it would be reflected in H_r.

What could cause the three periods of remarkable |H_r| decrease? To investigate this, we first checked a possibility associated with negative helicity ejection via CMEs originating from AR 10930. The SOHO/Large Angle and Spectrometric Coronagraph (LASCO; Yashiro et al. 2004) CME catalog was used to search for all the CMEs that occurred during the three periods. We then identified only the CMEs inferred to be produced in AR 10930 with the following criterion: the position angle of a CME should be within ±5° of that of AR 10930 on the solar disk at the first appearance time of the CME in the LASCO/C2 FOV. Note that there were no other ARs except AR 10930 on the front side of the solar disk during the periods. We found two CMEs: one in period II_b and the other in period III. Their initial appearances in the LASCO/C2 FOV were at 09:36 UT on December 11 and at 20:28 UT on December 12, respectively; these are marked with vertical dashed lines in Figure 2. Although the uncertainty of our H_r calculation is estimated to be 8×10^41 Mx^2, we found that the decrease in |H_r| is 2.4×10^42 Mx^2 between 08:31 UT and 11:48 UT on December 11 and 1.9×10^42 Mx^2 between 18:12 UT and 21:01 UT on December 12, covering the times of the occurrence of the first and second CMEs, respectively. These values agree with the helicity content of a typical CME, 2×10^42 Mx^2, estimated by DeVore (2000). Our finding of the CME-related change of |H_r| is similar to the earlier finding by Lim et al. (2007), who reported a helicity decrease of ~4.1×10^42 Mx^2 after the occurrence of two CMEs.

We also investigated the feasibility of positive helicity injection through the photospheric surface of AR 10930 into the corona. Note that Zhang et al. (2008) calculated Ḣ_r in AR 10930 using the LCT method (Chae 2001). They found that the sign of Ḣ_r changes from negative to positive and then from positive to negative during the period (01:30 UT-04:30 UT) of the flare, while Ḣ_r is predominantly negative during 2006 December 8-14. Integrating the positive (negative) G_θ over the photospheric surface of AR 10930, we determined Ḣ_r^+ (Ḣ_r^-), i.e., the injection rate of positive (negative) helicity. Figure 3 shows the time variations of Ḣ_r^+ (diamonds), Ḣ_r^- (crosses), and Ḣ_r (solid line) during the ΔH_r measurement period. The characteristic periods are marked in the same way as in Figure 1, and the peak time of the X3.4 flare is shown as a vertical dotted line. We found that a remarkable accumulation of positive helicity into the corona takes place over the entire period, with an average injection rate of 2.8×10^41 Mx^2 hr^-1, even though Ḣ_r^- is dominant most of the time, with an average injection rate of -4.4×10^41 Mx^2 hr^-1.
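The CME selection criterion described above amounts to an angular cut on the catalog entries. A small sketch of that cut (the catalog field name is hypothetical):

```python
def cmes_from_ar(cmes, ar_pa_deg, tol_deg=5.0):
    """Keep catalog CMEs whose central position angle falls within
    +/- tol_deg of the AR's position angle at the CME's first
    appearance in the LASCO/C2 field of view."""
    def sep(a, b):
        # angular separation on a 0-360 degree circle
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return [c for c in cmes
            if sep(c["position_angle"], ar_pa_deg) <= tol_deg]
```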
Especially during the span of December 11, 12:51 UT (middle of period II_b) through December 12, 04:48 UT (start of period III), the average of Ḣ_r^+ showed a large value of 4.5×10^41 Mx^2 hr^-1, and Ḣ_r^+ was sometimes larger than Ḣ_r^-. Additionally, we examined the G_θ maps at several times (marked by vertical solid lines in Figure 3) to find out how the positive G_θ is distributed and developed in the AR. Figure 4 shows the maps of the normal component of the magnetic field, B_n (left panels), and G_θ (right panels). Assuming that the magnetic field on the solar photosphere is normal to the solar surface, B_n was approximately determined from the MDI LOS magnetograms. We found that there are noticeable injections of positive helicity around the flaring magnetic polarity inversion line (see the three G_θ maps in Figure 4, at 2006-12-11 12:51 UT, 2006-12-12 04:48 UT, and on 2006-12-13). In addition, the examination of the other G_θ maps during the period of December 11, 12:00 UT through December 13, 16:00 UT revealed that positive helicity is consistently injected through the polarity inversion line. The location and time span of the positive helicity injection are similar to those of the magnetic channel structure development observed by Wang et al. (2008). Note that a simulation by Régnier (2009) shows that newly injected current from the photosphere can sensitively affect the coronal magnetic helicity in existing force-free bipolar fields: i.e., H_r is increased by 2 orders of magnitude when the current strength is increased by a factor of 2. We therefore speculate that periods II_b and III are associated with the helicity ejection via the two CMEs and/or the supply of positive helicity from the photosphere into the corona.

Related to the occurrence of the X3.4 flare, we found two interesting patterns in the long-term evolution of H_r. First, there was a significant increase of negative H_r during period II_a of ~1.5 days, associated with the flare energy buildup. This pattern of increasing helicity prior to the flare is in agreement with that shown in the studies of Park et al. (2008, 2010). After the middle of period II_a, a large amount of helicity of the opposite (positive) sign started to be injected through the photospheric surface around the flaring magnetic polarity inversion line during the time span (including periods II_b and III) of the channel structure development observed by Wang et al. (2008). The X3.4 flare was preceded by these two characteristic patterns of H_r. The two patterns have already been reported in previous studies of major flares in relation to helicity injection through the photospheric surfaces of ARs (Park et al. 2008, 2010; Chandra et al. 2010). Note that our finding of the long-term injection of positive helicity ~2.5 days before the flare is different from the abrupt injection of positive helicity around the start of the flare found by Zhang et al. (2008). We conjecture that the occurrence of the X3.4 flare involved the emergence of a positive helicity system into an existing negative helicity system, which may have caused reconnection between the two helicity systems. This idea is supported not only by a numerical simulation (Kusano et al. 2003b) in which magnetic reconnection quickly grows at the site of annihilation of helicity of different signs, but also by observational reports of injection of helicity of the opposite sign through the photospheric surfaces of ARs before flares (Kusano et al. 2003a; Yokoyama et al. 2003; Wang et al. 2004).
In conclusion, after analyzing H_r in the coronal volume of AR 10930 using the NLFF fields, we found that there are two characteristic phases in the day-to-day variation of helicity related to the X3.4 flare: significant helicity accumulation (period II_a) followed by injection of helicity of the opposite sign (periods II_b and III). H_r and ΔH_r show a roughly similar variation during the entire measurement period. Further studies are needed to check whether the two characteristic patterns appear in other major flaring ARs and to investigate the short-term variation of helicity in a flaring region in relation to a triggering mechanism. The Solar Dynamics Observatory (SDO) was recently launched, and we expect to study the 3D coronal helicity using full-disk photospheric vector magnetograms with high spatial and temporal resolution taken by the Helioseismic and Magnetic Imager (HMI) on board SDO.

We are grateful to the referee for helpful and constructive comments. The authors thank Dr. Yuhong Fan for sharing the code to determine the 3D vector potential, and Dr. Thomas Wiegelmann for providing the weighted optimization and preprocessing codes for NLFF field extrapolation. SOHO is a project of international cooperation between ESA and NASA. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, and NASA and STFC (UK) as international partners. It is operated by these agencies in cooperation with ESA and NSC (Norway). This work was supported by the National Research Foundation of Korea (KRF-2008-220-C00022). J.J. was supported by NSF under grants ATM 09-36665 and ATM 07-16950. C.T. was supported by DLR grant 50 OC 0501 and the Office of Sponsored Programs, NJIT. S.-H.P. and H.W. were supported by NSF grant AGS-0745744 and NASA grant NNX08BA22G.
2010-08-18T13:41:23.000Z
2010-08-09T00:00:00.000
{ "year": 2010, "sha1": "c7a287e24c0d2650d1327d1422453903a91d6039", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/0004-637X/720/2/1102/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "c7a287e24c0d2650d1327d1422453903a91d6039", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
144238799
pes2o/s2orc
v3-fos-license
From post-BRICS' decade to post-2015: insights from global governance and comparative regionalisms

2015 is a symbolic turning point for "development" as eight UN MDGs are superseded by even more SDGs with a focus on global partnerships. This paper exploits the "post-2015" mantra to ask whether the rise of the BRICS and myriad non-state actors/coalitions, let alone the difficulties of the PIIGS in the Eurozone, suggests that "development" has become passé: Does the rise of "emerging" economies/powers in the current decade mean that we need another paradigm, such as "global governance" along with "new regionalisms"? This overview seeks to identify some of the parameters of international relations/organization/law/political economy for the embryonic Palgrave Communications network for its first 5 years, informed by our new US PhD in Global Governance & Human Security at UMass Boston.

Introduction

…the day-to-day transnational dealings among individuals, firms & non-governmental organizations (NGOs) dwarf intergovernmental relations by many orders of magnitude. In building on earlier … work … this program … responds to long-standing calls for greater attention to nonstate entities in IR. (Findley et al., 2013: 660)

The FIFA World Cup was not the only historic global event in Brazil in mid-2014: the country also hosted the BRICS' sixth summit in Fortaleza, that is, the start of the BRICS' second half-decade, which led to the establishment of the BRICS Bank. This, and other changes in and around the BRICS, will have some impact on who wins at the next Olympics in mid-2016 in Rio, then Tokyo (2020), and the next World Cup in Qatar (2022). This essay looks at the BRICS' decade, the first of this century, and debates about what follows the Millennium Development Goals (MDGs) after 2015. It seeks to situate the rise of the "global South" and its implications for global development/security through the analytic frameworks of global governance and comparative regionalisms. As the 2013 Human Development Report from the UNDP asserts, with implications for both policy and theory:

The South has risen at an unprecedented speed & scale … By 2050, Brazil, China & India combined are projected to account for 40% of world output in purchasing power parity terms … The changing global political economy is creating unprecedented challenges and opportunities for continued progress in human development. (UNDP, 2013: 1, 2)

But the post-2015 era is likely to be different from that anticipated by the UN post-MDGs (http://www.post2015hlp.org): as the global South comes to overshadow the hitherto hegemonic North (Abdenaur and Fonseca, 2013) (http://www.post2015.org, http://www.beyond2015.org), so its own regionalisms may come to balance, even challenge, the EU as "model" (Fanta et al., 2013; Vivares, 2014). And the end of 2015 brings the twenty-first UNFCCC COP in Paris, the last chance for a post-Kyoto environment agreement? Such novel regional directions are reinforced by burgeoning MNCs, including state-owned enterprises (SOEs), especially national oil companies (NOCs), based in the South (Nolke, 2014). What are the implications for the practice and analysis of comparative development/international or comparative political economy/regionalisms? Together such compatible, exponential changes point towards a new international political economy (IPE) in both theory and policy: reorder or disorder?
This paper is informed by animating a new PhD at UMass Boston on Global Governance & Human Security and by continuing, after three decades, to edit the IPE Series for Palgrave Macmillan with its focus on the global South. I juxtapose global governance and comparative regionalisms as approaches that have been advanced in both theory and policy by the global crisis and related restructuring. I build on the increasingly familiar and compatible concepts of the "transnational" (Hale and Held, 2011) and "global governance" (Harman and Williams, 2013; Weiss and Wilkinson, 2014a, b), as together they advance analysis of new regionalisms, especially around natural resources (NRs) in Africa as elsewhere, symbolized by the Kimberley Process (KP) and the Extractive Industries Transparency Initiative (EITI). As Weiss and Wilkinson (2014b) rhetorically suggest, the generic, inclusive "global governance" approach may yet "save" established genres like international relations/organization/law (IR/IO/IL), even IPE, as they no longer treat "real"-world issues?

Arguably, then, the first decade of the twenty-first century was that of the BRICs/BRICS, especially China and India, leading Pieterse (2011: 22) to assert that the established N-S axis is being superseded by an E-S one:

… the rise of emerging societies is a major turn in globalization … North-South relations have been dominant for 200 years and now an East-South turn is taking shape. The 2008 economic crisis is part of a global rebalancing process.

Such reordering if not disordering, given the intrusion of EMs and FMs, will impact the practice and analysis of IPE post-2015 (Overbeek and van Apeldoorn, 2011), especially in the democratic capitalist BRICS economies of Brazil and India (Mahrenbach, 2013).

The post-2015 global political economy

To situate post-2015, this paper juxtaposes a set of parallel/overlapping perspectives to consider whether the several "worlds" - from the North Atlantic/Pacific on to the Eurozone PIIGS versus the "second world" (Khanna, 2009) of BRICS/CIVETS/MINT/MIST/VISTA - have grown together or apart as global crises and reordering have proceeded (see myriad heterogeneous analyses such as Cooper and Antkiewicz, 2008; Cooper and Flemes, 2013; Cooper and Subacchi, 2010; Economist, 2012; Gray and Murphy, 2013; Lee et al., 2012; Pieterse, 2011; US National Intelligence Council (USNIC), 2012; WEF, 2012; World Bank, 2012; O'Neill, 2011). Such acronyms reflect methodology/hierarchy: thus, CIVETS includes more EMs, if not EPs, than the rest, even more than VISTA, as it starts with Colombia. In turn, "contemporary" "global" issues - wide varieties of ecology, gender, governance, health, norms, technology and so on (see "Emerging 'global' issues") - have confronted established analytic assumptions/traditions and actors/policies (Weiss and Wilkinson, 2014a), leading to myriad "transnational" coalitions and heterogeneous initiatives/processes/regulation schemes, as overviewed in Bernstein and Cashore (2008), Dingwerth (2008) and Hale and Held (2011) (see "Varieties of transnational governance"); these impact prospects for sustainable regional development in Africa as elsewhere. And Richey and Ponte (2014) suggest that "development" is increasingly a matter of "alliances" or networks, including "new" actors. Such extra- or semi-state hybrid "global governance" increasingly challenges and supersedes exclusively interstate international organization/law (Harman and Williams, 2013; Weiss and Wilkinson, 2014a, b).
Each set of EMs and now FMs embodies slightly different sets of assumptions/directions/implications; PWC expanded the "Next-11" of both EMs and FMs of Goldman Sachs (that is, 15 without RSA) to 17 significant EMs/FMs by 2050 (Hawkesworth and Cookson, 2008). Symptomatically, the initial iconic acronym was proposed at the start of the new century by a leading economist working for a global financial corporation, O'Neill (2011) of Goldman Sachs (http://www2.goldmansachs.com), who marked and reinforced his initial coup with a celebration of its first decade. As he notes, global restructuring has been accelerated by the simultaneous decline not only of the United States and the United Kingdom but also of the southern members of the eurozone. Many now predict China to become the largest economy by 2025 and India to catch up with the United States by 2050 (Hawkesworth and Cookson, 2008: 3). PWC (2013: 6, 8) suggests that:

The E7 countries could overtake the G7 as early as 2017 in PPP terms … the E7 countries could potentially be around 75% larger than the G7 countries by the end of 2050 in PPP terms … By 2050, China, the US and India are likely to be the three largest economies in the world …

But Brown (2013: 168-170) notes that there are competing prophecies about the cross-over date when China trumps the United States, starting with the IMF advancing it to 2016. As the G8 morphed into the G20 (Cooper and Antkiewicz, 2008; Cooper and Subacchi, 2010), a variety of analysts attempted to map the emerging world, including: (a) Khanna's (2009) "second world"; (b) middle powers, traditional and emerging (Jordaan, 2003), such as the old Anglophone Commonwealth with, inter alia, Indonesia, Japan and South Korea. (c) At the end of 2012, from both sides of the pond, the USNIC produced "Global Trends 2030: Alternative Worlds" (GT 2030) (http://www.gt2030.com), which identified four "megatrends", like "diffusion of power" and the "food, water, energy nexus"; a half-dozen "game-changers"; and four "potential worlds" from more to less conflict/inequality, including the possibilities of either China-US collaboration or of a "nonstate world"; and KPMG (2014: 3) produced its own "Future State 2030" with parallel megatrends including "economic power shift" and "resources stress", especially around "essential NRs": "water, food, arable land & energy". (d) Chatham House in London reported on "Resources Futures" (Lee et al., 2012: 2) with a focus on "the new political economy of resources" and the possibility of natural resource governance (NRG) by the "Resource 30" (R30) of major producers/consumers and importers/exporters (http://www.chathamhouse.org/resourcesfutures): the G20 including the BRICs, but not BRICS (that is, no RSA), plus Chile, Iran, Malaysia, the Netherlands, Nigeria, Norway, Singapore, Switzerland, Thailand, UAE and Venezuela.

And in the case of the most marginal continent, Africa, its possible renaissance was anticipated at the turn of the decade by the Boston Consulting Group (BCG), the Center for Global Development, McKinsey et al. (Shaw, 2012a), with the Economist admitting in January 2011 that it might have to treat Africa as the "hopeful" rather than "hopeless" continent; as the continent with the most FMs, Africa has been the most resistant to economic contraction from the North.
Meanwhile, the supply of development resources, including official development assistance (ODA), is also moving away from the old North towards the BRICS (Chin and Quadir, 2012) and other new official EM/EP donors like South Korea and Turkey (Sumner and Kirk, 2014; Sumner and Mallett, 2014), plus private foundations like Gates, faith-based organizations (FBOs), remittances from diasporas, heterogeneous Sovereign Wealth Funds (SWFs) and myriad ETFs, and novel sources of finance such as taxes on carbon, climate change, emissions, financial transactions and so on (Besada and Kindornay, 2013; Richey and Ponte, 2014).

Emerging economies/markets/powers/states/societies?

The salience of EPs/EMs (Mahrenbach, 2013), especially the BRICS and other political economies in the second world, has led to debates about the similarities and differences among emerging economies/markets/middle classes/multinational companies/states/societies and so on, informed by different disciplinary canons; for example, by contrast to Goldstein on EMNCs or Mahrenbach (2013) on EPs/EMs, Pieterse (2011) privileges sociologically informed "emerging societies". In turn, especially in IR, there are burgeoning analyses of emerging powers, regional and otherwise (Flemes, 2010; Jordaan, 2003; Nel and Nolte, 2010; Nel et al., 2012), some of which might inform new regionalist perspectives, especially as these are increasingly impacted by the divergence between the BRICS and the PIIGS of the eurozone. In turn, they inform and advance alternative definitions of and directions for development.

Despite the US subprime and EU euro crises at the start of the twenty-first century, foreign direct investment (FDI) in Africa continues to grow, reaching US$50 billion in 2013, primarily from China, India and Malaysia; that is, from EMs as well as FMs, even if Taylor (2014) is somewhat sceptical about the continent's sustainable development. With new energy discoveries and investments, a second tier of oil producers has emerged after Nigeria and Angola: Equatorial Guinea, Congo-Brazzaville, Gabon, South Sudan and now Ghana, with Uganda eager to join. Liquefied natural gas is now exported from Nigeria, Equatorial Guinea and Mozambique; will the latter be able to challenge the dominance of Qatar and Australia by 2020? And by late 2014, the World Bank plans to launch a $1 billion fund to map the continent's mineral resources to advance the continent's own Africa Mining Vision (AMV) (http://www.africaminingvision.org).

Varieties of development

"Development" was a notion initially related to postwar decolonization and bipolarity. It was popularized in the "Third World" in the 1960s, often in relation to "state socialism" and one-party, even one-man, rule, but was superseded by neo-liberalism and the Washington Consensus. Yet the newly industrialized countries (NICs), then the BRICs and now the EMs have pointed to another way, by contrast to those in decline like fragile states (Brock et al., 2012); such "developmentalism" (Kyung-Sup et al., 2012) has now even reached Africa (UNECA, 2011, 2012) with its burgeoning FMs as well as EMs. But, while the "global" middle class grows in the South, so do inequalities, along with non-communicable diseases (NCDs) like cancer, coronary disease and diabetes. Given the elusiveness as well as the limitations of the MDGs (Wilkinson and Hulme, 2012), the UN has been debating and anticipating post-2015 development desiderata (http://www.un.org/millenniumgoals/beyond2015)
including appropriate, innovative forms of governance as encouraged by networks around INGOs (http://www.beyond2015.org) and think tanks (http://www.post2015.org) (see "Varieties of transnational governance"). Aid is now about cooperation rather than money per se ("alliances" as conceived by Richey and Ponte (2014)), as a range of flows, especially from "new" actors, is attracted to as well as from the global South, including private capital, FDI, ETFs, philanthropy/FBOs, remittances and SWFs, let alone money-laundering (Shaxson, 2012); ODA by members of the Development Assistance Committee of the OECD (http://www.oecd.org/dac) is a shrinking proportion of transnational transfers (Brown, 2011, 2013; Sumner and Kirk, 2014). Meanwhile, within such dramatic global reordering, the varieties of capitalisms, state and non-state, proliferate.

Varieties of capitalisms

As indicated by my focus, the world of capitalism has never been more diverse: from the old trans-Atlantic and trans-Pacific to the new global South with its own diversities, such as Brazilian, Chinese, Indian and South African "varieties of capitalisms" (Nolke, 2014); that is, FMs as well as EPs/EMs (Mahrenbach, 2013). Goldstein (2007) introduced EM MNCs in the IPE Series I continue to edit for Palgrave Macmillan, including a distinctive second index: five pages of company names of EMNCs (see next paragraph). And in the post-neo-liberal era, SOEs, especially NOCs (Xu, 2012), are burgeoning. Both the US/UK neo-liberal, the continental/Scandinavian corporatist and the Japanese/East Asian developmentalist "paradigms" are having to rethink and reflect changing state-economy/society relations beyond ubiquitous "partnerships" (Overbeek and van Apeldoorn, 2011). Furthermore, if we go beyond the formal and legal, then myriad informal sectors and transnational organized crime (TOC)/money-laundering are ubiquitous (see "Informal and illegal economies: from fragile to developmental states?").

For the first time, in the "Global Fortune 500" of (July) 2012, MNC HQs were more numerous in Asia than in either Europe or North America. There were 73 Chinese MNCs so ranked (up from 11 a decade ago in 2002), along with 13 in South Korea and 8 each in Brazil and India. Each of the BRICS/EMs/EPs hosted some global brands: for example, Geely, Huawei and Lenovo (China); Hyundai, Kia and Samsung (Korea); Embraer and Vale (Brazil); Infosys, Reliance and Tata (India); Anglo American, De Beers and SABMiller (RSA) and so on.

The pair of dominant economies in Sub-Saharan Africa (SSA) is unquestionably Nigeria and South Africa; yet, despite being increasingly connected, they display strikingly different forms of "African" capitalisms and NRG. Nigeria, including its mega-cities like Lagos and Ibadan, is a highly informal political economy with a small formal sector (beer, consumer goods such as soft drinks and soaps, finance, telecommunications and so on); by contrast, despite its ubiquitous shanty-towns, South Africa is based on a well-established formal economy centred on mining, manufacturing, farming, finance, services and so on. Both have significant diasporas in the global North, especially the United Kingdom and the United States, including Nigeria's in RSA, especially Jo'burg, remitting funds back home. Since majority democratic rule, South African companies and supply chains, brands and franchises have penetrated the continent: initially into Eastern from Southern Africa, but now increasingly into West Africa and Angola.
As reflected in the multiplication of acronyms (MINT/MIST and so on), the post-BRICS era is marked by a proliferation of EMs and now FMs as growth and profits from the BRICS and EMs/EPs decline: onto and beyond the N-11. So the Guggenheim FMs ETF includes over 40 countries concentrated in the Baltics, the Gulf and ME, Central Europe and South America, some without stock exchanges (http://www.guggenheiminvestments.com). As smaller, less-developed markets, FMs carry higher investment risks than the BRICS or EMs. And many FM investments are in stocks which grow with the middle classes: the "BBC" of banks, brewers and cement; for example, in Nigeria, Zenith Bank, Nigerian Breweries and Dangote Cement. Market Vectors ETFs from Van Eck Global include Africa, Brazil, China, Colombia, Egypt, Gulf States, India, Indonesia, Latin America and Vietnam (http://www.vaneck.com); and iShares by BlackRock offers ETFs on Brazil, Chile, Colombia, Mexico, South Africa and Turkey (http://www.iShares.com). Likewise, the top five holdings in Claymore/BNY Mellon FM are Chile, Poland, Egypt, Colombia and Kazakhstan, with holdings concentrated in finance, minerals, utilities, energy and telecoms (http://www.guggenheiminvestments.com). Africa is the classic FM region, increasingly attractive because of its resources and its resilience in the face of the "global" recession. But regions in the global South can now be compared in terms of the numbers/dynamism of EMs/FMs: from Africa to Central Asia and on to South America, especially the Andean states: Bolivia, Ecuador and Peru.

New regionalisms

The proliferation of states along with capitalisms post-bipolarity has led to a parallel proliferation of regions, especially if diversities of non-state, informal, even illegal regions are so considered. And the eurozone crisis concentrated in the PIIGS has eroded the salience of the EU as model, leading to a recognition of a variety of "new" regionalisms (Flemes, 2010; Shaw et al., 2011). These include instances of "African agency" (Lorenz-Carl and Rempe, 2013), like South African franchises and supply chains reaching to West Africa and the Trilateral FTA among COMESA, EAC and SADC (T-FTA) (Hartzenberg et al., 2012), along with older/newer regional conflicts like the Great Lakes Region, plus the regional as well as global dimensions of, say, piracy off the coast of Somalia (ACBF, 2014; Hanson et al., 2014). The third ACIR, in 2014 from the ACBF (2014), focuses on capacity for regional development.

Blue-ribbon Commissions on Drugs in Latin America and now West Africa (http://www.wacommissionondrugs.org) indicate how far the new South has come in defining its own agenda, direction and pace. The former predated, the latter postdated, the erstwhile Global Commission on Drugs & Policy (http://www.globalcommissionondrugs.org), just as social forces in the United States get to decriminalize and then incrementally commercialize marijuana. And increasingly ubiquitous airline alliances link regional hubs, including in the global South, especially Asia, from Singapore to Panama. In Southeast Asia, Singapore Airlines/Star Alliance is dominant; likewise, South African/Star Alliance in Africa; but in South America, LAN/oneworld is increasingly hegemonic. And in the Gulf, between Europe and Asia, the trio of burgeoning airlines takes different trajectories: Emirates is its own de facto global alliance; Etihad purchases other carriers from Air Berlin to Alitalia; and Qatar is now oneworld.
Emerging "global" issues

A growing number of global issues is increasingly recognized, arising in the global South as well as resulting from excessive consumption/pollution in the North, such as NCDs like diabetes. In the immediate future, these issues will include the environmental and other consequences of climate change and health viruses/zoonoses. They will also extend to myriad computer viruses and cybercrime (Kshetri, 2013). Given the new attention to the energy/food/land/water nexus, some suggest that we may be running out of basic commodities like energy (Klare, 2012) and water, let alone rare-earth elements. Finally, after the recent global and regional crises, the governance of the global economy is at stake: the financialization syndrome of DBRAs, derivatives, from EMs to FMs, ETFs/ETNs, hedge and pension funds, SWFs and so on (Overbeek and van Apeldoorn, 2011).

Informal and illegal economies: from fragile to developmental states?

"Shadow banking" via "shell corporations" has become a set of ubiquitous global networks with centres in London and Miami rather than the Cayman or Virgin Islands (Findley et al., 2014), about which the G20 is rather ambivalent despite its anti-money-laundering norms and agencies (Findley et al., 2013). Developing out of the internet, new mobile technologies increasingly facilitate the informal/illegal as well as otherwise, including the dramatic rise of mobile money (M-Pesa) in SSA, especially in Kenya and RSA. The "informal sector" is increasingly recognized in the discipline of anthropology and so on, as is the "illegal" in the field of IPE (Friman, 2009; Naylor, 2005); these are increasingly informed by telling Small Arms Survey (SAS) annual reports, now appearing for more than a decade with a focus on fragile states (http://www.smallarmssurvey.org). Similarly, TOC is increasingly transnational with the proliferation of (young/male) gangs from myriad states (Knight and Keating, 2010: 274-300). In response, the field of IPE needs to develop analyses and prescriptions informed by the established annual SAS and the Latin American, then Global, Commission on Drugs and Drug Policy/Health (http://www.globalcommissionondrugs.org), and now, at the start of a new decade, by Google Ideas regarding the illicit (http://www.google.com/ideas). As supply chains shifted away from Central America and the Caribbean to West Africa in response to the "war on drugs", as already noted in "New regionalisms", the Kofi Annan Foundation created a preemptive, preventive West African Commission on Drugs (http://www.wacommissionondrugs.org): another definition of regionalism.

Varieties of transnational governance

Just as "governance" is being redefined/rearticulated (Bevir, 2011), so the "transnational" is being rediscovered/rehabilitated (Dingwerth, 2008; Hale and Held, 2011), following its marginalization after its initial articulation at the start of the 1970s by Keohane and Nye (1972): they identified major varieties of transnational relations such as communications, conflict, education, environment, labour, MNCs, religions and so on; and Harman and Williams (2013) have produced a very useful teaching collection of case studies for the present decade. And Brown (2011) updated such perspectives with a more economics-centred framework which included civil society, remittances and so on. Now Weiss and Wilkinson (2014b) rhetorically suggest that such global governance may yet save IR/IL/IO and so on.
In turn, I would add to global governance such contemporary transnational issues as brands and franchises; conspicuous consumption by emerging middle classes (from Audis/BMWs/Mercedes to tourism and, alas, drugs to treat the plague of diabetes); world sports, such as FIFA and the IOC; global events from World Fairs to the Olympics and world soccer; logistics and supply chains (legal and formal and otherwise); mobile digital technologies, including mobile money; new film centres such as Bollywood and Nollywood, including diasporas, film festivals, tie-ins and so on; and new media such as Facebook and Twitter. But such heterogeneous relations/perspectives, including the KP, the EITI and the AMV, deserve further attention in terms of their contribution to sustainable development in Africa and elsewhere.

Global governance and new regionalisms by mid-century?

In conclusion, I juxtapose a trio of changes which will probably impact global governance and comparative regionalisms in policy and practice in Africa and elsewhere post-2015: (a) exponential global restructuring in myriad areas, from economics and ecology to diplomacy and security (Besada and Kindornay, 2013; Overbeek and van Apeldoorn, 2011); (b) a shift in the direction and concentration of resource flows and supply chains away from S-N towards S-E; and (c) continued evolution in multi-stakeholder communities to incorporate SOEs, SWFs, pension funds, ETFs and so on, as well as MNCs, especially from the BRICS and other EMs/FMs.
Developing the technology of foodstuffs using ingredients rich in ω-3 PUFA

The problems of developing functional foods enriched with ω-3 polyunsaturated fatty acids are discussed in this article. Cod liver is chosen as a source of such valuable substances. The problems of processing and using cod liver are reviewed. A method of using microwave-cooked cod liver without sterilization as a base for a series of products is proposed. The results of experimental research on developing and choosing an optimal composition for different products (meat and fish sausages with the addition of cod liver or its oil, and bakery products) are shown. The composition of frankfurters with fish mince, meat and cod liver oil has been optimized. A basic composition of frankfurters with meat, fish protein isolate (FPI), and cod liver has been chosen and then optimized both for the main ingredients (meat, fish, and cod liver) and for additives (fermented rice, guar gum). A technology of bakery products using fish oil as a bakery agent (combined with glutathione from inactivated yeasts) has been developed. The characteristics of gluten with such bakery agents have been studied. Test bakings have been carried out for such bread with the addition of algae, bran, and milk thistle. All samples have been evaluated for sensory and structural characteristics.

Introduction. Cod is one of the main commercial fishes of the Northern basin and one of the most important natural resources of the Arctic region. The catch and processing of cod on an industrial scale make a significant contribution to the economy of the Northern region. Two priority areas can be identified for ensuring sustainable development of the Arctic region. In the economic sphere, the most needed measures are strengthening the coherency and reliability of the transportation system, securing energy supplies to distant consumers, and stimulating the investment and industrial activities of industrial enterprises. In the social (socio-economic) sphere, it is necessary to maintain and improve public health [1]. A potential increase in the share of fish and fish product consumption (in particular, cod, cod liver oil and oil-containing products) can have a positive impact on the health of the population of the Arctic zone of the Russian Federation.

One of the important problems of developing functional foods is improving the fatty component of foodstuffs by adding polyunsaturated fatty acids (PUFA), especially ω-3 [2]. ω-3 PUFA have a very high nutritional value and a therapeutic and prophylactic action against a series of diseases, especially cardio-vascular diseases [3]. Fish [4] and seafoods [5] are a traditional and very common source of ω-3 PUFA. Among these kinds of raw materials, it is reasonable to point out the liver of fishes of the Gadidae family. However, traditional methods of cod liver processing have some disadvantages. Firstly, it is very difficult to use frozen cod liver for producing high-quality foods. Secondly, using very fatty liver and frozen raw material for canned foods results in a high amount of free oil in the can [6]. The authors propose not to use raw cod liver for producing foodstuffs, but rather a semi-finished cod liver product that has been microwave-processed. It is also possible to use the fish oil separated during such processing. One way of processing cod liver is producing sterilized canned foods, but this method is not the only possible way.
The direct addition of microwave-cooked cod liver and extracted oil might be of special interest for producing a series of combined foodstuffs (including culinary ones) which are analogues of traditional foods, for example, sausages. Using fish raw material in the technology of sausages is not a brand-new method, but it seems quite promising. Unlike traditional meat sausages, meat and fish products make it possible to use highly valuable fish raw materials, and compared to fish products they have more traditional sensory characteristics [7].

Another direction of research includes using cod liver oil as a bakery agent. The quality of the final product is highly dependent on the quality of the main raw material, wheat flour, whose properties are not always constant. Bakery agents are needed to provide defined characteristics (rheological, colloidal, sensory) to the dough and the bakery product. They make it possible to produce bread and bakery products of stable quality even when the flour quality is unstable. One of the widespread groups of bakery agents includes agents of oxidative and reductive action. Fats and oils, including extracted fish oil, are agents of oxidative action. The complex usage of both oxidative and reductive agents is practiced to achieve the best technological effect [8]. Glutathione has been chosen as a reductive agent. It is a tripeptide containing a cysteine residue with an -SH group. It can break disulfide bonds in the gluten molecule, changing the structural and mechanical characteristics of the dough. Yeasts can be used as the source of glutathione. It is also reasonable to use accelerated kneading of the dough and to reduce the time of dough fermentation [9]. Inactivated yeast cells have been used as a source of glutathione.

Materials

The chilled liver of Atlantic cod (Gadus morhua), shipped by fishing companies (Sevros LLC and Bionord LLC, Murmansk, Russia) all year round, was used as the source of ω-3 PUFA. The cod liver was preliminarily cooked using microwave heating, followed by freezing and storage at a temperature of minus 18 °C. Fish oil extracted during the microwave treatment was purified by sedimentation and decantation. These semi-finished products have been used for producing the new kinds of foodstuffs. Other raw materials (meat, salt, spices etc.) have been obtained from the local market. Fish protein isolate (FPI) was produced by dissolving the muscle tissue of the fish raw material in an alkaline medium, followed by protein sedimentation at the isoelectric point (slightly acidic medium) [10]. Blue whiting was used as the fish raw material.

Experimental and data processing methods

The Kjeldahl method [11] was implemented using Selecta Bloc Digest and Selecta Pro-Nitro modules (Spain). The obtained nitrogen content was recalculated to raw protein (P) using the following formula:

P = N · k, (1)

where N is the total nitrogen content and k is the recalculation coefficient.
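As a small worked example of formula (1), the sketch below converts a measured Kjeldahl nitrogen content into raw protein. The nitrogen value and the factor k = 6.25 (a conventional nitrogen-to-protein coefficient) are illustrative assumptions, since the paper does not state which coefficient was used.

```python
# Worked example of formula (1): raw protein from Kjeldahl nitrogen.
# The nitrogen value and k = 6.25 (a conventional nitrogen-to-protein
# factor) are illustrative; the paper does not state its coefficient.

def raw_protein(total_nitrogen_pct: float, k: float = 6.25) -> float:
    """P = N * k, with both N and P expressed as % of sample mass."""
    return total_nitrogen_pct * k

print(raw_protein(2.4))  # 2.4 % nitrogen -> 15.0 % raw protein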
The lipid content in the samples of raw material, FPI and the foodstuffs produced from them (canned foods) was determined using a Selecta DET/GRAS extractor (Spain) by the Soxhlet method. The fatty acid composition of the lipids was determined by high-performance liquid chromatography (HPLC) with an Agilent 1100 (USA) after saponification of the lipids with an alcohol solution of 2 N KOH and pre-column derivatization with bromophenacyl bromide and triethylamine [12]. The penetration strength was used for estimating the structural and mechanical characteristics. A Food Checker (Japan) with a spherical indenter (diameter 8 mm), immersed to a depth of 10 mm at constant speed and at a temperature of 20 °C, was used to obtain this characteristic. The acid number of the oil was determined after oil extraction with a mixture of chloroform and ethyl alcohol by titration with 0.1 M sodium hydroxide with phenolphthalein. The peroxide number of the oil was determined by titration, with sodium thiosulfate, of the iodine displaced from potassium iodide by peroxides and hydroperoxides [13]. Sensory evaluation was carried out using estimation scales [14]; the generalized sensory score (%) was calculated by this method. The water- and fat-holding capacity and the stability of the mince emulsion were determined using the Salavatulina method, which includes determining the losses of water and fat, as well as the emulsion characteristics, after heating the mince in the can [15]. The porosity of the bakery products was determined using a Zhuravlyov device, the water content was found by drying, and the raw gluten quantity was determined after washing the dough ball [16]. Methods of experimental design (in particular, the central composite rotatable design) were used in this research. Choosing and evaluating the regression equations, and the regression analysis as a whole, were carried out using Oakdale Datafit 9.1.

Using FPI and microwave-cooked cod liver in the technology of meat and fish sausages

Producing meat and fish sausages may be one of the directions of using semi-finished cod liver. Two variants of the composition have been used in the studies of meat and fish products. The first composition includes meat, chicken egg yolk, starch, tomato paste, spices, washed and dried fish mince, and cod liver oil. The second composition includes, in addition to the first one, FPI from blue whiting and microwave-cooked cod liver. The problem is that it is not possible to estimate the optimality of the texture from the penetration strength alone: equally unacceptable would be both an extremely hard texture and an extremely soft, spreadable product (if it is not a paste). So, the experiments were directed at finding a regression dependency of the texture estimation (acceptability) on the penetration strength. The first series of experiments was carried out to develop the technology of analogues of frankfurters colored by adding tomato paste and regulating the acidity. The regression equation is the following:

Y = b0 + b1·σ² + b2·σ³, (2)

where b0 = 2.260, b1 = 5.174×10⁻³ and b2 = −9.547×10⁻⁵ are the regression coefficients; σ is the penetration strength, kPa; Y is the texture estimation (from 0 to 5, higher is better). So, it is not difficult to calculate the optimal value of the penetration strength, σopt = −2b1/(3b2) = 36.1 kPa. According to this result, the use of a relative structural and mechanical characteristic (YR, %) was proposed. It can be calculated by the following equation:

YR = 100 · (1 − |σ − σopt| / Δσcr), bounded below by 0, (3)

where σ is the penetration strength, kPa; σopt is the optimal penetration strength, kPa (36.1); and Δσcr is the critical deviation of the penetration strength, i.e., the deviation at or above which the texture changes significantly, kPa (4). This characteristic varies in the range from 0 (worst) to 100 (best), so it is possible to join it with the generalized sensory score. The generalized quality level has been calculated by the following equation:

K = a1·YR + a2·YS, (4)

where a1 and a2 are significance coefficients (0.3 and 0.7) and YS is the generalized sensory score, %.
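To make equations (2)-(4) concrete, the following minimal Python sketch evaluates the cubic texture model, locates the optimal penetration strength analytically, and combines the relative structural-mechanical characteristic with a sensory score into the generalized quality level. The sample penetration strength and sensory score in the usage example are illustrative values, not data from the paper.

```python
# Minimal sketch of the texture-quality calculations in equations (2)-(4).
# The sample inputs at the bottom are illustrative, not values from the paper.

b0, b1, b2 = 2.260, 5.174e-3, -9.547e-5   # regression coefficients of eq. (2)

def texture_estimation(sigma):
    """Eq. (2): texture score (0-5) vs. penetration strength sigma, kPa."""
    return b0 + b1 * sigma**2 + b2 * sigma**3

# Stationary point of eq. (2): dY/dsigma = 2*b1*sigma + 3*b2*sigma**2 = 0
sigma_opt = -2 * b1 / (3 * b2)            # ~36.1 kPa, as reported

def relative_texture_score(sigma, d_crit=4.0):
    """Eq. (3): relative structural-mechanical score, 0 (worst) to 100 (best)."""
    return max(0.0, 100.0 * (1.0 - abs(sigma - sigma_opt) / d_crit))

def generalized_quality(sigma, sensory_score, a1=0.3, a2=0.7):
    """Eq. (4): weighted combination of texture (30%) and sensory (70%) scores."""
    return a1 * relative_texture_score(sigma) + a2 * sensory_score

print(f"optimal penetration strength: {sigma_opt:.1f} kPa")
print(f"texture estimation at optimum: {texture_estimation(sigma_opt):.2f} / 5")
# Example: a specimen with 34.5 kPa penetration strength and a sensory score of 85%
print(f"generalized quality level: {generalized_quality(34.5, 85.0):.1f} %")
```

Note that the cubic form of equation (2) is consistent with the reported optimum: setting its derivative to zero gives σopt = −2b1/(3b2) ≈ 36.1 kPa, with a predicted texture score of about 4.5 out of 5 at that point.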
A central composite rotatable design has been used for the following studies. The factors are: X1, the meat-to-fish ratio; X2, the quantity of cod liver oil added. The design and results are shown in Table 1, and the response surface is shown in Figure 1. The resulting regression equation does not make it possible to find an optimum within the range of factor variation, but it can be noted that going outside this range is very undesirable, since it would shift the product into another assortment group. Thus, specimen no. 2 has been chosen as a near-to-optimal point. It has been evaluated for the different characteristics shown in Table 2.

In the second variant of the experiment, the formulation included non-fatty pork, microwave-cooked cod liver, FPI, eggs, potato starch, salt, spices and some other additional ingredients. It was reasonable to balance both the basic and the additional ingredients. The following optimization parameters have been chosen for optimizing the basic ingredients: the generalized sensory score of the product, %; the penetration strength, kPa; and the cost of the raw material composition, ₽/kg. The authors have chosen the following factors as the ones most significantly influencing the quality of the resulting product: the ratio of non-fatty pork to microwave-cooked cod liver (X1), and the FPI dosage, kg (X2). A generalized numeric quality characteristic of the meat and fish frankfurters has been composed as the response. It includes the generalized sensory score (Y1), the value of the relative structural and mechanical characteristic (YR2), and the relative raw material cost (YC3), combined into the response Y0. As in the previous series of experiments, the optimal value of the penetration strength, kPa, entering YR2 was determined by the regression method; YC3 was obtained by relating y3, the cost of all raw materials needed to produce 1 kg of finished product, ₽, to 300 ₽, the practically minimal possible cost of raw materials for 1 kg of product.

The central composite rotatable design for optimizing the composition of the meat and fish frankfurters has been developed using experimental design theory; it is shown in Table 3. Computer processing of the experimental data resulted in a regression equation with an F-ratio of 9.77 and an inadequacy probability of 0.096. This model is quite complex and includes a subjective sensory parameter (which is the most significant one according to the experts' estimations), so a confidence level above 0.9 is neither needed nor expected. All regression coefficients are significant at a confidence level of not less than 0.95. The response surface, which makes it possible to analyse the factors' influence on the generalized quality level, is shown in Figure 2. The optimal factor values are the following: X1 (pork mince to microwave-cooked liver ratio) is 2.73; X2 (FPI dosage) is 2.09.

In addition to balancing the main ingredients of the sausage product, it is important to determine the dosage of the additional ingredients, which can influence the yield of the finished product, its structural and mechanical characteristics, and its color. So, the next stage of developing the composition of the meat and fish product was an optimization of the additional ingredients. Guar gum has been used to provide the most acceptable structural and mechanical characteristics of the product. It was also decided to drop the use of tomato paste as a coloring agent and to use fermented rice to provide a slightly pink color of the product. An objective characteristic of the color of the finished product is its additive color model (RGB); using this model instead of sensory color determination increases the accuracy of the experiment. But the RGB model includes 3 parameters, each varying in the range from 0 to 255, so it is reasonable to determine the summed square deviation from an optimal reference. A sequential sampling of colors was made, and the most acceptable colors for frankfurters were determined by the expert method. The results show that the optimal RGB color is (248, 170, 168). So, it is possible to develop a central composite design for a two-factor experiment. The generalized optimization parameter includes 5 single parameters; those expressed as percentages need to be maximized and require no recalculation, while the other parameters have been recalculated to the same 0-100 range before being combined in equation (13).
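A minimal sketch of the color-scoring step follows: it converts the summed square deviation of a measured RGB color from the reference (248, 170, 168) into a score on the same 0-100 scale as the other single parameters. Normalizing by the maximum attainable deviation is an assumption made for illustration; the paper does not spell out its rescaling.

```python
# Sketch of the RGB color scoring: summed square deviation from the
# optimal reference color, rescaled to 0 (worst) .. 100 (best).
# Normalizing by the maximum attainable deviation is an illustrative
# assumption; the paper does not spell out its rescaling.

REF = (248, 170, 168)  # optimal RGB color chosen by the expert method

def color_score(rgb):
    dev = sum((c - r) ** 2 for c, r in zip(rgb, REF))
    max_dev = sum(max(r, 255 - r) ** 2 for r in REF)  # worst attainable deviation
    return 100.0 * (1.0 - dev / max_dev)

print(f"{color_score((248, 170, 168)):.1f}")  # reference color -> 100.0
print(f"{color_score((230, 150, 150)):.1f}")  # slightly off-pink -> high score
print(f"{color_score((90, 90, 90)):.1f}")     # grey -> noticeably lower score
```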
The design and results are shown in Table 4. The F-ratio for regression equation (13) is 11.12, and the inadequacy probability is 0.01. All regression coefficients are significant at a confidence level of not less than 0.99. The response surface is shown in Figure 3. Using the differential method, the local maximum has been found. The optimal factor values are X1 = 0.33 and X2 = 0.25. The optimal composition of the meat and fish frankfurters is shown in Table 5.

Physical and chemical experiments have been carried out to prove the functional properties of the finished products. The results are shown in Table 6. Thus, owing to the additives, monounsaturated acids dominate in the frankfurters; moreover, a large amount of ω-3 PUFA is present. The low amount of ω-6 PUFA can be compensated by vegetable oils. The results in Table 6 confirm the high content of saturated and monounsaturated fatty acids. It can also be said that ω-3 PUFA dominate within the total PUFA content.

Producing bakery products using cod liver oil

In this work, marketing research was conducted using a questionnaire to determine the most preferred types of bakery products and the most desirable additives. The survey results showed that consumers prefer the traditional types of bread, shaped (baked in a form) and hearth, almost equally. The analysis showed that algae (kelp) (68% of the total number of respondents) and additives based on fish lipids (27% of the total number of respondents) were chosen among the additives from marine hydrobionts for inclusion in the bread recipe. Consumers also selected the addition of grains and seeds, for example, oat grains, flax seeds and sunflower seeds (73% of the total number of respondents), and of dietary fiber, for example, bran (56% of the total number of respondents).

Studies of using cod liver oil in producing white bread have been carried out. A series of baking experiments with specimens of bread containing cod liver oil and glutathione has been performed. The best characteristics were obtained with an oil dosage of 1% of the flour mass. Table 7 shows the results of studying the gluten characteristics, and Table 8 shows the properties of the specimens of the finished product. Compositions of bakery products from wheat flour with the addition of algae, bran, and milk thistle have been developed to expand the assortment of bakery products. Bran is a rich source of dietary fibers, which help to regulate the work of the intestine. Algae is a source of iodine; it also contains natural sorbents (alginates) and sterols which can have a positive effect on the human organism. Milk thistle contains both dietary fibers and flavonolignans. Specimens of wheat bread with such additives have been prepared and evaluated; the results are shown in Table 9. So, quite a high quality of the product can be achieved using the algae addition.
Using bran also makes it possible to obtain a high-quality product. This technology is patented (RU2579362 and RU2579363).

Conclusions

The technology of producing meat and fish frankfurters with FPI and microwave-cooked cod liver has been developed; the optimal composition has been found and proved, and experimental samples have been studied. The relevance of improving the technology of sausage products, directed at producing a high-quality meat product with enhanced food and nutritional value, has been demonstrated. Special attention has been paid to the coloring process: it was decided to drop doubtful traditional ingredients such as sodium nitrite and phosphates. The sensory evaluation of the frankfurters has been carried out, and the total chemical composition and the penetration strength have been determined. The food and nutrition values have been measured. It was proved that frankfurters with cod liver and FPI are rich in ω-3 PUFA. Using the combination of cod liver oil and glutathione as bakery agents has been shown to enhance both the structural and mechanical, and the sensory, characteristics of bakery products. Adding such components as algae, bran, and milk thistle can in several ways increase the nutritive value of the product and satisfy consumer demand for bread products with natural additives.

Acknowledgements

This work was carried out with the help of the Russian Science Foundation, project 16-16-00076.
NF-kappa B Signaling-Related Signatures Are Connected with the Mesenchymal Phenotype of Circulating Tumor Cells in Non-Metastatic Breast Cancer

The role of circulating tumor cells (CTCs), the tumor microenvironment (TME), and the immune system in the formation of metastasis is evident, yet the details of their interactions remain unknown. This study aimed at exploring the immunotranscriptome of primary tumors associated with the status of CTCs in breast cancer (BCa) patients. The expression of 730 immune-related genes in formalin-fixed paraffin-embedded samples was analyzed using the multigenomic NanoString technology and correlated with the presence and the phenotype of CTCs. Upregulation of 37 genes and downregulation of 1 gene were observed in patients characterized by a mesenchymal phenotype of CTCs when compared to patients with epithelial CTCs. The upregulated genes were involved in NF-kappa B signaling and in the production of type I interferons. The clinical significance of the differentially expressed genes was evaluated using The Cancer Genome Atlas (TCGA) data of a breast invasive carcinoma (BRCA) cohort. Five of the upregulated genes (PSMD7, C2, IFNAR1, CD84, and CYLD) were independent prognostic factors in terms of overall and disease-free survival. To conclude, our data identify a group of genes that are upregulated in BCa patients with mesenchymal CTCs and reveal their prognostic potential, thus indicating that they merit further investigation.

Introduction

Distant metastases account for most cancer-related deaths. Yet, fundamental questions regarding the mechanisms that promote or inhibit the formation of metastasis still remain unanswered. It is evident that in breast cancer (BCa), tumor cell dissemination occurs already at early stages of the disease [1,2].

Expression of Immune-Related Genes within Primary Tumours Correlated with the Phenotype of CTCs

To investigate the immune transcriptome associated with each phenotype of CTCs, we applied NanoString multigene expression analysis to samples of primary breast tumors. We observed that the mesenchymal phenotype of CTCs (n = 9) was associated with the upregulation of 37 genes and the downregulation of 1 gene in primary tumors (p-value ≤ 0.05, false discovery rate (FDR) ≤ 0.2) when compared to the epithelial phenotype of CTCs (n = 14) (Table 1, Figure S1; all results in Table S1). Due to the limited number of patients included in our study, we employed the conservative FDR method of multiple testing correction. Our aim was also to explore the association between the expression of immune-related genes within the primary tumors and the overall presence of CTCs. We found no statistically significant differences between the primary tumors' immunotranscriptomes in relation to the patients' CTC status (positive vs. negative). Here, we observed that multiple genes differentially expressed in patients with epithelial and mesenchymal CTC phenotypes (Table 1) play a role in the NF-kappa B signaling pathway. Consequently, we decided to interrogate this link more carefully. We applied the Functional Annotation Tool of DAVID 6.8 [33,34] to associate the selected genes with specific functional annotations. Genes upregulated in tumors with mesenchymal CTCs were generally involved in the activation and regulation of the immune response (Table S2).
Interestingly, 15 out of the 37 upregulated genes (FADD, TLR7, TNFRSF11A, IL1RAP, PSMD7, TICAM1, IRF3, BCL10, IKBKE, TRAF6, RELA, IKBKG, TBK1, PSMB10, and CYLD) were implicated in the regulation of NF-kappa B signaling and activity (GO:0043122, GO:0051092, and GO:0038061). A literature search provided a number of links between the other 11 differentially expressed genes (CCRL2, PBK, TNFSF13, BIRC5, TAPBP, ELK1, STAT6, ATG10, IFNAR1, CCND3, and MAP2K1) and the NF-kappa B pathway (the top 12 genes are depicted in Figure 1). Analysis of Gene Ontology also revealed that nine of the upregulated genes regulate the production of type I interferons (GO:0032479; TLR7, TICAM1, IRF3, IKBKE, RELA, TBK1, STAT6, CYLD, and IFNAR1), with a particular role in the stimulation of interferon beta (GO:0032728; TLR7, TICAM1, IRF3, TBK1, and IFNAR1).

Figure 1. Genes implicated in NF-kappa B signaling were upregulated in primary tumors of breast cancer patients with mesenchymal CTCs (MES, n = 9) when compared to patients with epithelial CTCs (EPI, n = 14); the top 12 upregulated genes are presented. Gene expression is depicted as the number of counts of each probe, normalized to the four most stable reference genes (ABCF1, EDC3, HDAC3, and CNOT4); FC was calculated on the basis of the median normalized counts of the probe in each group; differences in median normalized counts between groups were analyzed with the Mann-Whitney U test (p); the bars correspond to the interquartile range (IQR), and the whiskers cover 1.5 IQR from the median.
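The DAVID annotation step used above boils down to an over-representation test per GO term (DAVID's EASE score is a conservative variant of Fisher's exact test). The sketch below shows the standard Fisher version on a 2x2 contingency table; the per-term background count is made up for illustration, while the list sizes (37 upregulated genes, 730 panel genes) follow the text.

```python
# Over-representation of a GO term in an upregulated gene set, in the spirit
# of DAVID's Fisher-exact/EASE testing. The background count is illustrative.
from scipy.stats import fisher_exact

up_in_term = 15   # upregulated genes annotated to the term (e.g., NF-kB regulation)
up_total = 37     # all upregulated genes
bg_in_term = 60   # panel genes annotated to the term (made-up value)
bg_total = 730    # genes on the PanCancer Immune Profiling Panel

# rows: in term / not in term; columns: in gene list / in the rest of the panel
table = [[up_in_term, up_total - up_in_term],
         [bg_in_term - up_in_term,
          (bg_total - up_total) - (bg_in_term - up_in_term)]]
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds:.2f}, one-sided p = {p:.2e}")
```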
Immune-Related Genes Connected with the Mesenchymal Phenotype of CTCs Are Potent Negative Prognostic Factors in Breast Cancer

We have previously demonstrated that the mesenchymal phenotype of CTCs correlates with a poor prognosis in breast cancer patients [21]. Consequently, we decided to evaluate the prognostic significance of the immune-related genes that we found significantly up- or downregulated in primary tumors of BCa patients with mesenchymal CTCs. To this end, we turned to The Cancer Genome Atlas (TCGA) database and analyzed the available RNA-seq data on gene expression in a breast invasive carcinoma (BRCA) cohort (n = 877) [35,36]. Five out of the 38 genes (PSMD7, C2, IFNAR1, CD84, and CYLD) associated with the mesenchymal phenotype of CTCs demonstrated a negative prognostic impact in the TCGA cohort. Moderate (higher than the first quartile, Q1 in Table S3) expression of PSMD7 correlated with shorter overall survival (OS) in comparison to low expression of PSMD7 in primary tumors (HR = 1.75, 95% CI: 1.08-2.82, p = 0.022; Figure 2A). A higher risk of recurrence was observed for tumors with moderate (higher than the first quartile, Q1 in Table S3) expression of C2 (HR = 4.51, 95% CI: 1.36-14.96, p = 0.014; Figure 2B) and IFNAR1 (HR = 2.68, 95% CI: 1.03-6.97, p = 0.043; Figure 2C) in comparison to tumors with low expression of these genes; high (higher than the third quartile, Q3 in Table S3) expression of CD84 (Figure 2D) and CYLD (HR = 2.20, 95% CI: 1.09-4.43, p = 0.028; Figure 2E) was also linked to shorter disease-free survival (DFS) in comparison to low expression of these genes in the primary tumors. Multivariate analysis including the clinical stage confirmed the significance of the aforementioned genes as independent prognostic factors (Table S3). Low/moderate status of gene expression is relative to the first quartile (Q1); low/high status of gene expression is relative to the third quartile (Q3); hazard ratios (HR) with 95% confidence intervals (95% CI) were computed using Cox proportional hazards regression; OS: overall survival, DFS: disease-free survival.

Discussion

The knowledge about immune signatures related to tumor dissemination is still limited. Our current study aimed to identify the immunotranscriptomic profiles of primary tumors associated with the presence of CTCs and the CTC phenotype in non-metastatic BCa patients. Our data revealed that 38 genes were differentially expressed in the primary tumors of patients with mesenchymal CTCs when compared to patients with epithelial CTCs. Intriguingly, we did not observe any statistically significant difference between primary tumor transcriptomes when comparing CTC-positive and CTC-negative patients (Table S4); hence, we believe that the observed difference in the activation of dissemination in BCa tumors may be cell context-dependent and definitely requires a more thorough analysis. On the other hand, our results demonstrate a substantial connection between the mesenchymal phenotype of CTCs and the NF-kappa B pathway. According to the NanoString gene expression assay and a literature search, 26 out of the 37 genes upregulated in mesenchymal-CTC patients in comparison to epithelial-CTC patients are implicated in NF-kappa B signaling at various levels of the transduction pathway (Figure 3A) and demonstrate a complex network of interactions at the protein level (Figure 3B). Importantly, the enrichment of NF-kappa B-related transcripts was consistently observed when we applied stricter gene inclusion criteria and limited the analysis to a set of 330 genes with the highest expression (log2 mean count of a gene in all samples >9; Table S5).

Figure 3B depicts a protein-protein association network generated using the STRING tool; the edge (line) coloring defines the type of interaction: blue, from curated databases; pink, experimentally determined; green, gene neighborhood; red, gene fusions; dark blue, gene co-occurrence; yellow, text mining; black, co-expression; violet, protein homology.

NF-kappa B signaling is a potent regulator of numerous vital physiological processes, including survival, inflammation, and immune responses [37]. The activation of the pathway is mediated by numerous receptors.
Our enriched set (Table 1) includes genes encoding both specific ligands (APRIL (TNFSF13)) and receptors (TLR7, TNFRSF11A, IL1RAP, and IFNAR1), as well as universal adaptor proteins (FADD, TICAM1, and TRAF6) that facilitate the transduction of the signal from the receptors in the cell membrane to the effectors in the nucleus. Namely, we observed the upregulation of transducers involved in the canonical cascade (BCL10 and IKBKG) as well as in the Toll-like receptor-mediated activation of NF-kappa B signaling (PBK, IKBKE, and TBK1). Moreover, the enhanced expression of the MAP2K1 gene points to a possible role of ERK-mediated stimulation of NF-kappa B signaling in tumors with mesenchymal CTCs [38]. On the other hand, the activity of NF-kappa B is known to be regulated by the proteasome and by ubiquitin-mediated proteolysis. Here, we report the upregulation of genes that are implicated in the ubiquitin-proteasome system (PSMD7, PSMB10, and CYLD) [39] and in the autophagy cascade (ATG10) [40-42]. Eventually, we observed an increased expression of one of the subunits of the NF-kappa B transcription factor, p65 (RELA), as well as of two other co-operating transcription factors (IRF3 and STAT6). The enhanced signaling resulted in the upregulation of five target genes: CCRL2, TAPBP, BIRC5, ELK1, and CCND3.

The NF-kappa B pathway is a well-known driver of EMT during both embryonic and tumor development [37]. In general, constant stimulation of this pathway in cancer cells results in abnormal proliferation and differentiation, enhanced metastasis, and treatment resistance [43]. In breast cancer, NF-kappa B directly regulates the transcription of genes encoding EMT-inducing transcription factors [44]. In fact, increased expression of NF-kappa B is a common feature of breast cancer cell lines and tissues, correlating with intensified activation of both the canonical and the non-canonical pathway [45-47]. What is more, several reports point to an interesting association between NF-kappa B and HER2 [47-49], with evidence for predominant NF-kappa B activation in ER−/HER2+ breast tumors [45,49].

Our data revealed another interesting pattern of enrichment, with the upregulation of several NF-kappa B-related genes that are particularly involved in the positive regulation of type I interferon production (Table S2). The cross-talk between NF-kappa B and Toll-like receptor (TLR)-mediated signaling results in an increased pro-inflammatory response that is additionally enhanced in an autocrine and paracrine manner by a positive feedback loop (Figure 3A) [50,51]. Notably, among the NF-kappa B-unrelated genes, we found markers of platelet activation (CD63 and CD84) [52,53], which is in line with literature reports on the co-operation between platelets and CTCs in the induction of EMT and metastasis formation [54,55]. Of note, tumor dissemination may also be supported by other populations of cells within the intratumoral stroma. The elevated NF-kappa B activity may result from an increased release of pro-inflammatory cytokines by macrophages at the tumor site [56-58]. In fact, NF-kappa B seems to be involved in the polarization of tumor-associated macrophages [57].

We have previously reported the negative prognostic significance of CTCs of mesenchymal phenotype in BCa patients [21]. Due to the low number of patients in this cohort, in the current study we analyzed the impact of the genes linked with mesenchymal CTCs in the TCGA BCa cohort.
In fact, in the TCGA data, 5 out of the 38 genes of our interest were associated with a worse prognosis (overall survival or risk of recurrence), namely, PSMD7, C2, IFNAR1, CD84, and CYLD. None of the corresponding proteins is currently included in the routine histopathology of breast tumors; thus, they need to be validated at the protein level in a large cohort of patients in order to prove their clinical importance and diagnostic applicability.

Patients

The study group consisted of 35 breast cancer patients staged I-III, who had undergone surgical treatment at the Medical University Hospital in Gdansk between April 2011 and May 2013. The study was approved by the Ethical Committee of the Medical University of Gdansk (NKBBN 94/2017), and informed consent was collected from all participants. Patients were characterized by different clinicopathological parameters (Table S4), with a particular focus on CTC status (negative, n = 12, or positive, n = 23) and the molecular phenotype of CTCs (epithelial, n = 14, or mesenchymal, n = 9), as described previously [26].

nCounter Gene Expression Assay

Extracted RNA (4 µl) was pre-amplified using the nCounter Low RNA Input Kit (NanoString Technologies, Seattle, WA, USA) with the dedicated Primer Pool covering the sequences of the 730 immune-related genes included in the nCounter PanCancer Immune Profiling Panel (NanoString Technologies). Pre-amplified samples were analyzed using the NanoString nCounter Analysis System (NanoString Technologies) according to the manufacturer's procedures for hybridization, detection, and scanning.

Data Analysis

For each tumor sample analyzed with the NanoString technology, the background level was estimated using the mean plus 2 standard deviations of the counts of the negative control probes included in the assay. Data were normalized using the geometric mean of the positive controls included in the assay and of the 4 most stably expressed housekeeping genes included in the PanCancer Immune Profiling Panel (ABCF1, EDC3, HDAC3, and CNOT4; expression stability assessed with NormFinder, SD range 173.5-228.4 counts). Background thresholding and normalization were performed using nSolver 4.0 software (NanoString Technologies). Low-expression genes (log2 mean count of a gene in all samples <6) were excluded, leaving 584 genes for further analysis. Subsequently, the genes differentiating each CTC status were selected on the basis of fold change in comparison to the control; fold change was calculated on the basis of the median normalized counts of the probe in each group. The following comparisons were performed: CTC-positive vs. CTC-negative; CTC-epithelial vs. CTC-negative; CTC-mesenchymal vs. CTC-negative; CTC-mesenchymal vs. CTC-epithelial. Genes with FC > 1 were considered upregulated; genes with FC < 1 were considered downregulated. Data were analyzed using the R statistical computing environment (3.6.1) [59]. Differences in gene expression between groups were analyzed using the Mann-Whitney U test with Benjamini-Hochberg correction for multiple comparisons; p-values ≤ 0.05 and FDR values ≤ 0.2 were considered statistically significant. For the differing genes, gene ontology was analyzed using the Functional Annotation Tool of DAVID Bioinformatics Resources 6.8 [33,34]. The EASE score, a modified Fisher exact p-value, was used to assess gene enrichment. Multiple testing was corrected using FDR correction. For the NF-kappa B-related genes, a protein-protein association network was depicted using STRING v11 [60].
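The authors performed this analysis in R; purely as an illustration of the same pipeline (reference-gene normalization, low-expression filtering, per-gene Mann-Whitney tests with Benjamini-Hochberg correction), a Python sketch on a mock count matrix could look as follows. The gene names, counts, and group labels are made up.

```python
# Illustrative re-implementation of the described analysis steps in Python
# (the study itself used R). The count matrix and group labels are mock data.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu, gmean
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
genes = [f"GENE{i}" for i in range(50)]
counts = pd.DataFrame(rng.poisson(200, (50, 23)), index=genes)  # mock probe counts
groups = np.array(["EPI"] * 14 + ["MES"] * 9)                   # CTC phenotype per sample

# 1) normalize each sample to the geometric mean of reference genes (mock refs)
refs = ["GENE0", "GENE1", "GENE2", "GENE3"]
ref_gm = counts.loc[refs].apply(gmean, axis=0)   # per-sample geometric mean
norm = counts / ref_gm * ref_gm.mean()           # rescale to a common level

# 2) drop low-expression genes (the paper used log2 mean count < 6)
norm = norm[np.log2(norm.mean(axis=1)) >= 6]

# 3) fold change of medians and Mann-Whitney U test per gene, BH-corrected
epi, mes = norm.loc[:, groups == "EPI"], norm.loc[:, groups == "MES"]
fc = mes.median(axis=1) / epi.median(axis=1)
pvals = [mannwhitneyu(mes.loc[g], epi.loc[g]).pvalue for g in norm.index]
fdr = multipletests(pvals, method="fdr_bh")[1]
result = pd.DataFrame({"FC": fc, "p": pvals, "FDR": fdr})
print(result.sort_values("p").head())
```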
Survival Analysis in the TCGA Cohort

RNA-seq (RNASeqV2, RSEM-normalized) and clinical data of the BRCA cohort were obtained from the TCGA portal [35,36] (data status of 28 January 2016). The group was limited to T1-3M0 patients, and records with missing clinical or expression values were excluded, leaving 877 out of 1098 BCa patients for the analysis. OS was defined according to the "days_to_death" variable for survival time and the "vital_status" variable for the event; DFS was defined according to the "days_to_last_follow-up" variable for survival time and the "person_neoplasm_cancer_status" variable for the event. For the genes of interest, the low/moderate status of gene expression was determined according to the 1st quartile (Q1) cut-off, while the low/high status of gene expression was determined according to the 3rd quartile (Q3) cut-off. For each gene, the expression status (low vs. moderate; low vs. high) was tested in both univariate and multivariate analyses including the clinical stage. Hazard ratios (HR) with 95% confidence intervals (95% CI) were computed using Cox proportional hazards regression in the R statistical computing environment (3.6.1) [59].
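Again, the original analysis was run in R; the following Python sketch with the lifelines package illustrates the same quartile-based grouping and univariate/stage-adjusted Cox regression on a mock data frame. All variable names and values are placeholders.

```python
# Illustrative quartile-based Cox regression (the study used R); mock data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 877
df = pd.DataFrame({
    "time": rng.exponential(1500, n),   # follow-up time, days (mock)
    "event": rng.integers(0, 2, n),     # 1 = death/recurrence (mock)
    "expr": rng.lognormal(5, 1, n),     # mock RSEM expression of one gene
    "stage": rng.integers(1, 4, n),     # mock clinical stage
})

# Low vs. moderate expression relative to the 1st quartile (Q1) cut-off
q1 = df["expr"].quantile(0.25)
df["moderate"] = (df["expr"] > q1).astype(int)

# Univariate and stage-adjusted (multivariate) Cox models
for covs in (["moderate"], ["moderate", "stage"]):
    cph = CoxPHFitter().fit(df[["time", "event"] + covs],
                            duration_col="time", event_col="event")
    hr = np.exp(cph.params_["moderate"])
    lo, hi = np.exp(cph.confidence_intervals_.loc["moderate"])
    print(f"{covs}: HR = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```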
Collisionally excited filaments in HST H$\alpha$ and H$\beta$ images of HH~1/2

We present new H$\alpha$ and H$\beta$ images of the HH 1/2 system, and we find that the H$\alpha$/H$\beta$ ratio has high values in ridges along the leading edges of the HH 1 bow shock and of the brighter condensations of HH 2. These ridges have H$\alpha$/H$\beta = 4 \to 6$, which is consistent with collisional excitation from the $n=1$ to the $n=3$ and 4 levels of hydrogen in a gas of temperatures $T = 1.5 \to 10\times 10^4$ K. This is therefore the first direct detection of the collisional excitation/ionization region of hydrogen right behind Herbig-Haro shock fronts.

The HH 1/2 system has a central source detected in radio continuum (see, e.g., Rodríguez et al. 2000) and a bipolar jet system, with a NW jet (directed towards HH 1) which is visible optically, and a SE jet (directed towards HH 2) visible only in the IR (see Noriega-Crespo & Raga 2012). HH 1 has a "single bow shock" morphology, and HH 2 is a collection of condensations, some of them also with bow-shaped morphologies (see, e.g., Bally et al. 2002). The emission-line structure of these objects has been studied spectroscopically, with 1D (Solf, Böhm & Raga 1988) and 2D (Solf et al. 1991; Böhm & Solf 1992) coverage of the objects. It should be pointed out that the HH 1/2 outflow lies very close to the plane of the sky, so that projection effects do not have to be considered when interpreting the observations of these objects. The spatial structure of the optical line emission has been studied at higher angular resolution with HST images (see, e.g., Schwartz et al. 1993; Hartigan et al. 2011).

In the present paper we describe a pair of new HST images of HH 1 and 2 obtained with filters isolating the Hα and Hβ lines. These images were obtained in consecutive exposures, so that they are not affected by proper motions (which become evident in HST observations of the HH 1/2 complex separated by more than a few weeks) nor by differences in the pointing, and they therefore yield an accurate depiction of the spatial distribution of the Hα/Hβ ratio in these objects. These images show effects that have not been detected before in ground-based studies of the emission-line structure of HH 1 and 2 (see, e.g., Solf et al. 1991 and Böhm & Solf 1992) nor in HST images of other HH objects (since HST Hβ images of HH objects have not been previously obtained). The paper is organized as follows. The new HST images are described in section 2. The spatial distribution of the Hα/Hβ ratio, the line ratios as a function of Hβ intensity and the distribution functions of the line ratios are presented in section 3. Finally, an interpretation of the results is presented in section 4.

THE OBSERVATIONS

The region around HH 1 and 2 was observed with the Hα (F656N) and Hβ (F487N) filters on August 16, 2014 with the WFC3 camera on the HST. The Hα image was obtained with a 2686 s exposure and the Hβ image with a slightly longer, 2798 s exposure. The images were reduced with the standard pipeline, and a simple recognition/replacement algorithm was used to remove the cosmic rays. The final images have 4130 × 4446 pixels, with a pixel size of 0″.03962. The images contain only two stars: the Cohen-Schwartz star (Cohen & Schwartz 1979) and "star no. 4" of Strom et al. (1985). These two stars have been used to determine astrometric positions in CCD images of the HH 1/2 region since the work of Raga et al. (1990), yielding better positions for HH 1 (which is closer to the two stars) than for HH 2.
We have carried out paraboloidal fits to the PSFs of these two stars, and find no evidence for offsets and/or rotation, which shows the excellent tracking of the HST during the single pointing in which the two images were obtained. Also, we have analyzed the Hα−Hβ difference images of the two stars, and find no offsets between the two frames.

The full Hα frame is shown in Figure 1, as well as blowups of regions around HH 1 and HH 2 in both Hα and Hβ. As seen in the top frame, the Hα map shows the extended collection of HH 2 knots (to the SE) and the more compact distribution of the HH 1 knots (towards the NW). The central frames of Figure 1 show the Hα emission of HH 2 (left) and HH 1 (right) at a larger scale. In the fainter Hβ emission (bottom frames of Figure 1) only the brighter regions of HH 1 and 2 are detected. We have defined two boxes (labeled A and B in the bottom frame of Figure 1) enclosing the regions of the two objects which are detected in Hβ. In the following section, the regions within these two boxes are used in order to study the spatial dependence of the Hα/Hβ ratio.

As discussed in detail by O'Dell et al. (2013), the F656N filter is contaminated by emission from the [N II] 6548 line, and both the F656N and F487N filters have contributions from the nebular continuum. Using the fact that at all measured positions within HH 1 and 2 the [N II] 6548/Hα ratio does not exceed a value of ≈ 0.35 (see, e.g., Brugel et al. 1981a and Solf et al. 1988) and the transmission curve of the F656N filter (see O'Dell et al. 2013 and the WFC3 Instrument Handbook), one finds a peak contribution of ≈ 2% to the measured flux. For estimating the effects of the continuum in the F656N and F487N images one can use the continuum and line fluxes obtained by Brugel, Böhm & Mannery (1981a, b) and the bandpasses of the filters to obtain estimates of ≈ 0.4 and 5% (for the F656N and F487N filters, respectively). Therefore, when interpreting the Hα/Hβ ratios obtained from our HST images, it is necessary to keep in mind that there is an uncertainty of ∼ 5% due to a possible spatial dependence in the Hβ line-to-continuum ratio within the F487N filter. As this uncertainty is ∼ 1 order of magnitude smaller than the Hα/Hβ ratio variations described below, we do not discuss it further.

THE Hα/Hβ RATIOS

Figure 2 shows the Hα map (right) and Hα/Hβ ratio map (left) for HH 2. To avoid having extended regions dominated by noise, it is necessary to place a lower bound on the line fluxes when calculating the line ratio map. We have chosen to calculate the ratios only in regions in which the observed Hβ flux is larger than I_0 = 5.4 × 10^−18 erg s^−1 pix^−1 (the same cutoff is later used for HH 1).

For calculating the intrinsic Hα/Hβ ratios we have applied the following reddening correction. We first calculate the observed ratios for all of the pixels with Hβ intensities larger than I_0 (see above) for the A and B boxes (shown in the bottom frames of Figure 1). For HH 2 we obtain a mean line ratio ⟨Hα/Hβ⟩_obs = 3.82, and for HH 1 an almost identical ⟨Hα/Hβ⟩_obs = 3.79 value. Considering an observed line ratio of 3.8 for both objects, comparing with the case B recombination cascade intrinsic Hα/Hβ ratio of 2.8 and using the average Galactic extinction curve, we obtain an E(B−V) = 0.27 colour excess. This value is somewhat lower than the E(B−V) ≈ 0.35 value deduced for HH 2 by Brugel et al. (1981a), using the method of Miller (1968), based on the fixed ratios between the auroral and transauroral lines of [S II] (i.e., not assuming a recombination cascade Hα/Hβ ratio).
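A minimal numerical cross-check of this colour-excess estimate (the extinction-curve coefficients k(Hα) and k(Hβ) below are representative values assumed here, not taken from the paper; the exact E(B−V) depends on the adopted curve):

```python
# E(B-V) implied by <Halpha/Hbeta>_obs = 3.8 versus the case B value 2.8,
# for an assumed pair of Galactic extinction-curve coefficients.
import numpy as np

k_Ha, k_Hb = 2.53, 3.61          # assumed A(lambda)/E(B-V) at Halpha and Hbeta
r_obs, r_int = 3.8, 2.8

ebv = 2.5 / (k_Hb - k_Ha) * np.log10(r_obs / r_int)
print(f"E(B-V) = {ebv:.2f}")     # ~0.31 here; the text quotes 0.27 for its curve
```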
In order to calculate the dereddened Hα/Hβ ratios, we therefore multiply the observed ratios by a factor of 2.8/3.8, basically assuming that the extinction towards HH 1 and 2 is position-independent. The dereddened Hα/Hβ ratios of HH 2 (see Figure 2) have values in the 2 → 6 range, with the regions of higher values corresponding to filamentary structures in the leading edge of the emitting knots (i.e., in the edges directed away from the outflow source). In order to illustrate the positions of these "high Hα/Hβ" regions, we have superimposed an Hα/Hβ = 4 contour on the Hα emission map (purple contour in the right frame of Figure 2).

Figure 3 shows the Hα map (bottom) and dereddened Hα/Hβ ratio map (top) for HH 1. We have calculated the ratios only for pixels with an observed Hβ flux larger than I_0 = 5.4 × 10^−18 erg s^−1 pix^−1 (i.e., the same cutoff used for HH 2, see above). The region with Hα/Hβ > 4 is a thin filament on the E side of the leading edge of HH 1 (see the purple contour on the Hα emission map in the bottom frame of Figure 3). It is clear that HH 1 shows a strong side-to-side asymmetry with respect to the outflow axis, as the SW region of the leading edge does not show high Hα/Hβ ratios (see the top frame of Figure 3). The Hα emission also shows a strong side-to-side asymmetry.

Figure 4 shows the dereddened Hα/Hβ line ratio as a function of the (observed) Hβ flux for all of the pixels with I_Hβ > I_0 (see above) for HH 1 (top frame) and HH 2 (bottom frame). It is clear that for low values of the Hβ intensity in both HH 1 and 2 we have a relatively broad distribution of line ratios (the width of this distribution representing the relatively large errors of the line ratio at low intensities) centered on the Hα/Hβ = 2.8 recombination cascade value. For pixels with brighter Hβ intensities, we see a distribution of Hα/Hβ ratios extending from ≈ 3 to larger values of ∼ 5 (for HH 1) or ∼ 6 (for HH 2). This result is seen more clearly in Figure 5, where we show the normalized distributions of the line ratios of pixels with I_0 < I_Hβ < I_1 = 2.5 × 10^−17 erg s^−1 pix^−1 (distribution f_1, top frame), of pixels with I_1 < I_Hβ < I_2 = 4.7 × 10^−16 erg s^−1 pix^−1 (distribution f_2, center), and of all pixels with I_2 < I_Hβ (distribution f_3, bottom frame of Figure 5, with appropriate pixels found only in HH 2). For both HH 2 (left column) and HH 1 (right column of Figure 5), we see that the distribution f_1 of the lower intensity pixels is approximately symmetrical, centered at an Hα/Hβ ≈ 2.8 line ratio. The distributions for higher intensity pixels (f_2 and f_3, see above and the central and bottom frames of Figure 5) start at values of Hα/Hβ ∼ 2-2.5, have a peak at a line ratio of ≈ 3.3 and have a wing extending to Hα/Hβ ∼ 5 for HH 1 and ∼ 6 for HH 2. In the following section, we show that these high Hα/Hβ ratios coincide with the values expected for collisional excitation of the n = 3 and 4 levels of H.

DISCUSSION

From our new Hα and Hβ HST images we can compute dereddened Hα/Hβ maps for the brighter regions of HH 1 and 2. For the reddening correction, we assume that the mean value of the Hα/Hβ ratio coincides with the recombination cascade value of 2.8, as found previously by Brugel et al. (1981a), who calculated the reddening correction with Miller's method, based on the ratios of auroral to transauroral [S II] lines. We find that in limited spatial regions the (dereddened) Hα/Hβ ratio has values of ∼ 4 → 6, which are inconsistent with the recombination cascade value.
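The map construction just summarised (Hβ flux cutoff, constant dereddening factor 2.8/3.8, Hα/Hβ = 4 contour level) reduces to a few lines; a minimal sketch, with `halpha` and `hbeta` as hypothetical 2-D flux arrays in erg s^−1 pix^−1:

```python
# Dereddened Halpha/Hbeta ratio map with the flux cutoff used in the paper.
import numpy as np

I0 = 5.4e-18            # Hbeta flux cutoff [erg/s/pix], same for HH 1 and HH 2
DEREDDEN = 2.8 / 3.8    # intrinsic case B ratio / mean observed ratio

def ratio_map(halpha: np.ndarray, hbeta: np.ndarray) -> np.ndarray:
    ratio = np.full(halpha.shape, np.nan)
    mask = hbeta > I0                    # compute ratios only above the cutoff
    ratio[mask] = DEREDDEN * halpha[mask] / hbeta[mask]
    return ratio

# The "high Halpha/Hbeta" filaments are then the pixels with ratio > 4 (the
# purple contours of Figures 2 and 3); pixels near 2.8 follow the
# recombination cascade value.
```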
These high Hα/Hβ regions are filaments along the leading edges (i.e., the edges away from the outflow source) of the brighter emitting regions of HH 1 and 2 (see Figures 2 and 3). Raga et al. (2014) show that the r_αβ = Hα/Hβ ratio for a "case B" cascade fed by collisional excitations from the ground state of hydrogen has the approximate form:

r_αβ = 3.40 e^{E_43/(kT)} + 0.95 / (1 + 5 × 10^4 K/T)^4 ,   (1)

where k is Boltzmann's constant and E_43 is the energy difference between the n = 4 and n = 3 energy levels (so that E_43/k = 7680 K). The first term of this functional form has a temperature dependence derived from the ratio of the n = 1 → 3 and n = 1 → 4 collisional excitation coefficients (assuming temperature-independent collision strengths), and the second term is a correction necessary to match the results of a 5-level, collisionally fed cascade matrix description of the hydrogen atom in the T = 10^3 → 10^6 K temperature range (see Raga et al. 2014). It is clear that the functional form of r_αβ (see equation 1) has high values for low temperatures, and has an asymptotic value of 4.35 for T → ∞. The observed filament ratios of 4 → 6 are therefore reproduced by equation (1) for temperatures T ≈ 1.5 → 10 × 10^4 K, while lying clearly above the recombination cascade value of 2.8.

[Figure 5: Distribution functions of the number of pixels within Hα/Hβ ratio bins for HH 2 (left) and HH 1 (right), for the three intensity ranges f_1 (top), f_2 (center) and f_3 (bottom) defined in the previous section; the dashed vertical line marks the recombination cascade ratio Hα/Hβ = 2.8. The f_1 distributions are approximately symmetrical and centered at ≈ 2.8, while f_2 and f_3 show extended wings to higher values of Hα/Hβ.]

This evidence that we are observing collisionally excited Balmer lines, together with the fact that the high Hα/Hβ regions are restricted to the leading edges of the outward-moving condensations of HH 1 and 2, is quite conclusive evidence that we are observing the region of collisional excitation of H lines right after the shock waves driven into the surrounding medium by the condensations. Most of the Hα emission, however, comes from a region further away from the shock, in which the Balmer lines are produced through the standard recombination cascade (as evidenced by the Hα/Hβ ∼ 3 ratios, see Figures 2 and 3). The theoretical prediction of these two regions of Balmer line emission (a collisionally excited Balmer line region immediately after the shock, and the recombination region with Balmer lines dominated by the recombination cascade) in HH shock wave models is already mentioned by Raymond (1979), and the Hα emission from the two regions was studied in more detail by Raga & Binette (1991). These two regions are of course present in all shock models (for example, in the plane-parallel, time-dependent shock models of Teşileanu et al. 2009). In non-radiative shocks observed in some supernova remnants or in pulsar cometary nebulae, the observed emission comes exclusively from the region of collisional excitation right behind the shocks (see, e.g., the review of Heng 2010).
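A quick numerical evaluation of equation (1) illustrates the temperature behaviour invoked in this discussion (a minimal sketch; the temperatures chosen are illustrative):

```python
# Collisionally fed case B Halpha/Hbeta ratio of equation (1).
import numpy as np

E43_OVER_K = 7680.0      # K, energy gap between the n = 4 and n = 3 levels

def r_alpha_beta(T):
    return 3.40 * np.exp(E43_OVER_K / T) + 0.95 / (1.0 + 5.0e4 / T) ** 4

for T in (1.5e4, 5.0e4, 1.0e5, 1.0e8):
    print(f"T = {T:.1e} K  ->  Halpha/Hbeta = {r_alpha_beta(T):.2f}")
# Gives ~5.7 at T = 1.5e4 K and ~3.9 at T = 1e5 K, close to the observed
# 4 -> 6 filament values, and approaches 3.40 + 0.95 = 4.35 for T -> infinity.
```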
In HH objects, the only previous observational evidence of the emission from the immediate post-shock region (as opposed to the emission from the recombination region) were the Hα filaments seen in HST images of some bow shocks, notably in the HST images of HH 47 (Heathcote et al. 1996), HH 111 (Reipurth et al. 1997) and HH 34. However, as only Hα was observed, it was not possible to guarantee that these filaments did correspond to the region of collisionally excited Balmer lines. Our new Hα and Hβ images show for the first time, in a quite conclusive way, a detection of the immediate post-shock region of HH objects (in which H is being collisionally ionized and the levels of H are being collisionally excited). The detection of this region provides a clear way forward for developing models of HH bow shocks, in which the position of the shock wave relative to the recombination region is directly constrained by the observations.

We should note that throughout this paper we have assumed that the extinction is uniform over the emission regions of HH 1 and 2. In principle it could be possible that foreground structures in the vicinity of the objects might produce changes in the extinction on angular scales comparable to the size of the objects. However, estimates of the density of the pre-bow shock material of HH 1 and 2 (based on observations of the post-shock density and on plane-parallel shock models, see, e.g., Hartigan et al. 1987) give values ∼ 100-300 cm^−3. Clearly, such a low density environment will not produce appreciable extinction on spatial scales comparable to the size of the HH objects. Because of this, if one wants to attribute the observed changes in the Hα/Hβ ratio to an angular dependence of the extinction, it is necessary to assume that still undetected, sharp-edged, high density regions are present in the immediate vicinity of HH 1 and 2.

Support for this work was provided by NASA through grant HST-GO-13484 from the Space Telescope Science Institute. AR and ACR acknowledge support from the CONACyT grants 101356, 101975 and 167611 and the DGAPA-UNAM grants IN105312 and IG100214.
Social Citizenship, Democratic Values and European Integration: A Rejoinder

This Forum debate has gone way beyond my expectations and hopes. I thought that commentators would mainly address my proposals on enhancing rights and introducing duties. The conversation has instead extended to my diagnosis as well, to the rationale which lies at the basis of my prescriptive ideas. By focusing on starting points, the forum has thus brought to light different perspectives and styles of reasoning around citizenship and even broader political questions. With hindsight, I should have spelled out more carefully my basic assumptions. But there is time to remedy this now - and not just for the sake of this particular discussion. I am in fact convinced that a closer and more systematic dialogue between empirical, normative, legal and social theorists would be a welcome and beneficial innovation, a way to contrast excessive disciplinary perspectivism and the related risks of analytical lock-ins.

fostered identity and community ties - both having a strong 'bonding' and emotional component.1 I see EU citizenship as a novel step in this long-term development of rights-based citizen empowerment. But I suggest that the integrative and legitimating potential of EU citizenship is not only weaker than its national counterparts, but also rife with potentially divisive consequences, due to its isopolitical nature. I do acknowledge that workers' mobility can bring and has indeed brought substantial economic advantages. But functional arguments and evaluations play a secondary role in my diagnosis. And while I appreciate Richard Bellamy's friendly effort to extract an unarticulated moral view from my reasoning (a form of cosmopolitanism), my own effort has gone in a different direction: analysing EU citizenship as a political instrument which - regardless of its functional or normative rationale - can produce (or not produce) political cohesion and stability. My questions rest on a realist conception of politics, conceived as the sphere whose foundational task is to 'keep the community together' (of course under democratic constraints in the cases discussed here), and I look at citizenship in this perspective.

Bellamy goes some way in my direction when he defends the nation state (and thus boundaries) in instrumental terms, i.e. as the most effective system and territorial container devised so far for safeguarding responsiveness, accountability and equal rights. But my perspective takes an additional step by asking: what are the empirical conditions of possibility for nation-building (or EU-building) and for the political viability over time of the democratic state (or the Union)? And what role can (EU) citizenship play in this context? Many commentators have either not captured or not appreciated my empirical perspective. Christian Joppke considers my association between national citizenship and political bonding/loyalty as a 'questionable idealisation' and dismisses 'affectual and normative attitudes' towards state authorities as 'delusional at best'. What is the ground of such a severe takedown? If I understood him correctly, Joppke espouses a state theory whereby the protection logic of national citizenship has mainly served to coat the elementary state function of providing security with 'flowery allegiance and loyalty'. As factual judgements, these statements sound quite daring and far-fetched to me. The war-welfare nexus has indeed been highlighted by a wealth of comparative historical works.2
But even if and when social programmes were originally introduced to 'coat' the warfare goals and strategies of the nation state, their 'protection logic' has subsequently acquired an autonomous dynamic, which in most cases started to generate genuine bonding, loyalty and diffuse support. If this is the historical case, I fail to see why puzzling about the integrative potential of EU citizenship should be "a category mistake". It is precisely by using this category that we can single out the political differences between state-building and EU-building and identify the limits and constraints of the latter compared to the former.

Joppke criticises my starting point also from a normative point of view, defining as 'retrograde' my remarks about mobility rights being restricted to EU citizens and not (fully) to third country residents. To begin with, this is today a fact, with factual implications that need to be captured and empirically analysed. Second, as aptly noted by Rainer Bauböck, the dimension of exclusion inheres in any concept denoting membership and inclusion. It is true that, from a normative perspective, the balance between inclusion and exclusion must rest on principled justifications. But, again, my metric is realist-political. Citizenship integrates and legitimises political power to the extent that it 'bounds', that it is a recognisable marker of an insiderhood to which certain selective advantages are associated. I am not formulating a value judgement here; I am not saying that things ought to be this way. What I am saying is that we have empirical evidence that citizenship, when operating within a politically bounded space, has a potential to integrate and legitimise. The 'good' in which I am interested is the political cohesion of the EU. In this sense, and only in this, I make a value choice. But it is only a very weak 'value-related' choice à la Max Weber. I merely believe that it is interesting and important to raise questions about the viability of the EU, given its undeniable conspicuousness as a political entity and its increasing role in shaping people's life chances. Nothing more or less.

The contrast between the empirical and the normative perspective is best exemplified by Frank Vandenbroucke's and Andrea Sangiovanni's well-articulated contributions. Both outline distinct conceptions of justice for EU solidarity and free movement in particular. And they both embark on this exercise because they deem my reasoning lame (my interpretation), peripheral (Vandenbroucke) or lacking (Sangiovanni) in respect of the more 'foundational' debate about justificatory principles. For them, the basic challenge which I dodge is how to address the question of an ideal (presumably rational and informed) citizen asking, in Sangiovanni's words, 'why should I accept or enhance EU citizenship?'. I concede that my empirical and realist arguments would have little traction indeed were I ever to engage in a philosophical disputatio of this sort. But would they remain equally unpersuasive if I engaged in a debate with a real-world Europhile politician struggling every day with the problem of consensus? In this situation, it would probably be the philosopher's view that has little traction and might be considered unfit for pragmatic purposes. It is, indeed, a matter of perspective as well as of interlocutors.
I locate myself in the real situation of late-2010s Europe; I notice that the fact of free movement causes the fact of Euroscepticism; I surmise that this dynamic may well jeopardise the political stability of the EU as such; I draw on the toolkit of comparative politics and public policy analysis and suggest that a recrafting of EU citizenship might contain this threat. In addition to my fellow political scientists, my interlocutors are essentially the policy-makers. Yes, I confess: the elite. Not because I am dismissive of 'the people' and cynical about the stylised processes of democratic will formation elaborated by political philosophers. But rather because I think that elites are and should not only be spokespersons of their voters, but responsible leaders as well (remember the polemic between Edmund Burke and his Bristol electors?). And, in my perspective, 'keeping the community together' in the face of pluralism and disagreement (and hopefully building constructively on both) is a key task of responsible leaders.

As self-contained conceptions of EU social justice, I do find Sangiovanni's and Vandenbroucke's arguments coherent and largely convincing (with some caveats, starting from those raised by Bauböck). They have an academic, but also a political relevance, to the extent that they can provide valuable symbolic resources to policy-makers puzzling about problem-solving and consensus-building. But - as both authors obviously know - the public acceptance of these arguments cannot be taken for granted. What can be done if there is disagreement? In the philosopher's perspective, one should probably move up one level and interrogate those philosophical doctrines about political justice which specialise in principles on how to fairly manage disagreements. This regress ad infinitum is however of little use for real-world politics and politicians, struggling with conflicts here and now. Without detracting from the importance of principles and normative reasoning, empirical political theory shifts the focus to how institutions and policies relate to system performance and diffuse support. Collective acceptance for the right reasons remains a desirable ideal goal and may even result in greater stability. But, in Weber's wake, empirical political theory conceives of legitimation as a more complex property and process, resting not only on reasons (normative and instrumental) but also on affectual and traditional orientations. It is this mix of motives that allows a real-world polity to survive what Ernest Renan called the "daily referendum" on associative life and collective institutions.

The debate has revealed another misunderstanding that I may have inadvertently originated in my initial contribution and that needs to be cleared up. Joppke has raised the worry (which has resonated in other comments as well) that my diagnosis and proposals may bring ammunition to the enemy, i.e. 'populist demonology'. Let me be crystal clear: in acknowledging the fact of Euroscepticism and the profusely documented increase of chauvinist orientations among European voters, I certainly do not imply that one must be indulgent towards such phenomena, not least because of their manipulative character. On the other hand, a mere judgement of fact cannot be accused of buying into the enemy's views.
And while I do agree with Dorte Martinsen that researchers should concentrate on fact-finding and perhaps even engage directly 'with the tensions described, be they mainly perceived or real', I must be able to use descriptive categories such as 'stayers' or 'movers' and to analyse observable social and political tensions between them without being accused of covert intelligence with the enemy.

The most appropriate and fruitful conclusion of this discussion on fundamentals is a plea for mutual understanding and collaboration between normativists and empiricists. What I have in mind is not just a modus vivendi, but the construction of an overlapping consensus whereby: 1) each side makes an effort to acknowledge an equal, if obviously different, theoretical relevance, purchase and autonomy on the other side; 2) both look more closely into each other, especially when normativists make descriptive or causal arguments and empiricists deal with values or undertake political or policy evaluations. To some extent this construction is already under way.3 I find that it is a challenging enterprise, opening novel avenues of research especially for younger scholars.

Citizenship, democracy and European integration

Magnette's distinction between sympolitical and isopolitical citizenship rights has proven very useful to frame the entire debate. It has also pushed some commentators to focus on the political dimension of citizenship - equal participation rights to democratic self-rule. Sandra Seubert is correct in pointing out that I have not adequately addressed this dimension in my historical reconstruction and diagnosis. The European project, Seubert argues, ought to be voluntarily chosen by citizens who consider it as responding to common concerns. If this is not the case, as noted also by Kostakopoulou, then my proposals would just reinforce the problematic logic that has driven European integration so far: buying consensus by delivering tangible advantages for particular groups. Van Middelaar has defined this logic as the Roman strategy of EU consensus-building through panem et circenses - and without even reaping the full benefits of this.4

Does my realist perspective inevitably make me a Bismarckian in disguise or, at best, an elitist and paternalist liberal-democrat? Probably yes, if the starting point is a normative preference for participatory democracy based on individual equality and freedom under bottom-up, self-given laws. But that is not the only possible starting point. When I became a political scientist, I started to appreciate 'Schumpeter's other doctrine', i.e. the so-called competitive theory of democracy, which, in my reading, is not an elitist juxtaposition to the participatory view. It rather corrects the latter by bringing back into the democratic scene the important figure of the (would-be) elected leader and by drawing attention to the electoral logic as such. In the real world, free elections inescapably activate a quid pro quo dynamic whereby whats (policy programmes inspired by different values and ideologies) are exchanged for whos (votes in support of competing political leaders promising whats). On this view, political citizenship confers an equal (if minimal) power resource - the individual vote - which can be spent during electoral exchanges. Democratic rights of political participation logically presuppose civil rights and are in their turn instrumental for the acquisition and defence of social rights.
Once the whole package is in place, the famous Marshallian triptych generates mutual synergies; citizenship not only acquires a self-sustaining equilibrium but becomes a unique instrument for taming and controlling vertical power through the multiplication of the horizontal powers and endowments of citizens, in their various social roles and life situations. The keystone of this system is sympolitical closure. Who gets what, how and when is the result of domestic democratic politics, which produces collectively binding sovereign decisions. Domestic markets - for goods, services, capital and labour - can of course be (made) open. But key national decisions result from citizens' endogenous preferences on how to manage the consequences of openness and define/redefine its boundaries. My conclusion is not dissimilar from Seubert's (democratic empowerment is the core), but on my view the core is derived from empirical, not normative theory.

Gradually, and to some extent creepingly, the EU has lifted the sympolitical keystone. Isopolitical integration has caused increasing cross-system externalities which can no longer be democratically managed at either the national or the supranational level. The EU is today a quite peculiar political system which defies all our analytical categories. We say it is "far from federal". But in certain policy areas regulatory standardisation linked to free movement has gone way beyond limits that historical federations (such as the USA or Switzerland) have never dared to trespass. Swiss cantons still enjoy wider margins of residency-based 'discrimination' than EU member states. In the US it is true that 'states cannot select their citizens', especially when it comes to welfare, as Martin Seeleib-Kaiser reminds us. But they can, for example, charge higher fees to out-of-state students applying to state universities and delay residence requests made by students for the mere purpose of paying lower fees. The Court of Justice of the European Union (CJEU) has become a hyper-federal watchdog of EU law and its supremacy over national law - with serious social consequences, as correctly highlighted by Susanne Schmidt. Another indicator of hyper-federalism is the extent to which some policy decisions are delegated to non-majoritarian institutions with very wide regulatory autonomy (e.g. as regards state aid, competition, or banking supervision). It is true that this institutional architecture has resulted from 'demoicratic' procedures and decisions in the past (the CJEU was born from the Rome Treaty, the ECB from the Maastricht Treaty, and so on). But the fact is that today such institutions find themselves far removed from the basic form of democratic control: the vote of individual citizens. In some other core areas of state power (e.g. fiscal policy: taxing and spending) we are under the illusion that the EU only rests on intergovernmental coordination. But we use intergovernmentalism as an indicator of inter-nationalism, in Bellamy's sense: a two-level game in which national citizens mandate their governments to negotiate inter-national agreements under the implicit assumption that subsequent decisions under these agreements remain responsive and accountable to national citizens. This is no longer the case.
Under the reformed Stability and Growth Pact, the Commission's decisions on macroeconomic imbalances or budget deficits (decisions which may have huge consequences for ordinary citizens) can be rejected only through a reverse qualified majority rule, which has been (correctly in my view) equated with 'minority rule'.5 I am afraid that the EU has long ago ceased to conform to that 'republican inter-nationalist' blueprint praised by Bellamy. And I think this also obtains for the intuitively appealing demoicratic formula of 'governing together, but not as one'.6 If my diagnosis is correct, in key policy areas the EU has already become a powerful 'one', in which some demoi (not to speak of some citizens) are more equal than others.

What are the consequences of this opaque regime (which we find very hard to define in terms of democratic theory) for the Marshallian triptych described above? The least that we can say is that the new regime has entirely destructured the coherence of the triptych and heavily undermined its effectiveness and even viability. Strangely enough, this situation has been endogenously generated. Democratic sympolitical decisions originally authorised isopolitical standardisation of economic and civil rights. Such decisions have also deliberately transferred some sympolitical sovereignty to the supranational level. The latter has gradually undermined the content and quality of domestic social rights. The hands of national citizens have been tied: in certain domains their votes have become ineffective or are no longer requested. It is unclear which majorities prevail; in some cases the rules even allow minorities to prevail. A full account of how we got here is way beyond the scope of this rejoinder.7 Empirical political theory suggests that to some extent we have been victims of unintended consequences and perverse effects of institutional logics. We should also be careful not to neglect the enormous advantages that integration has produced: not only more aggregate welfare, but also robust safeguards for peace and security. As noted by Bauböck, the EU was born to anchor the post-war system of fragile and shattered democracies. And still today we badly need it to secure the conditions of possibility for democracy in Europe. I would add a second consolation. Political supranationalisation has partly served - especially in certain member states - as a beneficial constraint on irresponsible domestic choices in taxing and spending and as an incentive to engage in responsible strategies of functional and distributive rationalisation. There were important cross-national variations in the coherence and balance of the Marshallian triptych, and some national configurations did need significant corrections, especially in terms of financial duties (see below). The bottom line of my reasoning is, however, that the EU citizenship regime(s) are currently skewed and unstable. Let me then turn to the question of what can be done, focusing on one particular instrument: EU citizenship in its social and duty components.

Caring Europe, my proposals and the 'holding environment'

Agreeing with my diagnosis about a growing tension between stayers and movers, Van Parijs identifies three fundamental strategies of response. The first ('all movers'; we could also call it 'more of the same') consists in 'converting as many stay-at-homes as possible into movers'.
Since a total conversion would be obviously impossible, let us say that this strategy should rest on persuading the stayers to internalise the functional and normative rationales of mobility as a collective benefit. But empirical evidence tells us that an increasing number of stayers do not (or no longer) buy into that view. The 'all movers' strategy is not a solution, but an aggravation of the political problem. The second strategy is 'retreat', i.e. curtailing those isopolitical rights that cause the problem. I did not discuss retreat in my introduction, but yes, I believe that there is room for some steps in this direction.8 I fully agree, for example, with Schmidt that limits should be posed to the judicialisation of citizenship. I also think that the mobility regime can be partially reconfigured in a restrictive direction through secondary legislation alone - no Treaty changes needed. The third strategy is 'Caring Europe', which was first submitted to EU leaders in exactly this wording by a group of scholars (myself included) during the UK presidency of the EU in 2005, under Tony Blair.9 The political rationale of Caring Europe is not Bismarckian. And while this strategy alone cannot remedy the loss of individual democratic control, it can indeed kill three birds with one stone: 1) it can backstop the centrifugal, Eurosceptic dynamics as well as the destabilisation of the Marshallian triptych; 2) it can safeguard the functional and social justice advantages ingrained in free movement; 3) it can contribute to the overall durability of the EU polity, thus preserving the otherwise vulnerable pre-conditions of peace and democracy in Europe (Bauböck's argument).

The Caring Europe strategy has precisely informed my concrete proposals, so let me now revisit them in the light of the debate. Both Seeleib-Kaiser and Ilaria Madama underline that there is already more ground than meets the eye for implementing some of my proposals, and that the Commission is well aware of the need to integrate stayers into the mobility and social agenda of the EU. This should at least partly overcome the scepticism of Martinsen, who is worried about the lack of time and political support for my proposals to materialise swiftly. To a large extent, my proposals merely go in the direction of a political rationalisation of the status quo: reaping all the consensus-building potential of those instruments that are already available. One might ask: if it is so easy, why has it not been done already? The answer lies in the level at which such decisions are taken and the interests/views of decision-makers at that level. Making sure that the EU role can be captured at the street level and "in the last mile", or introducing a social card, is not today European Council stuff. These nitty-gritty provisions are decided by the lower echelons of EU and national bureaucracies, primarily interested in administrative and practical details. Last-mile implementation is under the radar of local politicians ready to capture the credit for any panes or circenses accruing to their voters. The integrative and legitimising potential of my proposals should be brought to the attention of top leaders, those who are ultimately responsible for the EU's stability and durability.
The launch of a social card for accessing all the already existing co-funded programmes of the EU that provide advantages to all citizens, whether stayers or movers (as well as the enhancement and greater visibility of the external protection advantages of the EU passport), should be promoted by top leaders and could be done rather easily. The introduction of a voucher scheme (and I like Theresa Kuhn's idea of using in some way the label 'mobility bonus') and of a universal skills guarantee (maybe also a 'children guarantee') require sympolitical agreement. But the skills guarantee is already on the agenda: it could well be deliberately crafted so as to maximise its visibility to the stayers. Some commentators (Sangiovanni, Vandenbroucke, Hermann, Hemerijck) have rightly noted that mobility may generate losses for the stayers not only of the countries of destination, but also of the countries of origin (e.g. through brain drain). Here the solution could be an active involvement of the EU in sponsoring 'return mobility' programmes. The Central and Eastern member states have already launched national initiatives in this direction to bring back home the 'drained brains' and to help the relocation of their nationals residing in the UK. EU complements to such initiatives would be a very good idea. A sympolitical consensus on a dedicated EU insurance scheme for mobile workers is more difficult to piece together, I acknowledge this. This proposal has been around for many decades, without attracting the attention it deserved. What is required here is a shift from functional to political attention, in a context of increasing contention about mobility. A similar (and more demanding) shift is needed also for the possible introduction of an EU fund against cyclical unemployment. Here the obstacles concern not only political consensus-building, but also epistemic convergence, given the currently prevailing obsessions about 'moral hazard' on the side of ordoliberal elites and experts. More than a century of experience with mass social insurance against unemployment at the domestic level (initially opposed precisely on moral hazard grounds) should indicate, however, that there are ways of containing the risk and that the risk itself is not so high after all.

Some commentators have themselves made additional proposals in the logic of a Caring Europe. There is no space to enter into the details, and I do share the logic (if not all the details) of such additional suggestions. I would like to briefly comment, however, on the more ambitious strategy outlined by Vandenbroucke and Anton Hemerijck about moving towards a European Social Union of some sort.10 Under this approach, the core of social sovereignty should remain at the national level, where redistributive issues can still largely (but not entirely) be dealt with via national sympolitical decisions. In Vandenbroucke's contribution, one task of the Union should be to make sure that member states do guarantee (via binding constraints or surveillance?) sufficient social provisions and legal minimum wages for whoever legally resides within their territory. In Hemerijck's contribution, the Union should essentially provide a 'holding environment' for an effective functioning of national social protection systems. If I understand him correctly, Hemerijck espouses a 'softer' overall approach, in the logic of the Lisbon and EU2020 agendas, which now underpin the newly created European Pillar of Social Rights.
And he is not sure whether it is essential for the EU to claim political credit for its institutional scaffolding. In addition, he feels halfway between the inter-national position of Richard Bellamy and my alleged supra-national position. But as I argued above, supranationalism is already with us, and rather 'hyper' in some policy areas. Taking it apart - at least to a certain degree - may be functionally and normatively desirable. But is it institutionally feasible, short of a financial/monetary catastrophe? Brexit is teaching us how difficult it is for member states to disentangle themselves from the EU in ways which are decently reasonable in normative and instrumental terms. In this sense, I fully agree with Bauböck that the EU has become a community of - 'prosaic and not at all romantic' - destiny. It is the famous historical institutionalist argument about the temporal quasi-irreversibility of complex institutions (you cannot put the toothpaste back into the tube once you have squeezed it out). My doubts about Hemerijck's softer and semi-internationalist notion of a socially friendly 'holding environment' (HE) are fourfold. First, would it imply a partial dismantling of the supranationalist excesses that we now have (as proposed, among others, by Fritz Scharpf)?11 Would this HE essentially be a top-down construction promoted by enlightened leaders, technocrats and experts? Is it realistic to expect that an HE would reinforce 'loyalty to the EU as a common possession of a union of welfare states' in the eyes of voters already mobilised by anti-EU parties? And finally, how can we manage the dangerous and destructive politicisation that free movement has already triggered off? My modest proposals for the short term are motivated by these latter developments. But also for the long term, I think that we should definitely have a plausible and deliberate legitimation strategy for the EU (even as a holding environment), which will never be effective without at least a modicum of "Roman policies" (i.e. resource transfers).

What about duties?

The question of duties has remained somewhat in the shadows of the debate. In my initial contribution I had myself been cautious and modest on this front. The link between duties, and especially tax-paying duties, and legitimacy is complex and full of strains. Many of the existing Eurosceptic parties were born as anti-tax parties. If our aim is to enhance the integrative potential of citizenship, we should tread very lightly on this terrain, adopting, as I suggested, a nudging rather than a binding strategy. Since Joppke has launched an attack on the very idea that citizenship ought to imply duties, I feel a duty to respond. I understand that in normative and legal theory there is an articulated debate on this issue. I do not enter into this debate but will try to summarise my realist approach, in the hope of making my normativist colleagues aware of the essentials of the empirical theory on rights and duties. The production of political goods (policies and generalised compliance) requires 'extractions' from the members of the territorial community, the most obvious exemplars of which have historically been conscription and taxes. Are these extractions part of the citizenship package? Definitely yes, in my perspective. As the etymology of the term clearly suggests, being a citizen means being a member of a civitas, a legally constituted collectivity.
Since extractions are a precondition for the survival of the latter, a citizen cannot avoid the duties of membership which inhere in her very status as such. Fulfilling one's duties (which also and predominantly means, in ordinary life, respecting the rights of fellow citizens and the prerogatives of the authorities) is key for the success of the "daily referendum" on the political community. Without generalised compliance, political stability is at risk. The formal titularity of a right is a precondition for its actual exercise. But the exercise is effective only to the extent that there is both horizontal (on the side of other citizens) and vertical (on the side of the authorities) compliance, i.e. the observance of those duties which are correlative of rights. The correspondence of rights and duties is especially important in the case of social entitlements, which entail financial resources. As mentioned above, in various countries the increasing gap between the actual fruition of social entitlements and tax/contributory duties or compliance (e.g. through evasion or the black economy) has led to acute sustainability problems for the welfare state. To a significant extent, such problems have also resulted from irresponsible political choices, i.e. the conferral of entitlements not underpinned by adequate duties of financial participation.

Why do citizens fulfil their duties? In my perspective, this is immaterial. Some may do so 'for the right reasons', some out of habit, custom, or romantic affection. As I said above, in real-world polities, legitimacy rests on a mix of motives. Is the correspondence between rights and duties the product of a coherent historical trajectory and deliberate strategy? Not at all. Citizenship is a symbol that came gradually to encompass pre-existing national patchworks of rights and duties, got intertwined with the parallel symbol of 'nationality' and turned into a basic status, that of 'having rights to have rights' within a bounded space. The symbol over-emphasised the rights side of membership, but it always implied a second side, i.e. the duty to accept duties. It is certainly true that the substance of the citizenship package has been gradually extended to all legal residents (with the key exception of sympolitical participation rights). But as long as state boundaries remain a fact, the status of citizenship entails a vertical empowerment vis-à-vis territorial authorities which aliens or denizens do not have, and through which citizens can define and redefine the rules of access and the content of the denizenship status itself.

Even if ordinary people do not visualise this clearly, the EU is a bounded territorial collectivity. Although derivative of national citizenship, EU citizenship does confer novel isopolitical civil and social rights and their correlative duties, as well as novel sympolitical rights through the European Parliament. As I have argued above, the large majority of citizens are 'stayers'. They have to comply with one class of isopolitical duties (accepting mobile workers as equals in the labour market and welfare state) without de facto exercising the corresponding isopolitical rights. Their capacity to change this situation through sympolitical rule-making has been curtailed domestically and is still weak supranationally. I do not share Hemerijck's theory according to which EU citizenship was adopted to seal the internal market.
Historical reconstructions show that the new provisions of the Maastricht Treaty (also) reflected the social and political strategy of EU-building of leaders such as Jacques Delors. Whether by design or by failure, the fact is that rather than complementing national citizenship regimes, EU citizenship has ended up destabilising them. My proposals aim at a political rebalancing. In this perspective, I believe that a smart gradual strategy of soft dutification of EU citizenship, initially based on nudging, might have positive and virtuous political effects. Kuhn worries that such nudging would only activate those who are already in favour of the EU. So be it. My survey data show that the share of EU voters who do favour cross-national or pan-European forms of solidarity exceeds the share of cosmopolitans.12 Eurosceptics are extremely vocal, but their numbers oscillate between 15 per cent and 30 per cent, depending on the member state. Pro-EU voters are still a large majority, but this majority is silent and disoriented. Adding substance to EU citizenship and some nudging towards its dutification could provide, precisely, a focus to coalesce around the Caring Europe agenda.

Conflicts and visions on the future of Europe

Time to conclude. My realist perspective is only loosely related to values. It rests on a Weberian value relation and then emphasises the centrality of instrumental political goods, which have to do with safeguarding 'what is necessary to maintain democracy' (Bauböck) so that it can produce the final goods that free and equal citizens decide to pursue. Do I have a personal normative conception about integration? Yes, I do, and it belongs to the same liberal egalitarian cluster as the explicit or implicit conceptions espoused by most of our commentators.13 But I have chosen here to keep my reasoning at a meta-level. And at this level normative conceptions are political 'objects' which contribute to providing a collective sense of purpose that can motivate citizens to belong together. A vibrant intellectual debate on ultimate purposes is very important for institution-building and polity maintenance. EU-building is a novel experiment in the political unification of different national communities, undertaken within a (now) unfavourable historical constellation characterised by an overall de-freezing of the economic, social and cultural patterns of modernity. We perceive a pervasive and foundational change, a general "melting of all that was solid", but we seem unable to define this change in positive terms rather than merely as an ambiguous contrast to the past (post-modernism, post-nationalism, post-democracy, post-materialism, post-capitalism, etc.). Without 'pros-eutopian' (from the Greek pros, before us) visions of the future, we should not be surprised about the return of nostalgic and backward-looking 'retrotopias' (to use Zygmunt Bauman's metaphor).14 I mentioned above Schumpeter's distinction between the 'classical' and the 'other' doctrine of democracy and I have argued that they should be seen as two sides of the same coin, the latter as a 'vertical' correction to the former. I now conclude by recommending an additional correction.

12 Ferrera, M. & A. Pellegata (undated), Reconciling economic and social Europe. Report on the REScEU Survey, available at http://www.resceu.eu/events-news/news/can-economic-and-social-europe-be-reconciled-citizens'view-on-integration-and-solidarity.html.
13 Ferrera, M. (2014), 'Solidarity in Europe after the Crisis', Constellations 21 (2): 222-238.
Democratic participation and competition must be infused with values. Equal and free participation and proceduralised power struggles among elites only define the perimeter of a playing ground where substantive interests, ideas and values contend with each other. The emphasis on values (on the polytheistic fight among them) as a quintessential element of politics in the sense of Berufspolitik is a major legacy of Weber's political theory, including his often misinterpreted theory of democracy. 'Man would not have attained the possible unless time and again he had reached out for the impossible' is the famous Weberian motto concluding his speech on 'Politics as a Profession'. As social scientists (normative and empirical) we can contribute to producing visions of the impossible. But the outreachers ought to be political actors: responsible, pros-eutopian and, I would add, also Europhile politicians.
Experimental and theoretical study of friction torque from radial ball bearings

This paper presents a numerical simulation and an experimental study of the total friction torque of radial ball bearings. For this purpose, a virtual CAD model of the experimental test bench for bearing friction torque measurement was conceived. The virtual model is used for numerical simulation in the Adams software, which allows the dynamic study of multi-body systems and, in particular with the Adams Machinery facility, of the dynamic behavior of machine parts. An experimental prototype of the test bench for measuring the friction torque of radial ball bearings was manufactured. In order to measure the friction torque of the tested bearings, an equal-resistance elastic beam element with a strain gauge transducer is used to measure bending deformations. The shaft of the bench's actuation electric motor is mounted on two bearings, and the motor housing is fixed to the free side of the elastic beam, which is bent by a force proportional to the total friction torque. The beam elastic element with strain gauge transducer is calibrated in order to measure the force that occurs. Experimental determination of the friction torque is made for several progressively increasing radial loads, and the correlation between the friction torque and the bearing radial load is established. The bench allows testing of several types and sizes of radial bearings, in order to establish bearing durability and total friction torque.

Introduction

The literature presents a large number of studies concerning the friction, lubrication and wear of materials. Test procedures for experimentally measuring the friction torque in rolling bearings are presented in [1]; for this purpose a modified four-ball machine is used to test rolling bearings, monitoring the friction torque and operating temperature. Other research [2] examines the frictional power loss of a needle roller bearing lubricated with grease; the results reveal that the test bearing has higher friction compared with conventionally lubricated bearings. Low-cost systems for force and torque measurement in wheel bearings are presented in [3]. A study of the friction of a ball screw is presented in [4], where a theoretical model for the friction between the balls of the screw is developed; the study is useful because it provides theoretical support for reasonably reducing ball-screw friction. Studies based on lubrication theory are presented in [5], where wear behavior is studied using a four-ball wear tester; tests are made with a low-viscosity additivated mineral oil, two types of balls are used (steel and ceramic), and the results show greater resistance of the ceramic balls. The effects of roughness upon the friction of plastics intended for bearing applications are studied in [6]; the results show that no optimal roughness for minimum friction exists for polymers and that friction depends on the bulk properties of the polymer. Aspects concerning automotive tribology are presented in [7], with an overview of lubrication aspects for a typical power train, including engine, transmission and driveline, as well as the current status and future trends in automotive lubricants. Other comparative studies on the tribological behavior of lubricants are presented in [8].
Test methods for engine lubricants are specified in ASTM (American Society for Testing and Materials) standards, and a large number of patents concerning bearing friction are also available [9].

Theoretical considerations
The resistant torque that appears in rolling bearings is produced by a combination of friction mechanisms. The complexity of the rolling friction phenomena is generated by the large number of factors that act simultaneously. The most important sources of bearing friction are: friction generated by the contact deformations, rolling friction on the contact surfaces, friction produced by the lubricant, sliding of the bearing elements, and friction from the seals. For usual calculations the friction torque can be estimated, with sufficient precision, using relations obtained from experimental results. Equation (1) establishes the total friction torque M_tc:

M_tc = M_l + M_f (1)

where M_l is the resistant torque produced by the fluid friction of the bearing elements in contact with the lubricant, and M_f is the resistant torque produced by the bearing load. Equation (1) is used for bearings that operate at moderate speeds and loads. For ball bearings used at high speed, when the friction produced by the spin and gyroscopic motions is important, the friction torque produced by these motions should also be considered. The resistant torque M_f produced by the bearing load is computed with equation (2):

M_f = f_1 F d_m (2)

where F [N] is the bearing radial load and d_m [m] is the bearing mean diameter. For radial ball bearings the factor f_1 is established with the relation:

f_1 = z (P_0 / C_0)^y (3)

where P_0 [N] is the static equivalent load and C_0 is the basic static load rating; C_0 = 8000 N for a radial ball bearing of type 6204. In order to compute the resistant torque M_l produced by the fluid friction of the bearing elements in contact with the lubricant, equation (4) or (5) is used:

M_l = 10^-7 f_0 (ν n)^(2/3) d_m^3, for ν n ≥ 2000 (4)

M_l = 160 · 10^-7 f_0 d_m^3, for ν n < 2000 (5)

where f_0 is a factor depending on the bearing type and lubrication method, ν is the kinematic viscosity of the lubricant and n is the rotational speed.

Experimental test bench description
For the experimental measurement of the bearing torque, the test bench shown in figure 1 is used. The tested bearing (2) is mounted on the shaft (1) and has an oscillating exterior housing, as seen in figure 1. The bench is driven by an electric motor with a nominal speed of 1450 rpm. The components of the test bench are: (1) shaft; (2) tested bearing, mounted in an oscillating housing; (3) shaft bearing supports; (4) lever used to produce the radial load; (5) shaft bearings; (6) elastic coupling; (7) beam with strain gauge transducers; (8) electric motor. The outer ring of the tested ball bearing is mounted in an oscillating cylindrical bush. The radial load on the bearing is created with the lever (4), which is articulated at point O (figure 1); additional weights G are added at point A (see figure 4) in order to create different radial loads on the bearing. The electric motor (8) rotates the shaft (1), and the shafts of the tested bearing and of the electric motor are connected by an elastic coupling (6). The stator of the electric motor is fixed with a tie rod to the free end of the elastic beam (7), so the beam is subjected to deformations proportional to the motor torque. In order to measure the beam bending force produced by the motor torque, strain gauge transducers (10) are used, as seen in figure 2. Through experimental calibration, the dependency between the elastic deformation and the bending force, and hence the motor torque to be determined, is established. The bending stress that appears in the beam is computed with equation (6):

σ = M_b / W_z (6)

where M_b is the bending moment in the beam and W_z is the section modulus of its cross-section.
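To make equations (1)–(5) concrete, the short sketch below evaluates the total friction torque for a 6204 bearing over progressive radial loads. It is only an illustrative sketch: the coefficients z, y and f_0 and the lubricant viscosity are assumed, typical handbook values for deep groove ball bearings rather than values reported in this paper, and the classical convention with d_m in mm (giving torques in N·mm) is used.

```python
# Illustrative evaluation of M_tc = M_l + M_f (eqs. 1-5) for a 6204
# deep groove ball bearing. z, y, f0 and the viscosity nu are assumed
# handbook-style values, not parameters reported in the paper.

def load_torque(F, dm, z=0.0009, y=0.55, C0=8000.0):
    """Load-dependent torque M_f = f1 * F * dm (eq. 2), with
    f1 = z * (P0 / C0)**y (eq. 3); P0 ~ F for a purely radial load.
    dm in mm gives the torque in N*mm."""
    f1 = z * (F / C0) ** y
    return f1 * F * dm

def lubricant_torque(dm, f0=2.0, nu=32.0, n=1450.0):
    """Viscous torque (eqs. 4/5): 1e-7 * f0 * (nu*n)**(2/3) * dm**3 when
    nu*n >= 2000, else 160e-7 * f0 * dm**3 (dm in mm, result in N*mm)."""
    vn = nu * n
    if vn >= 2000.0:
        return 1e-7 * f0 * vn ** (2.0 / 3.0) * dm ** 3
    return 160e-7 * f0 * dm ** 3

dm = (20.0 + 47.0) / 2.0  # 6204: bore 20 mm, OD 47 mm -> dm = 33.5 mm
for F in (291.0, 582.0, 873.0, 1164.0):  # progressive radial loads [N]
    Mtc = load_torque(F, dm) + lubricant_torque(dm)
    print(f"F = {F:6.0f} N -> M_tc ~ {Mtc:8.2f} N*mm")
```

With these assumed coefficients the printed values are only indicative of the load dependence predicted by the relations; the paper's own theoretical curve in figure 7 is based on the authors' parameter choices.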
For the experimental measurement of the deformation, the MGCplus acquisition system from Hottinger Baldwin Messtechnik is used. In order to evaluate the bending force, the beam transducer is calibrated; the calibration diagram of the strain gauge transducer is presented in figure 3. During the measurements, the beam deformation stabilized at 12.5 μm/m, to which corresponds a bending force of 11.06 N. Considering the distance between the electric motor shaft and the longitudinal centre of the beam to be 71.3 mm, this results in a measured friction torque of 788.57 Nmm. The radial force used to load the bearing is created by adding metal discs weighing 1.8 kg each to the lever (4), as seen in figure 4. The recorded beam deformations obtained by adding three supplementary loads are presented in figure 6. The results corresponding to this radial load regime show an increase of the total friction torque. The theoretical dependence of the bearing friction torque on the radial load is presented in figure 7, together with the experimentally obtained dependence. A linear dependence is observed, the friction torque increasing with the bearing radial load. Similar linear dependencies of the friction torque on the radial load have been obtained by bearing manufacturers in their tests [10].

Numerical simulation in ADAMS of the bearing test bench
The ADAMS software, with the Machinery plug-in, offers the possibility to define various machine elements. To build the dynamic model of the bearing test bench in ADAMS, the materials of the kinematic elements were specified and the revolute joints were defined as ball bearings. The ball bearing construction in Adams Machinery is specified as shown in figure 8. The motion of the electric motor is defined as 157 rad/s. To obtain accurate results, the shaft (1) of the test bench is modelled as a deformable body. Based on the Adams Machinery feature, the predicted life of the tested bearing is 526 hours, as presented in figure 9 (bearing life report using the Adams Machinery feature). Figure 10 presents the translational displacement of the bearing centre marker along the X axis (axial), and figure 11 the computed translational displacement of the bearing centre marker along the Y axis (radial). The axial displacement has small values (reaching 0.0015 mm), while the radial displacement is larger, reaching 0.022 mm. The translational deformations of the centre marker of the shaft (1) computed in Adams are shown in figure 12.

Conclusion
This paper presents the design of a test bench for bearing friction torque measurement. The ball bearing subjected to the tests in this study is of type SKF 6204. Theory and experimental tests yield similar linear dependencies between the friction torque and the bearing radial load; a graphical comparison of the theoretical and experimental results is presented in figure 7. Theoretically, a total friction torque of 1647.96 Nmm is computed for a radial load of 291 N, and the torque increases to 5018.17 Nmm for a bearing radial load of 1164 N. Experimentally, at a 291 N bearing radial load a total torque of 1383.22 Nmm is measured, and for the maximum radial load of 1164 N the value of the friction torque reaches 5532.88 Nmm. The numerical simulation of the test bench is performed with the Adams software, considering the shaft mounted on bearings as a flexible body. Adams Machinery reports a bearing life of 526.26 hours, as presented in figure 9.
The translational displacements of the bearing centre marker are computed in Adams and presented in the paper. Because of the radial load, a translational displacement of 0.022 mm is computed for the marker attached to the bearing centre. The shaft deformations shown in figure 12 have a maximum amplitude of 0.13 mm. The proposed test bench can be used to test different types of radial bearings with different lubricants.
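As a sanity check of the measurement chain described above (beam strain → bending force → friction torque), the short sketch below reproduces the quoted conversion. The calibration constant is derived from the reported pair of 11.06 N at 12.5 μm/m, and linearity of the calibrated transducer over the working range is assumed.

```python
# Sketch of the bench's strain-gauge measurement chain. K_CAL comes from
# the calibration figures quoted in the text (11.06 N at 12.5 um/m);
# the 71.3 mm lever arm is the reported distance between the motor shaft
# axis and the longitudinal centre of the beam.

K_CAL = 11.06 / 12.5   # N per (um/m), from the calibration diagram
LEVER_ARM = 71.3       # mm, motor shaft axis to beam longitudinal centre

def friction_torque(strain_um_per_m):
    """Convert a measured beam strain into the friction torque [N*mm]."""
    force = K_CAL * strain_um_per_m   # bending force on the beam [N]
    return force * LEVER_ARM          # motor reaction torque [N*mm]

print(friction_torque(12.5))  # -> 788.578, matching the ~788.57 Nmm quoted
```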
2019-02-17T14:16:10.585Z
2017-08-25T00:00:00.000
{ "year": 2017, "sha1": "9c70a481eff5676c2215e6bcfdcd0fb6557951ec", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/252/1/012048/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "5ae5a4ffaf644f63a61fe42e8f65f85caaa61a5f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Engineering", "Physics" ] }
27799581
pes2o/s2orc
v3-fos-license
Effects of typified propolis on mutans streptococci and lactobacilli: a randomized clinical trial

Objective: The aim of this study was to determine, in a randomized, double-blind, placebo-controlled clinical trial, the effects of typified propolis and chlorhexidine rinses on salivary levels of mutans streptococci (MS) and lactobacilli (LACT). Methods: One hundred patients were screened for salivary levels of MS >100,000 CFUs/mL of saliva. All patients presented with at least one cavitated decayed surface. Sixty patients met entry criteria. Subjects were adults 18-55 years old. After restoration of cavitated lesions, patients were randomized to 3 experimental groups: 1) PROP – alcohol-free 2% typified propolis rinse (n = 20); 2) CHX – 0.12% chlorhexidine rinse; 3) PL – placebo mouthrinse. Patients rinsed, unsupervised, with 15 mL of the respective rinses twice a day for 1 min for 28 days. Patients were assessed for the salivary levels of MS (Dentocult SM) and LACT (Dentocult LB) at baseline and at the 7-day, 14-day and 28-day visits (experimental effects) and at the 45-day visit (residual effects). General linear models were employed to analyze the data. Results: PROP was superior to CHX at the 14-day and 28-day visits in suppressing the salivary levels of MS (p < .05). PROP was superior to PL at all visits (p < .01). The residual effects of PROP in suppressing the salivary levels of MS could still be observed at the 45-day visit, where significant differences between PROP and CHX (p < .05) were demonstrated. PROP was significantly superior to CHX in suppressing the levels of salivary LACT at the 28-day visit (p < .05). Conclusion: Typified propolis rinse was effective in suppressing cariogenic infections in caries-active patients when compared to existing and placebo therapies.

Camillo ANAUATE NETTO1; Maria Cristina MARCUCCI2; Niraldo PAULINO2; Andrea ANIDO-ANIDO1; Ricardo AMORE1; Sergio de MENDONÇA3; Laurindo BORELLI NETO1; Walter Antonio BRETZ4
1 Biomaterials Research Group – School of Dentistry – UNIBAN Bandeirante Anhanguera University – São Paulo – SP – Brazil. 2 Professional Masters Program in Pharmacy – School of Pharmacy – UNIBAN Bandeirante Anhanguera University – São Paulo – SP – Brazil. 3 Microbiology Research Group – Professional Masters Program in Pharmacy – UNIBAN Bandeirante Anhanguera University – São Paulo – SP – Brazil. 4 Department of Cariology & Comprehensive Care – College of Dentistry – New York University – New York – NY – USA.

Introduction
Fluorides and chlorhexidine are arguably the most common agents utilized for the prevention of oral diseases. These chemical agents have been available for use by the general population, and chlorhexidine in particular has been used to promote gingival health for over 45 years [1]. The effectiveness of chlorhexidine rinses in fighting gingivitis has extensive documentation, as its efficacy is evident from reports using the methodology of meta-analysis [2]. The evidence on the use of chlorhexidine mouth rinses in the prevention of dental caries, however, is contradictory. Clinical evidence on the application of chlorhexidine gels and varnishes for the prevention of dental caries is also inconclusive [3].
Propolis is a resinous material collected by honeybees from different plant exudates, which is used to seal beehives. At least 200 compounds have been identified in propolis samples of different botanical and geographic origins. Typified propolis has standardized constituents, such as prenylated phenolic acids derived from p-coumaric acid [4]. The literature on propolis use in dentistry is extensive. There are numerous laboratory and clinical reports on propolis that include: suppression and inhibition of cariogenic [5] and periodontal organisms [6], prevention of respiratory infections [7] and gingival inflammation [8], inhibitory activity against endodontic pathogens [9], and therapeutic action on oral ulcers [10]. These reports, however, lack evidence of propolis effectiveness because adequately designed randomized controlled trials have yet to be conducted. Studies comparing propolis with chlorhexidine solutions have been limited to in vitro studies. These studies have suggested that propolis solutions were equivalent to chlorhexidine solutions in inhibiting the mutans streptococci [11]. The primary aim of this investigation was to determine, in a randomized, double-blind, placebo-controlled clinical trial, the experimental and residual effects of typified propolis and chlorhexidine rinses on salivary levels of the mutans streptococci and lactobacilli.

Materials and Methods
Inclusion/Exclusion Criteria
One hundred-fifty patients were screened from a patient pool attending the Dental Clinics at Bandeirante Anhanguera University (UNIBAN), São Paulo, Brazil. After signing an informed consent form approved by the Institutional Review Board (UNIBAN Protocol N. 0038/2007), patients were submitted to eligibility criteria. The main entry criteria were salivary levels of the mutans streptococci >100,000 CFUs/mL of saliva and the presence of at least one cavitated decayed surface. Additional entry criteria included: the presence of at least 20 teeth, no clinical signs of periodontal disease, an age range of 18 to 55 years, not being a current smoker, a normal saliva secretion rate, not being pregnant, and not making use of any oral topical or systemic medication.

Subject Population/Demographics
This was a randomized, double-blind, placebo-controlled clinical trial. Sixty patients met entry criteria. These participants were 18-55 years old, of both genders and in good general health. Table 1 depicts the demographic and clinical characteristics of the study participants. Study groups were well balanced at baseline for demographic variables and for the number of decayed and restored teeth.

Treatment Products and Protocol
After restoration of all cavitated lesions, patients were randomized to 3 experimental groups: 1) alcohol-free, 2% typified propolis mouth rinse (n = 20); the 2% propolis rinse was manufactured at the laboratories of the Department of Pharmacology at Federal University of Santa Catarina, Florianópolis, Santa Catarina, Brazil.
The formulation included 2% typified propolis, mint flavor, polyoxyethylene ethers, sorbitol, blue coloring and water; 2) a commercially available 0.12% chlorhexidine mouth rinse; 3) a placebo mouth rinse that matched the propolis mouth rinse without the active ingredient. Patients rinsed with 15 mL of the experimental rinses twice a day for 1 min for 28 days. Rinsing was performed in the morning and before bedtime, after ordinary oral hygiene procedures. Patients were assessed for the salivary levels of mutans streptococci (Dentocult SM, Orion Diagnostica, Espoo, Finland) and lactobacilli (Dentocult LB, Orion Diagnostica, Espoo, Finland) at baseline and at the 7-day, 14-day and 28-day visits (treatment effects) and at the 45-day visit (residual effects). All adverse reactions were documented, and patient accountability/continuance criteria were recorded at all visits.

Allocation Concealment
For allocation to groups, a computer-generated list of random numbers was used. Rinses were prepared in dark bottles, which were consecutively numbered according to the randomization schedule. Participants were randomized to one of the three color-matched test rinses. The study coordinator, examiners and participants were unaware of group allocation. The group identity was generated and kept in Florianópolis, SC, Brazil, while the study was conducted in São Paulo, SP, Brazil.

Mutans Streptococci Assay
The Dentocult SM test was employed to determine the salivary levels of the mutans streptococci. Two thirds of a treated plastic strip was inserted into the mouth and rotated on the surface of the tongue about 10 times. The strip was then placed into a culture vial containing a well-mixed bacitracin solution and processed according to the manufacturer's instructions. Interpretation of test scores using a density chart was as follows: 0-1: <100,000 CFU/mL of saliva; 2: >100,000 to <1,000,000 CFU/mL; 3: >1,000,000 CFU/mL.

Lactobacilli Salivary Levels
The Dentocult LB assay was employed to estimate the salivary levels of lactobacilli. Saliva collected after stimulation was poured over the agar surfaces, ensuring that they were well moistened. Excess saliva was allowed to drain from the slide. The slide was screwed tightly back into the tube and placed in an upright position in an incubator (36 ± 2 °C) for four (4) days. The salivary levels of the lactobacilli were estimated as follows: 0 – non-detectable; 2 – 1,000 CFU/mL of saliva; 3 – 10,000 CFU/mL; 4 – 100,000 CFU/mL; 5 – 1,000,000 CFU/mL.

Product Satisfaction Questionnaire
Participants were asked to rank the mouth rinses according to taste, breath improvement, nausea symptoms, perception of oral cleanliness, ease of use, and olfactory perception. Participants ranked each item with scores ranging from 1 (excellent) to 5 (poor), yielding an overall score based on the range of acceptance for a particular mouth rinse.

Statistical Analysis
Univariate models were employed to analyze the data on treatment effects between study groups for the salivary levels of cariogenic bacteria. Analysis of covariance was performed to compare treatment effects for all groups between baseline and 28 days and between baseline and 45 days for the salivary levels of the mutans streptococci and lactobacilli, adjusted for age and gender. Chi-square tests were employed to analyze frequency distributions of demographic parameters. ANOVA was employed to estimate differences among study groups at baseline for age and for the number of decayed and restored teeth. The SAS® software was employed for all analyses.
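For readers who want to mirror the structure of this analysis, a minimal sketch of the covariance model described above is shown below. It uses Python's statsmodels purely for illustration (the authors report using SAS), and all column names and the data file are hypothetical placeholders, not artifacts of the trial.

```python
# Minimal sketch of the reported ANCOVA, run per follow-up visit:
# follow-up score modeled on baseline score, group, age and gender.
# 'trial_scores.csv' and the column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_scores.csv")  # one row per participant

# The C(group) terms estimate treatment effects (PROP vs CHX vs PL)
# adjusted for the baseline score and the age/gender covariates.
model = smf.ols("ms_28d ~ ms_base + C(group) + age + C(gender)", data=df).fit()
print(model.summary())
```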
Results
Adverse reactions were reported for the chlorhexidine and placebo groups at high frequencies with regard to flavor, burning sensations and alterations of taste. Patient satisfaction and acceptability were highest (rated excellent) for the propolis mouth rinse (74%), followed by the chlorhexidine (68%) and placebo (45%) mouth rinses, respectively. Analysis of covariance revealed significant treatment effects from baseline to 28 and 45 days for both the propolis (p < 0.05) and chlorhexidine (p < 0.05) groups for the salivary levels of mutans streptococci. The same findings were not observed for the salivary levels of lactobacilli. The propolis mouth rinse was superior to the chlorhexidine and placebo rinses at the 7-day, 14-day and 28-day visits (treatment effects) in suppressing the salivary levels of the mutans streptococci (Table 2). Chlorhexidine was superior to placebo at the 7-day and 14-day visits. The propolis mouth rinse was superior to the placebo rinse at all visits (treatment period) in suppressing the salivary levels of the mutans streptococci. The residual effects of the propolis mouth rinse in suppressing the salivary levels of mutans streptococci could still be observed after 17 days of product discontinuation, where significant differences between the propolis rinse and the chlorhexidine and placebo rinses were demonstrated. Very little information is available on the efficacy and superiority of antimicrobials in suppressing salivary levels of lactobacilli. The data presented in Table 3 show that the propolis mouth rinse was significantly different from the chlorhexidine mouth rinse in suppressing the levels of salivary lactobacilli at the 28-day visit.

Discussion
Upon a search of the literature, it is apparent that this is the first randomized, double-blind, placebo-controlled trial on the effects of propolis on cariogenic bacteria. Although there are several in vitro studies confirming the inhibitory activity of propolis against the mutans streptococci, and in vivo studies attesting to the efficacy of chlorhexidine in suppressing the mutans streptococci, our study design does not permit comparisons with the existing literature, as no data are available from studies with designs and product evaluations similar to our protocol. Despite the high number of initial decayed and restored teeth in our study population (Table 1), the propolis and chlorhexidine rinses were effective in suppressing the salivary levels of the mutans streptococci from baseline up to 45 days after a 4-week, twice-a-day daily use. Similar results were not found for the placebo group. These results need to be put in perspective, as a high number of restorations allows rapid re-colonization by the mutans streptococci [12,13]; therefore, had our study been of longer duration, we are unsure whether the results presented here would have extended over a longer period of time. Group analysis at the various visit points revealed superior suppression of the mutans streptococci for the propolis rinse when compared to the placebo and chlorhexidine rinses at days 7, 14 and 28. The chlorhexidine rinse was superior to placebo at the day-7 and day-14 visits but not at the day-28 visit (Table 2). The residual effects of the rinsing protocols clearly show that the propolis rinse could sustain suppression of the mutans streptococci after 17 days of rinse discontinuation. Notably, we would have expected the chlorhexidine rinse to exert similar effects because of chlorhexidine's substantivity.
We are unaware of any clinical studies on the effects of propolis rinses on salivary levels of the lactobacilli. Our study has demonstrated that after 4 weeks of propolis rinse use, a significant suppression of the salivary levels of lactobacilli was evident when compared to the chlorhexidine and placebo rinses (Table 3). This is an added benefit of the propolis rinse, as suppression of lactobacilli is hard to attain, as recently shown in comparative studies employing chlorhexidine rinses [14,15]. Limitations of this study include the non-determination of the power of our sample size prior to the commencement of the study. Although our study groups were well balanced at baseline for various parameters (Table 1) and we were able to demonstrate the superiority of propolis rinses, no sample size calculations were performed during the design of this protocol. Lastly, our questionnaire survey showed higher acceptance of the propolis rinse for various factors when compared to the chlorhexidine and placebo rinses. One recent study evaluated the compliance and acceptability of a 5% propolis rinse [16]; although most subjects reported the unpleasant taste of the rinse, they said they were satisfied with it and would recommend its use to others. Only 24% of individuals reported difficulties in following the study protocols.

Conclusions
Typified propolis rinses may be of value in suppressing cariogenic infections in caries-active patients when compared to existing and placebo therapies.

Table 1 - Demographics and clinical parameters of study participants at entry
Table 2 - Effects of rinses on mutans streptococci salivary levels
Table 3 - Effects of rinses on lactobacilli salivary levels
a, b - Numbers with the same superscripts are significantly different by Tukey's pairwise comparisons.
2017-04-25T23:34:04.323Z
2013-04-01T00:00:00.000
{ "year": 2013, "sha1": "191f9899cd6af3fbffb7061daa30618f0a6a82ea", "oa_license": "CCBY", "oa_url": "https://bds.ict.unesp.br/index.php/cob/article/download/879/802", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "191f9899cd6af3fbffb7061daa30618f0a6a82ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244728877
pes2o/s2orc
v3-fos-license
Self-Healing Mechanism and Conductivity of the Hydrogel Flexible Sensors: A Review

Sensors are devices that capture changes in environmental parameters and convert them into electrical output signals; they are widely used in all aspects of life. Flexible sensors, i.e., sensors made of flexible materials, not only overcome environmental limitations on detection devices but also expand the application of sensors in human health and biomedicine. Conductivity and flexibility are the most important parameters for flexible sensors, and hydrogels are currently considered an ideal matrix material due to their excellent flexibility and biocompatibility. In particular, compared with flexible sensors based on high-modulus elastomers, hydrogel sensors have better stretchability and can be tightly attached to the surface of objects. However, for hydrogel sensors, a poor mechanical lifetime is always an issue. To address this challenge, self-healing hydrogels have been proposed. Currently, a large number of studies on the self-healing property have been performed and numerous exciting results have been obtained, but there are few detailed reviews focusing on the self-healing mechanism and conductivity of hydrogel flexible sensors. This paper presents an overview of self-healing hydrogel flexible sensors, focusing on their self-healing mechanism and conductivity. Moreover, the advantages and disadvantages of different types of sensors are summarized and discussed. Finally, the key issues and challenges for self-healing flexible sensors are identified and discussed, along with recommendations for the future.

Flexible elastomer sensors usually use flexible metals [43,44], polymer films [45,46] and polymer elastomers [47,48] as substrates, which are then combined with graphene [49,50], carbon [51,52], carbon nanotubes (CNTs) [53,54] and metal nanowires [55,56] as the conductive components. This review aims to provide a comprehensive account of the latest progress in self-healing flexible hydrogel sensors. First, we summarize the mechanisms of self-healing flexible materials and their latest developments in flexible sensor applications. Second, the conductive categories of self-healing hydrogel flexible sensors are reviewed. The study ends with a brief conclusion and perspective on this rapidly developing and promising field of flexible sensors.

Self-Healing Mechanism of Hydrogel
In recent years, breakthroughs have been made in research on hydrogels, but most hydrogels still have poor mechanical strength and are susceptible to damage (accidental fracture, etc.), leading to microscopic or macroscopic cracks [79,80]. As these cracks extend, the structure of the hydrogel network is destroyed, its mechanical properties are significantly reduced and its original function is lost, resulting in a waste of resources. To reduce environmental pollution and save resources, it is necessary to study self-healing materials that can prolong life cycles via the autonomous repair of damage [81]. The self-healing ability allows the hydrogel to recover from the damage it has sustained, thus maintaining its main properties and functions and extending the service lifetime of the material [82-84]. The self-healing properties of polymeric materials can be divided into extrinsic and intrinsic self-healing, depending on whether the self-healing component is inserted into the polymer or is an original component of the polymer matrix.
Extrinsic self-healing materials heal by encapsulating components that enable healing, such as monomers, which are dispersed in the matrix material in the form of capsules; the components inside the capsules are released upon damage. This method has difficulty achieving repeated self-healing. In the second category, intrinsic self-healing materials, healing is achieved through noncovalent interactions or reversible dynamic covalent bonds in the polymeric material. When a hydrogel is subjected to external forces, the covalent or noncovalent bonds in the gel break, forming a fracture surface. By re-contacting the fracture surfaces, the polymer chain segments interpenetrate and re-establish the dynamic cross-linking sites in the damaged area, repairing the network structure of the hydrogel and restoring its original mechanical properties and function to a certain extent. Table 1 compares in detail the performance (such as self-healing time, efficiency and mechanical property recovery) of different hydrogels.

Abbreviations: κ-carrageenan (κ-CG); stearyl methacrylate (C18); docosyl acrylate (C22); montmorillonite (MMT); xanthan gum (XG); ammonium sulfate (AS); functionalized boron nitride nanosheets (f-BNNS); acetonitrile-based supramolecular gel (G-Zn-tpy); oxidized alginate (ADA); acrylamide-modified chitosan (AMCS7); oxidized sodium alginate (OSA); α-lipoic acid (LA).
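Since Table 1 reports healing efficiencies throughout, a one-line definition helps: efficiency is conventionally the ratio of a property measured after healing to its pre-damage value. The sketch below encodes that convention; the sample numbers are illustrative (they match the weight-bearing example discussed later under host-guest interactions), not entries from Table 1.

```python
# Healing efficiency as used in this review: a healed-to-original property
# ratio in percent. The property can be tensile strength, bearable weight,
# conductivity, etc. The sample values below are illustrative only.

def healing_efficiency(healed_value: float, original_value: float) -> float:
    """Return the healing efficiency in percent."""
    return 100.0 * healed_value / original_value

# A specimen bearing 200 g before cutting and 55 g after self-healing:
print(f"{healing_efficiency(55.0, 200.0):.0f}%")  # -> 28%
```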
Noncovalent Interactions
Noncovalently cross-linked hydrogels have been developed by assembling self-healing hydrogels through various mechanisms, including hydrophobic interactions, hydrogen bonding, host-guest interactions and metal coordination, leading to dynamic and reversible networks. When the hydrogels are subjected to an external force, the noncovalent interactions in the network dissociate and associate, and the hydrogel exhibits hysteresis in the process of deformation and recovery, dispersing the energy [98]. Thus, these hydrogels exhibit reproducible features and a fascinating self-healing ability. However, such hydrogels are stimuli-responsive and mechanically less robust.

Hydrophobic Associations
Hydrophobically associated hydrogels are physically cross-linked hydrogels formed by hydrophobic interactions [85]. Their preparation generally adopts the micellar polymerization method [99], in which a hydrophobic segment is introduced into the hydrophilic polymer segment for copolymerization, and the hydrophobic segment serves as the dynamic cross-linking point of the hydrogel. When the hydrogel is stretched, these physically cross-linked points can dynamically dissociate/associate to reorganize the polymer chains, distributing the applied stress uniformly over the entire network. Meanwhile, the physically cross-linked points dissipate energy through a large hysteresis [86]. Micellar polymerization requires hydrophilic segments, hydrophobic segments and surfactants [87,88]. Tuncaboylu et al. [100] reported a self-healing hydrogel based on hydrophobic interactions. Using stearyl methacrylate as the hydrophobic monomer and n-alkyl (meth)acrylate as the physical cross-linking agent, the hydrogel was prepared by copolymerization in wormlike sodium dodecyl sulfate (SDS)/NaCl aqueous solutions. The effects of the alkyl side-chain length of the hydrophobe and of the surfactant concentration on the properties of the self-healing gel were also discussed. To enhance the mechanical properties of hydrogels, self-healing hydrogels are usually designed by combining hydrophobic association with other physical interactions [101]. A composite hydrogel was prepared by incorporating grape seed-extracted polymer (GSP) into an acrylamide/stearyl methacrylate matrix [102]. As the side chains of GSP contain carboxyl, amino, hydroxyl and alkyl groups, these groups tend to form dynamic noncovalent bonds (hydrogen bonds, ionic interactions and hydrophobic associations) in the hydrogel, which can dissipate energy efficiently. The hydrophobic associations in the system can re-form after being broken, which gives the hydrogel excellent mechanical and self-healing properties, as shown in Figure 2a. Yang et al. [103] proposed a polyacrylamide (PAM)/cellulose nanofiber (CNF)/multiwalled carbon nanotube (MWCNT) hydrogel made by in situ polymerization. The CNF dispersants uniformly disperse the MWCNTs in the hydrogel and strengthen its mechanical properties through hydrophobic interactions and electrostatic repulsion. The prepared hydrogel had conductivity, an electromagnetic shielding function and self-healing properties; it could be bent 1000 times without breaking after self-healing, and it could completely self-heal in approximately 7 days with a healing efficiency of 77.2%.
CNFs dispersants uniformly disperse the MWCNTs in the hydrogel and strengthen the mechanical properties of the hydrogel by hydrophobic interactions and electrostatic repulsion. The prepared hydrogel had conductivity, an electromagnetic shielding function and self-healing properties. The hydrogel could be bent 1000 times without breaking after self-healing. The hydrogel could completely self-heal in approximately 7 days, with a healing efficiency of 77.2%. Yang et al. [103] proposed a polyacrylamide (PAM)/cellulose nanofiber (CNF)/multiwalled carbon nanotube (MWCNT) hydrogel by in situ polymerization. CNFs dispersants uniformly disperse the MWCNTs in the hydrogel and strengthen the mechanical properties of the hydrogel by hydrophobic interactions and electrostatic repulsion. The prepared hydrogel had conductivity, an electromagnetic shielding function and self-healing properties. The hydrogel could be bent 1000 times without breaking after self-healing. The hydrogel could completely self-heal in approximately 7 days, with a healing efficiency of 77.2%. Hydrogen Bond Hydrogen bonding, as a type of physical interaction, is formed by the short-range supramolecular interaction between an electron-deficient hydrogen atom and an electronrich species [89,104]. The hydrogen bond can be broken by heating. It can also be re-Gels 2021, 7, 216 6 of 33 generated at a certain temperature. This reversible effect enables the material to achieve self-healing effects [90]. Due to the inherent weakness of hydrogen bonding, hydrogen bonding can be susceptible to competition with the surrounding water molecules, potentially weakening the mechanical properties of hydrogels. Improvement in the mechanical properties of hydrogels can be achieved by designing the network structure of the hydrogel, such as a double-network hydrogel. A doublenetwork hydrogel with poly (acrylamide-co-acrylic acid) (PAM-co-PAA) as the first network and polyvinyl alcohol (PVA) as the second network was prepared [105]. The first network was formed by free radical copolymerization, and the second network was created by freezing and thawing a large number of hydrogen bonds as cross-linking points. The mechanical properties and self-healing properties of the hydrogel were improved by these hydrogen bonds as shown in Figure 2b. The hydrogen bonds can also be derived from the interaction of C=O and N-H, in addition to the hydroxyl groups. In addition, hydrogen bonding is often combined with other cross-linking interactions to produce self-healing hydrogels with excellent mechanical strength. The self-healing hydrogel was also prepared by carboxymethyl cellulose (CMC) in a paste with water and acidified with a citric acid solution [106]. The self-healing effect was the best when the hydrogel was soaked in citric acid at a concentration of 8 mol/L. When the hydrogel was cut in half and re-contacted, the uncross-linked CMC built new hydrogen bonds with hydrogen ions, thereby restoring the damaged area of the hydrogel. The self-healing efficiency reached 81%, and the compressive strength reached 2.3 MPa. Hydrogen bonds can also work with other chemical bonds to improve the mechanical properties of the hydrogel. Wang et al. [107] added acrylic acid and methylene bisacrylamide to a mixed solution of cellulose and PVA, and a double-network hydrogel was obtained by UV-induced polymerization. The cutting hydrogel contacted for 16 h, and the cracks disappeared completely and could be bent, at room temperature. 
This double-network structure improved the mechanical properties of the hydrogel. In addition, the self-healing properties of the hydrogel were improved by the formation of hydrogen bonds and metal coordination bonds. Furthermore, the introduction of 2-ureido-4[1H]-pyrimidinone (UPy) in the preparation of hydrogels has enabled excellent self-healing properties. UPy has been widely used as a multiple-hydrogen-bonding motif in supramolecular chemistry due to its higher intermolecular bonding strength compared with single hydrogen bonds. For example, the UPy group was used as a cross-linking point in a PANI/PSS network to form a self-healing conductive hydrogel [108]. The hydrogel completely self-heals within 30 s after damage thanks to the multiple hydrogen bonds generated by UPy. Furthermore, the combined effect of multiple hydrogen bonds and metal-ligand coordination not only enables rapid self-healing but also improves the mechanical properties of the hydrogel (the tensile strength of the self-healed hydrogel reached 7.9 MPa). This hydrogel also has excellent self-healing properties: the damaged hydrogels can recover 91% of their initial properties within 1 h [109].

Host-Guest Interaction
The host-guest interaction is a type of noncovalent interaction formed by the physical insertion of a guest moiety into a host moiety [91]. Generally, host molecules include cyclodextrins (CDs), pillar[n]arenes, crown ethers, calix[n]arenes, cucurbiturils and adamantane; commonly used guest molecules include ferrocene, azobenzene, cholic acid and N-vinylimidazole derivatives. Among the frequently used host molecules, CDs have lipophilic inner cavities and hydrophilic outer surfaces, enabling high-affinity interactions with specific hydrophobic guest moieties. Specifically, as the most important member of the CD family, β-cyclodextrin (β-CD) is the most widely produced; it possesses a cavity that matches the size of numerous guest molecules and can be easily crystallized, separated and purified. Furthermore, β-CD inclusion complexes can enhance the resistance of the encapsulated guest molecules to various environments, such as acidic and alkaline media, light and heat [110-112]. A self-healing hydrogel was synthesized through the host-guest interaction between the hydrophobic isopropyl group of N-isopropylacrylamide (NIPAM) and β-CD [110]; the main procedure is shown in Figure 2c. The isopropyl group in NIPAM serves as the guest component and β-CD as the host component, forming a host-guest complex. The hydrogel contains a variety of hydrogen bonds and host-guest interactions, and extensive comparative experiments have shown that the host-guest interaction is the principal factor influencing its self-healing. The hydrogel, cut into two pieces, is capable of rapid self-healing at room temperature. The self-healing ability was measured through the weight-bearing capacity before and after healing: the original hydrogel (before cutting) could bear 200 g and, after healing, 55 g, so the self-healing efficiency is approximately 28%. Adamantane, as a guest molecule, can form a stable inclusion complex with the β-CD cavity and has a high binding constant with the β-CD cavity compared with other guest molecules. Rodell et al. [113] used methacrylate to modify hyaluronic acid and further used it as the main chain of a β-cyclodextrin/adamantane (β-CD/Ad) system to prepare a double-network hydrogel with self-healing properties.
The cross-linking points of the first network were formed by host-guest interactions, and the second network was a methacrylate network. Not only were the mechanical properties greatly improved, but self-healing could also be completed almost instantly: the experimental results show that the cut hydrogel fragments heal within about 1 s. However, hydrogels synthesized by chemical processes take a long time to prepare and produce toxic byproducts unsuitable for biological applications. Therefore, a non-chemical grafting method to prepare hydrogels was proposed [114]. In this hydrogel, the amphiphilic substance N,N-dimethyl-1-adamantane (DM-AD) was used as a cross-linking agent, and CMC and poly-β-cyclodextrin (β-CDP) were used as the polysaccharide skeleton. One end of DM-AD carries the adamantyl group, which is enclosed by β-CDP through host-guest interactions; the nitrogen atom at the other end combines with protons to form a quaternary ammonium compound and is electrostatically attracted to the carboxyl anion. To verify the self-healing ability, two identical hydrogels were stained with different colors and cut through the middle, and the differently colored cut surfaces were brought into contact with each other. After some time (more than 0.5 h), the hydrogel had completely healed, with no obvious sign of fracture on the healed surface; upon stretching the ends of the hydrogel again, it did not fracture. In summary, by changing the host and guest monomers and polymers, different synthetic methods can be utilized to design and prepare host-guest complexes for different applications. Self-healing hydrogels containing reversible host-guest interactions exhibit advantages such as a repeatable healing process without any external energy input, long storage times and high healing rates. Self-healing based on host-guest interactions remains a wide field for research due to the diversity of guest molecules and their reversible nature.

Metal Coordination
Metal coordination is a supramolecular interaction introduced by incorporating metal ions and organic ligands into the matrix. Metal coordination offers a wide selection of metal ions (Fe3+, Zn2+ and Cu2+) and ligands (-COOH, -NH2 and -OH), which can respond quickly to external stimulation. Meanwhile, its coordination strength and applicability cover a large range of natural and synthetic polymers, so metal coordination interactions are widely used in the synthesis of self-healing materials. Lee et al. [115] modified CNTs with mussel adhesion protein to improve their compatibility with polymer materials, enabling the CNTs to be uniformly dispersed in solution. Fe3+ was then added to the solution to form reversible metal coordination with the carboxyl groups, which acted as the physical cross-linking points of the system. When the hydrogel is damaged, the metal coordination between the carboxyl groups and the Fe3+ ions in the affected area can be re-established, forming new physical cross-linking points and achieving fast self-healing. In fact, the healing time directly affects the performance of the sensor. Therefore, many researchers have focused on shortening the healing time of hydrogels, for example with a self-healing conductive hydrogel that introduces Zn2+ and a 2,2′:6′,2″-terpyridine (tpy) ligand into a polypyrrole (PPy) matrix with good conductivity by sol-gel conversion [92].
Its conductivity could reach an excellent 12 S m−1. The coordination of Zn2+ could reconnect the separated PPy chains, re-forming the supramolecular structure and achieving self-healing of the material after the hydrogel was broken; self-healing could completely restore the original conductivity at room temperature within 1 min. In addition to chemical cross-linking, the approach of physical cross-linking for preparing this type of hydrogel cannot be ignored. Hussain et al. [116] added Fe3+ as the cross-linking agent to a physically cross-linked network formed by hydroxyethyl cellulose and PAA. The metal-ligand interactions effectively dissipated energy and improved the mechanical properties of the self-healing hydrogel. Double metal coordination bonds have also been used to enhance the mechanical properties of hydrogels. Shao et al. [117] proposed a physically cross-linked CNF composite hydrogel prepared by a one-pot strategy. Self-healing was achieved through double metal coordination bonds (iron ions with 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO)-oxidized CNFs and with the carboxylate ions on PAA) and hydrogen bonds (between PAA and CNF molecular chains). Fe3+ and CNFs were used as the cross-linking agents to improve the mechanical properties, yielding excellent fracture strength (1.37 MPa), fracture elongation (1803%) and fast self-healing (95.7% recovery ratio within 1 h). In another study, tannic acid (TA) was coated onto the surface of cellulose nanocrystals (CNCs) electrostatically, and AA was polymerized in situ by free radicals in the TA@CNC solution [118]. Aluminum ions were then introduced to form a variety of coordination bonds, as shown in Figure 2d. A nanocellulose-reinforced hydrogel material with a dynamic cross-linking structure and excellent self-healing properties was thus prepared. The hydrogel could adhere directly to human skin and be used as a wearable electronic sensor to detect large deformations (wrist swings) and weak physiological signals (pulse beats).

Dynamic Covalent Bonds
Repeated self-healing of hydrogels is also possible through the formation of reversible dynamic covalent bonds in the hydrogel network. Because the bonding strength of dynamic covalent bonds is higher than that of noncovalent bonds, these hydrogels possess good mechanical strength. In addition, they have other attractive properties, such as pH sensitivity, redox sensitivity and temperature sensitivity. Several dynamic covalent bonds have been successfully utilized to prepare self-healing hydrogels, including Schiff base linkages, disulfide bonds, boronic/boronate ester bonds and Diels-Alder (DA) reactions. Such covalent links are formed by reversible couplings, and the hydrogels heal via the equilibrium between bond rupture and reformation.

Schiff Base Linkage
The Schiff base linkage [93,94] derives from the condensation of carbonyl groups with amines and serves as one of the driving forces for self-healing hydrogels. The Schiff base reaction (forming imine or acylhydrazone bonds) is mediated by the nucleophilic attack of the N atom of the amino group on the electrophilic carbon atom of the aldehyde/ketone; it takes place in aqueous solution under physiological conditions and generates nontoxic products, ensuring good biocompatibility for Schiff base reaction-based hydrogels. In addition, it has high chemical selectivity and a rapid reaction speed.
Once the Schiff base linkages in the network structure are disrupted, the amino or hydrazide groups on the fracture surface rapidly react with the aldehyde groups they contact and form imine or acylhydrazone bonds again, thus reconfiguring the hydrogel matrix for self-repair. It is worth noting that the Schiff base is stable only in an alkaline or neutral environment. In recent years, polysaccharides (such as chitosan, hyaluronic acid, sodium alginate, cellulose and dextran) have become ideal matrix materials for preparing self-healing hydrogels with acylhydrazone bonds. This is mainly because their backbones carry a large number of functional groups that can participate in the Schiff base reaction either directly (such as the primary amine groups of chitosan) or after being modified into aldehyde and amine groups. Among them, chitosan is a nontoxic, biodegradable and biocompatible polysaccharide that is soluble only in acidic aqueous solutions; researchers have therefore enhanced chitosan's water solubility by chemical modification or conjugation with specific ligands, making it suitable for the conditions of Schiff base reactions. Zhang et al. [119] used the abundant -NH2 groups on chitosan (CS) to condense with the benzaldehyde groups on dibenzaldehyde-terminated telechelic PEG to form imine bonds. The study found that a hydrogel could form quickly, within 60 s, after contacting CS with the telechelic PEG at 20 °C. Self-healing experiments showed that an incision in the hydrogel gradually shrank over time and could heal completely within 15 min after the hydrogel broke; the broken hydrogels self-healed through the dynamic properties of the Schiff base linkage. To obtain self-healing hydrogels with high performance, a self-healing hydrogel formed from oxidized sodium alginate (OSA) and acrylamide (AM) monomer by the Schiff base reaction was prepared [94]. Figure 3a shows the self-healing process: differently colored hydrogels were cut into two semicircular pieces; the separated semicircular hydrogels were then kept in contact for a period of time, and the fractured surfaces joined together and healed. After self-healing, the hydrogel still retained excellent mechanical and conductive properties. Yang et al. [120] reacted modified carboxyethyl cellulose with dibenzaldehyde-terminated PEG under the catalysis of 4-amino-DL-phenylalanine to form a self-healing hydrogel. The prepared hydrogel not only had good self-healing ability but also showed dual responsiveness to pH and redox agents, because the acylhydrazone bond is more sensitive to pH while the disulfide bond is more sensitive to redox agents. As mentioned above, the imine and acylhydrazone bonds in Schiff base reaction-based self-healing hydrogels can be formed under mild conditions, which not only allows facile preparation of hydrogels without any stimulation but also bestows the self-healing ability. Schiff base reaction-based hydrogels prepared through such modification strategies and methods will therefore promote the development and effective application of hydrogels.

Disulfide Bond
The disulfide bond [121] is a dynamic covalent bond based on the thiol/disulfide dynamic exchange reaction, which is sensitive to many factors such as acid, alkali and ultraviolet light. Li et al. [122] proposed a photosensitive cellulose-based self-healing hydrogel by embedding thiuram disulfide bonds into the hydrogel via polyaddition.
The hydrogel could achieve rapid self-healing within 2 min, with the cracks disappearing completely under visible-light irradiation (Figure 3b). The reason is that the dithiocarbamate ester bonds in the CNC-containing hydrogel undergo homolytic cleavage under visible light and produce dormant dithiocarbamyl radical intermediates. When the fracture surfaces were brought into contact with each other, the dithiocarbamyl radicals broke and recombined on the re-contacted surfaces through exchange and transfer reactions, and the covalent S-S bonds were reconstructed to heal the fracture surface. The self-healing efficiency reached 97%, and the hydrogel could be stretched to 42.6 times its original length. Usually, disulfide bonds are combined with other covalent or noncovalent bonds to enhance the mechanical properties and self-healing efficiency of hydrogels. Dang et al. [95] prepared a healable ionic hydrogel from acrylic acid (AA), choline chloride and ferric chloride through a simple, fast process; its self-healing properties arise from the contributions of disulfide bonds, hydrogen bonds and coordination bonds in the hydrogel. Such hydrogels can be used directly as wearable sensors to monitor human movement [123].

Boronic/Boronate Ester Bond
Boronic/boronate ester bonds are formed via the combination of boronic acid with 1,2- or 1,3-diols. These bonds can be formed or broken reversibly depending on the pH of the aqueous medium. Boronic acid can selectively bond to diols to form boronic esters or boronate esters; therefore, boronic acid can be applied in sensors, as a component of drug delivery systems and in self-healing materials [96]. In polymer networks containing boronic ester bonds, these bonds undergo facile bond exchange via associative or dissociative mechanisms. Lu et al. [124] designed a self-healing hydrogel with boronic ester bonds as the driving force. The process consisted of the chain copolymerization of 3-acrylamidophenylboronic acid (AAPBA) and acrylamide (AM), covalently cross-linked with hydroxypropyl guar gum (HPG). The tensile strength increased with increasing AAPBA, HPG and AM content. In the hydrogel, the phenylboronic acid groups of AAPBA combined with the 1,3-cis-hydroxyl moieties of HPG to form dynamic covalent phenylboronic acid (PBA)-diol ester bonds, endowing the hydrogel with good self-healing and tensile properties. The hydrogel (cut into two pieces) could be completely restored within 30 min at room temperature. The formation of the PBA ester bonds was found to depend on the pH value: in an acidic environment AAPBA did not react with HPG and the hydrogel had poor tensile stress, whereas when the pH was higher than 8.2 stable boronic ester bonds were formed in the hydrogel.
In addition, boronate ester bonds are another dynamic covalent bond, formed by free boronic acid and diols. In many studies, borax has been used as an alternative to boric acid, combining with diols to form dynamic B-O bonds, which are usually regarded as boronate esters. Because borax hydrolyzes in water to form boric acid and borate ions, it has been widely used as a cross-linking agent in the preparation of PVA-borax hydrogels. Lu et al. [125] exploited the reversible dynamic boronate bond by mixing microfibrillated cellulose (MFC), obtained by ball milling, with borax and then adding a PVA solution to prepare a pH-responsive self-healing hydrogel. The hydrogel containing 3.0% MFC could be stretched by 3000%, while the hydrogel without MFC broke easily, indicating that the MFC improved the mechanical properties of the hydrogel. The self-healing process of the two hydrogels was then observed, as shown in Figure 3c: after 10 min, the broken hydrogel had healed. Moreover, the hydrogel was pH-sensitive, showing a repeatable sol-gel phase transition depending on the pH. However, traditional self-healing hydrogels have long self-healing times; modifying the components of the hydrogels can greatly shorten the self-healing time and increase their conductivity [96]. In such hydrogels, dynamic boronate bonds are formed between PVA and phenylboronic acid groups; separated hydrogel pieces placed in contact for only 15 s could join together and heal.

Diels-Alder Reaction
The Diels-Alder (DA) reaction [126], also known as diene addition, is the reaction of a conjugated diene with a dienophile to generate a substituted cyclohexene. The DA reaction, as one of the "click chemistry" reactions, plays an important role in the preparation of various functional hydrogels due to its high efficiency, high selectivity and lack of side reactions and byproducts [127]. Additionally, the DA reaction is atom-economical and generally requires no catalyst or initiator. Interestingly, the DA reaction is reversible under certain conditions (e.g., at elevated temperature or in organic solvents); hence, it has been used for the preparation of hydrogels. DA reaction-based hydrogels can heal through the reversible formation and breakage of covalent bonds upon heating: in the damaged hydrogel, the Diels-Alder bonds break upon the application of heat and the chains become elastic at high temperature; the elastic chains move to the fracture site and re-form the Diels-Alder bonds as the temperature decreases, and self-healing occurs as the network is re-formed. Shao et al. [97] reported a tough, highly elastic and fast self-healing hydrogel with an interpenetrating network formed by the Diels-Alder click reaction; the synthesis process is shown in Figure 3d. The furan groups on the modified CNFs and the maleimide end groups of the polyethylene glycol form thermally reversible covalent bonds. As a reinforcing agent and chemical cross-linking agent, the CNFs improve the mechanical properties of the hydrogel.

Conductive Categories of Self-Healing Hydrogel for Flexible Sensors
Wearable flexible devices are one of the main application fields of flexible sensors, which requires the matrix materials of the sensors to have good biocompatibility. Currently, most sensor devices are based on inorganic materials (such as metals and silicon) with good conductivity.
Conductive Categories of Self-Healing Hydrogel for Flexible Sensors

Wearable flexible devices are one of the main application fields of flexible sensors, which require the matrix materials of the sensors to have good biocompatibility. Currently, most sensor devices are based on inorganic materials (such as metals and silicon) with good conductivity. However, the physical and chemical properties of these inorganic materials differ significantly from those of biological tissues. Inorganic sensor devices may cause inflammatory reactions when in direct contact with the skin, and the signals collected may be inaccurate. Conductive hydrogels with self-healing properties have shown great potential in sensor devices due to their appropriate electrical and mechanical properties, long service life and good biocompatibility. However, most polymer networks in hydrogels are insulating [128,129], so conductive hydrogels are synthesized by the following methods: (1) embedding conductive fillers into an existing nonconductive hydrogel matrix; (2) constructing hydrogel networks by self-polymerization or self-assembly of conductive polymers; and (3) diffusing free ions. Conductive hydrogels with self-healing properties can significantly prolong the service time of electronic devices. Many conductive hydrogels with high self-healing performance rely on the intrinsic repair method, achieved by designing reversible (weak) interactions into the polymer network. Under low external stress, the weak bonds break first and absorb energy to protect the covalent polymer network. When the covalent polymer network of the hydrogel is damaged under higher external stress, the reversible bonds reform to restore the properties of the hydrogel.

Self-Healing Hydrogel with Conductive Fillers

The most convenient way to obtain a highly conductive self-healing hydrogel is to embed conductive fillers into an existing nonconductive hydrogel matrix. The conductive fillers are suspended in the hydrogel precursor solution, and polymerization and cross-linking then form a conductive network in the hydrogel. Typical conductive fillers include metal nanomaterials, carbon nanomaterials, and transition metal carbides and carbonitrides. When a nanoscale conductive filler is uniformly dispersed in the hydrogel matrix, stress concentration is avoided, so the mechanical strength of the hydrogel can be significantly increased. In addition, the type and content of the conductive fillers used in synthetic self-healing conductive hydrogels, as well as the surface modification and cross-linking methods used, have a significant impact on the properties of hydrogel sensors, such as conductivity, stretchability, toughness, fatigue resistance and self-healing [130,131]. However, the conduction mechanism of conductive filler-based hydrogels is complex, and different mechanisms must be combined to explain the observed behavior. The mechanisms commonly invoked for conductive filler-based hydrogels are contact conduction and tunneling conduction [132]. As the amount of conductive filler increases, the conductivity rises gradually; when the filler content reaches a critical volume fraction, the conductivity increases sharply. This phenomenon is called the "percolation threshold". As the filler content continues to increase beyond it, however, the conductivity no longer increases significantly [133]. In conductive filler-based hydrogels, the fillers are either in contact with each other, forming conductive network pathways, or not in contact, existing in isolation or as aggregates.
However, when the spacing between the conductive fillers is very small, the electrons of the conductive filler may be activated by thermal vibration due to the interaction between the filler particles. The activated electrons absorb energy and jump across the barrier of the thin polymer layer to the adjacent conductive filler, thereby forming a tunneling current and a conducting path. This is the electron-tunneling conduction mechanism [134].

Metal-Based Nanomaterials

Metal nanomaterials (such as metal and metal oxide nanoparticles, nanowires and nanorods), one of the preferred raw materials for the preparation of functional conductive hydrogels, offer high conductivity, useful optical and catalytic properties and easy processing [135-139]. Contact conduction theory can explain the conduction mechanism of metal-based hydrogels. When the metal filler content is below the percolation threshold, only a local conductive network forms inside the hydrogel and its conductivity is low. As the filler content rises above the percolation threshold, a complete conductive network forms inside the hydrogel and the conductivity increases significantly. Incorporating metal nanoparticles into hydrogels improves their strength but leaves the fillers unbonded to the polymer matrix. To address this issue, some researchers have proposed solutions. He et al. [140] fabricated a conductive hydrogel made of PVA and in situ reduced Au nanoparticles, which achieved self-healing without external stimuli through hydrogen bonding and reversible metal-ligand coordination. Furthermore, the conductive hydrogel had high mechanical toughness (maximum compressive strength of 7.26 MPa). To obtain a shorter healing time and better conductivity, a silver/reduced graphene oxide (Ag/rGO) composite was combined with PVA-borax [141]. This hydrogel contains many hydrogen bonds and can heal itself within 3 s at room temperature without any external stimulation. Ding et al. [142] proposed a self-healing hydrogel-based sensor with conductive, antibacterial and self-healing properties, consisting of hydrophobically modified polyacrylamide (HMPAM), bis(acryloyl)cystamine (BACA)-modified silver nanowires (AgNWs) and dextran, as shown in Figure 4a. This sensor had an ultralow strain detection limit (0.05%), a wide strain sensing window (0.05-1200%), a wide operating frequency range and superior cycle stability (small resistance changes over 200 cycles). The Young's modulus of the hydrogels increased with increasing AgNW content, as shown in Figure 4b. Nevertheless, the hydrogels with different AgNW contents all exhibited quite low Young's moduli (10~90 kPa) and were very sensitive to strain signals. Meanwhile, Figure 4c shows the many reversible noncovalent interactions and hydrogen bonds in the hydrogel network, as well as reversible Ag-S coordination bonds, which helped to improve the self-healing and mechanical properties of the hydrogel. Furthermore, Figure 4d shows that the HMPAM/Dex/AgNW nanocomposite hydrogels have outstanding compression performance. According to the conductivity experiments, the conductivity increased with AgNW content and reached a maximum of 1.0 S m−1 (Figure 4e). Notably, this was the first sensor demonstrated to recover its sensing properties after self-healing.
Although AgNWs have excellent conductivity and easy processability, research on hydrogels filled with AgNWs is still very limited, mainly because (i) the mechanical properties of the hydrogel are reduced when the filled AgNWs are not uniformly dispersed in the hydrogel polymer matrix, (ii) few processes exist for patterning AgNWs on the hydrogel surface and (iii) the interfacial bonding between AgNWs and the hydrogel matrix is weak. Zhu et al. [143] proposed an easily patterned, highly conductive self-healing hydrogel sensor by dispersing AgNWs in a highly viscoelastic hydrogel matrix. The mechanical properties of this hydrogel were superior to those of other hydrogels, and the fracture stress reached 3.3 MPa. The hydrogel sensor had a gauge factor of 58.2 and could detect human motions. Therefore, the introduction of metal nanomaterials into hydrogels can effectively improve conductivity and mechanical properties [144,145]. However, precious metal conductive materials (such as gold and platinum) are usually expensive, which severely limits their large-scale utilization. In addition, metals are prone to corrosion in humid environments, degrading the electrical properties of the hydrogels, which greatly hinders their potential application in the field of bioelectronics.

MXene-Based Nanomaterials

MXene (transition metal carbide and carbonitride) nanosheets [146], a new two-dimensional material, have several excellent properties: high conductivity, good mechanical properties and water solubility. They are widely used in supercapacitors [147,148] and sensors [31]. Because MXene carries abundant hydrophilic groups, these groups firmly combine the MXene nanosheets with the hydrogel network through multiple physical interactions, thereby improving the mechanical properties [149,150]. In addition, the good water solubility of MXene nanosheets allows them to be evenly distributed in the hydrogel network without agglomerating, forming a stable conduction network [151]. According to contact and tunneling conduction theory, the conductivity of an MXene-based hydrogel increases with increasing MXene nanosheet content, reaching an optimum once the percolation threshold is exceeded. Combined with effective polymer interactions, an effective tunneling current is achieved between the MXene nanosheets, resulting in high hydrogel conductivity. At the same time, deformation of the hydrogel changes its conductivity. Specifically, under tensile deformation, the spacing between the MXene nanosheets in the hydrogel increases, which decreases the conductivity; in contrast, under compressive deformation, the spacing between the MXene nanosheets decreases, which increases the conductivity of the hydrogel.
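The strain-to-conductivity coupling just described is what a hydrogel strain sensor reads out, usually summarized as a gauge factor GF = (ΔR/R0)/ε. A minimal first-order sketch, borrowing GF = 58.2 from the Zhu et al. AgNW hydrogel above purely as an illustrative value; real hydrogel sensors are typically nonlinear over wide strain ranges.

```python
# Linear piezoresistive model: relative resistance change for a given
# engineering strain, dR/R0 = GF * eps. The GF value is illustrative.
def relative_resistance_change(strain: float, gauge_factor: float = 58.2) -> float:
    return gauge_factor * strain

for strain in (0.01, 0.10, 0.50):
    print(f"strain {strain:.0%}: dR/R0 = {relative_resistance_change(strain):.2f}")
```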
Exploiting this strain-conductivity coupling, Zhang et al. [152] added MXene nanosheets (Ti3C2Tx) to a matrix containing PVA, water and anti-dehydration additives, and the obtained hydrogel had excellent tensile strain sensitivity, self-healing properties and conductivity. The self-healing of the hydrogel was achieved through hydrogen bonding. The sensitivity measured in compression was found to be higher than that in tension, and this asymmetric sensitivity allowed some movement directions and speeds at the sensor surface to be detected more accurately. A self-healing hydrogel suitable for low temperatures has also been proposed [153]. The hydrogel polymer network was composed of PVA, MXene nanosheets and polypropylene amine (PAAM) (Figure 5a). The MXene nanosheets added to the hydrogel matrix formed a three-dimensional conductive network, which facilitated electron transport and gave the hydrogel excellent conductivity. The self-healing process of the hydrogel is shown in Figure 5b: two semicircular hydrogels of different colors healed together through dynamic cross-linking and molecular interactions, and the healed hydrogel did not break when stretched again. Meanwhile, to further investigate the effect of self-healing on the electrical performance, a circuit with a red LED indicator was designed (Figure 5c). When the hydrogel in the circuit was cut completely, the red LED switched off immediately; after the two fractured parts were re-contacted and healed, the circuit was restored and the red LED lit up again. A wider strain range helps to expand the applications of hydrogels. Wei et al. [154] proposed a ternary hybrid network hydrogel composed of TA-modified CNFs, a conductive MXene nanosheet network and a covalently cross-linked PAAM network. It contained a large number of hydrogen bonds and dynamic borate bonds, which not only provided the self-healing properties of the hydrogel but also improved its tensile properties. The hydrogel also had a wide working strain range and high sensitivity, making it suitable for human motion monitoring. However, many problems remain in the application of MXene-based flexible hydrogels. For example, some hazardous chemicals are used in the preparation of MXene nanosheets, which may pollute the environment and introduce harmful substances. Moreover, the network of the prepared MXene-based flexible hydrogels is relatively weak, limiting their mechanical properties and thus their applications [155,156]. Therefore, for MXene nanosheets to be used more widely in hydrogel sensors, their preparation methods need to be improved.

Carbon-Based Nanomaterials

The aspect ratio of CNTs exceeds 1000, which allows them to achieve electron transfer at lower voltages [168]. As the CNT content in the hydrogel system increases, the conductivity of the hydrogel also increases [169]. Here, contact conduction and tunneling conduction can be used to clarify the conduction mechanism of the hydrogels. However, the random distribution and easy aggregation of CNTs limit the properties of the hydrogels, and modification of CNTs is the key to overcoming the poor compatibility between CNTs and the polymer matrix [170,171]. Han et al. [172] reported a multifunctional conductive hydrogel based on PVA-borax and a CNT-CNF composite, in which borax was used as a cross-linker to make the hydrogel mechanically tough and self-healing.
When the CNT content was below 0.3 wt%, no complete conductive network formed within the hydrogel, resulting in low conductivity. When the CNT content was increased to 0.5 wt%, the conductivity of the composite hydrogel rose rapidly to 8.0 ± 0.5 S m−1. The results show that the percolation threshold of the CNT content was 0.3 wt%. As the CNT content continued to increase, conductive channels formed within the hydrogel, enabling the contact conduction mechanism. In addition, cellulose nanofibers (CNFs) not only act as a dispersant to stabilize the dispersion of CNTs in the hydrogel but also enable an effective tunneling current between CNTs, achieving high conductivity at low CNT content. In this hydrogel network, borax formed borate ester bonds with the PVA chains, and the CNFs contributed to its rapid self-healing ability.
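The threshold behavior reported here can be summarized with the standard percolation scaling law, sigma = sigma0 * (phi - phi_c)^t above the threshold phi_c. The sketch below uses the CNT numbers above (threshold ~0.3 wt%, ~8 S m−1 at 0.5 wt%) only to fix an illustrative prefactor; the exponent t = 2.0 is a typical three-dimensional percolation value, not a fitted one.

```python
# Percolation scaling of conductivity with filler loading (illustrative).
PHI_C = 0.3                           # percolation threshold, wt%
T = 2.0                               # assumed 3-D percolation exponent
SIGMA0 = 8.0 / (0.5 - PHI_C) ** T     # prefactor anchored to ~8 S/m at 0.5 wt%

def sigma(phi: float) -> float:
    """Conductivity (S/m); essentially zero below the threshold."""
    return SIGMA0 * (phi - PHI_C) ** T if phi > PHI_C else 0.0

for phi in (0.2, 0.3, 0.4, 0.5, 0.8):
    print(f"{phi:.1f} wt%: sigma = {sigma(phi):6.2f} S/m")
```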
Similarly, these two conduction theories remain applicable to the conduction behavior of carbon-based hydrogels. Wang et al. reported a conductive self-healing hydrogel with adhesive properties obtained by adding dopamine (DA) [173]. It was found that MWCNTs could be uniformly dispersed in the hydrogel network owing to π-π interactions between DA and the MWCNTs. In addition, multiple hydrogen bonds formed, enabling rapid self-healing of the hydrogel. The hydrogel also exhibited good adhesion in the presence of DA, which improved the comfort of the sensor. In addition, Gao et al. [174] proposed a multifunctional conductive hydrogel composed of a PAM/CS composite network, shown in Figure 6a. The PAM network was cross-linked by hydrophobic associations, and the CS network was ionically cross-linked by MWCNTs; the two networks were further interconnected by physical entanglement and hydrogen bonding. Because the dynamic cross-linking network effectively dissipated energy, the prepared hydrogel exhibited excellent flexibility, adhesion and self-healing. After the two cut samples of hydrogel were re-contacted for 48 h, they were completely self-healed. Moreover, for the hydrogels with different c-MWCNT contents, the tensile curves after healing for 48 h coincided with those of the original samples, as shown in Figure 6b (solid lines: pristine hydrogel; dashed lines: hydrogel after healing for 48 h). The conductivity of this hydrogel increased dramatically as the c-MWCNT content increased from 0.5 wt% to 1 wt%; when the content was increased further to 1.5 wt%, the gain in conductivity was no longer significant. The drawback is that, although increasing the c-MWCNT content increases the conductivity, the tensile properties of the hydrogel deteriorate. Significantly, the hydrogel could be simply assembled into a wearable sensor that accurately monitors human motions, such as elbow, neck and knee joint movements, as shown in Figure 6c-f.
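In such joint-monitoring demonstrations, the raw sensor output is a resistance trace, and motion events appear as excursions of the relative resistance change ΔR/R0. A minimal sketch of that readout step, with an invented trace and threshold (not data from the papers cited above):

```python
# Flag samples where the relative resistance change exceeds a threshold,
# as a stand-in for detecting joint-bending events (illustrative data).
def detect_motion(resistance, r0, threshold=0.2):
    return [i for i, r in enumerate(resistance) if (r - r0) / r0 > threshold]

trace = [1.00, 1.01, 1.35, 1.42, 1.05, 1.00, 1.30]  # normalized resistance
print(detect_motion(trace, r0=1.0))  # -> [2, 3, 6]
```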
Graphene oxide (GO) nanosheets contain abundant hydroxyl, epoxy and carboxyl groups on their surfaces and have been used as cross-linkers to prepare conductive hydrogels. A copolymer hydrogel double-cross-linked by laponite and GO could achieve repeated healing [175]. Used as the electrolyte in a supercapacitor, this hydrogel not only had ultrahigh mechanical tensile properties (1000%) but could also be healed repeatedly under infrared light irradiation and heating. Xia et al. [176] prepared a conductive self-healing hydrogel with a physically cross-linked network. Using FeCl3 as the cross-linking site, the hydrogel was formed from PAA, CS and GO in a solvent mixture of water and glycerol. The conductivity of the PAA/CS/GO/Gly hydrogel could reach 5.6 ± 0.25 × 10−3 S cm−1, attributed to the effect of GO and ions (Fe3+, Cl−). These hydrogel sensors also had a rapid response time (40 ms) and a moderate gauge factor (GF) of 1.138. In addition, the hydrogel self-healed rapidly through coordination interactions and hydrogen bonds: after 1 h of self-healing, the stretch curve of the healed hydrogel was almost identical to that of the original sample. Wang et al. [177] prepared a new self-healing conductive hydrogel with fast self-healing ability and good conductivity (10.5 mS dm−1). This hydrogel was synthesized from GO, soluble starch and poly(sodium 4-vinyl-benzenesulfonate-co-N-(2-(methacryloyloxy)ethyl)-N,N-dimethylbutan-1-aminium bromide) (P(NaSS-co-MOBAB)). It is worth noting that the conductivity of the hydrogel could be restored after self-healing: experimental tests showed that after 10 cut-healing cycles, the hydrogel recovered 80% of its original conductivity.

Representative conductive self-healing hydrogels reported in the literature are summarized below:

System | Conductive species | Dynamic interactions | Conductivity | Application | Ref.
CS/DA/GO | electron | hydrogen bonds, π-π stacking | 1.2 × 10−3 S cm−1 | engineering applications | [189]
rGO/AM | electron | covalent bonds, hydrogen bonds | 27.2 S m−1 | artificial skin, soft robotics | [190]
PNIPAM/Laponite/CNT | electron | electrostatic interactions, hydrogen bonds | 0.17 S m−1 | wearable sensors | [191]
PAM/MWCNTs | electron | hydrophobic interactions, hydrogen bonds | 5.6 ± 0.5 S m−1 | wearable medical monitoring | [192]
GOxSPNB | electron/ion | electrostatic interactions, hydrogen bonds | 10.5 mS dm−1 | conductive adhesive materials | [177]
PAA/GO/Ca2+ | electron/ion | hydrogen bonds, ionic interactions | 257.31 kΩ (resistance) | wearable biosensors | [193]
AlgPBA/PVA/PAM/rGO | electron | covalent ester bonds, hydrogen bonds | 0.0525 S m−1 | e-skins, healthcare monitoring | [194]
PVA/FSWCNT/PDA | electron/ion | hydrogen bonds, π-π stacking | — | wearable sensors | [195]

Abbreviations: proanthocyanidins (PC), acrylamide (AM), partially reduced graphene oxide (pRGO), functionalized single-wall carbon nanotubes (FSWCNT), carbon black (CB), 2,2,6,6-tetramethylpiperidine-1-oxyl-oxidized CNFs (TOCNFs), graphene nanocomposites (GN), egg white (EW), sodium polyacrylate polymer particles (SAP), β-cyclodextrin (β-CD), zinc phthalocyanine tetra-aldehyde (ZnPcTa), glycerol (Gly), pentafluorophenyl acrylate (PFPA), N-isopropylacrylamide (NIPAM), N,N-dimethylacrylamide (DMA).
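Because the conductivities in this table are reported in mixed units (S cm−1, S m−1, mS dm−1, and in one case a resistance), direct comparison requires normalizing to a common unit. A small sketch of that conversion, taking the units as printed:

```python
# Convert mixed conductivity units to S/m for side-by-side comparison.
TO_S_PER_M = {"S/cm": 100.0, "S/m": 1.0, "mS/dm": 1e-2}

samples = [("CS/DA/GO", 1.2e-3, "S/cm"),
           ("rGO/AM", 27.2, "S/m"),
           ("GOxSPNB", 10.5, "mS/dm")]

for name, value, unit in samples:
    print(f"{name:10s}: {value * TO_S_PER_M[unit]:.3g} S/m")
```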
Self-Healing Hydrogel with Conductive Polymers

The conjugated structures of conductive polymers, however, are inherently rigid and natively hydrophobic, making them incompatible with the hydrophilic polymer matrix; as a result, the conductive component tends to aggregate and distribute inhomogeneously. The insufficient, weak interactions between the two ingredients usually result in weak mechanical performance of the hydrogel and poor adaptability to large deformations, which seriously impedes practical application in wearable strain sensors. Additionally, in the intrinsic state of a conductive polymer, the π electrons of the conjugated structure cannot readily migrate along the molecular chain when unexcited, so the conductivity is limited and must be improved by chemical doping [205]. Specifically, the essence of doping is that the polymer chain with a conjugated structure undergoes charge transfer or a redox reaction with the dopant, so that the resulting charge carriers can move along the direction of the molecular chain, significantly improving the conductivity of the polymer [206]. The conduction mechanism of a conductive polymer depends on the type of dopant and can generally be divided into a charge-transfer mechanism and a protonic acid mechanism [207,208].

One conduction mechanism is charge transfer, which operates when conductive polymers are modified with oxidizing dopants such as metal salts (FeCl3) and halogens (I2, Br2). For example, FeCl3 acts as a p-type dopant, taking electrons from the large π bonds of the polymer, which reduces the hindrance to hole-electron migration and thereby increases the conductivity [209]. Ding et al. [210] reported a new strategy for the design and preparation of a multifunctional hydrogel. Specifically, PPy was assembled onto the surface of CNFs and then mixed with a PVA/boric acid solution. The results show that the PPy was well dispersed in the hydrogel, forming a continuous conductive network that promoted the tunneling of charges between adjacent PPy chains. The conductivity of the hydrogels increased from 1.5 to 4.8 S cm−1 with increasing PPy content. The obtained hydrogel exhibited high water content (~94%), low density (~1.2 g cm−3) and rapid self-healing ability.

The other conduction mechanism is the protonic acid mechanism. Commonly used protonic acid dopants are HCl, H3PO4 and H2SO4, or other non-oxidizing Lewis acids (BF3) [211]. In this case, there is no electron transfer between the polymer chain and the dopant; instead, a proton from the dopant attaches to a carbon atom of the main polymer chain, changing the charge distribution on the polymer chain [212]. Typically, such hydrogels are prepared by free radical polymerization [213-215]: polymer monomers, oxidizing agents (APS) and/or dopants complete the polymerization and are homogeneously dispersed into the hydrogel matrix by the cross-linking agents to form a complete conductive network. Yang et al. [216] proposed a self-healing hydrogel with good extensibility, using trypan blue (TB) as the cross-linking agent to form a semi-interpenetrating network with PAA and PPy. The PAA support structure and the PPy molecular chains were well connected through the large π-conjugated rings of TB, forming an interconnected conductive network; the conductivity of this hydrogel was therefore equivalent to that of a pure PPy hydrogel, up to 15 S m−1. Its elongation at break exceeded 750%, and the broken hydrogel could recover more than 60% of its properties within 10 s. A self-healing hydrogel can also acquire antibacterial properties from PPy and Zn-functionalized CS molecules cross-linked with PVA [217].
The conductive component CS-PPy was synthesized by graft polymerization of PPy onto double bond-decorated chitosan via a free radical route. When the CS-PPy content was 1%, the conductivity of the hydrogel reached a maximum of 1.16 S cm−1. The reason for this is that the content of the conductive component reached the electrical percolation threshold, and the PPy molecular chains formed a connected conductive network within the hydrogel. In the hydrogel, multiple noncovalent interactions (hydrogen bonds and zinc-based coordination bonds) also endowed the hydrogel with self-healing properties. In addition, the introduction of Zn ions enhanced the antibacterial properties of the hydrogel. In situ polymerization of conductive polymer monomers onto nanostructured flexible templates (CNFs, CNCs) yields hydrogels with a stable, flexible, continuous conductive network, thereby improving their electrochemical and mechanical properties [218-221]. Han et al. [222] prepared PANI/CNF nanocomposites by in situ polymerization of aniline on CNFs. The nanocomposite was then introduced into a borax-cross-linked PVA hydrogel to prepare a self-healing hydrogel with good ductility and excellent conductivity (Figure 7a). As shown in Figure 7b, as the mass ratio of aniline (ANI) monomers to CNFs increased, the hydrogel conductivity increased from 2.5 to 5.2 S m−1, a nonlinear enhancement of conductivity with increasing PANI content. It was also demonstrated that the PANI/CNF complex was well dispersed in the PVA system, resulting in the construction of an effective conductive pathway. The hydroxyl groups and dynamic reversible cross-links in the hydrogel network enabled the hydrogel to recover quickly, within 15 s, as shown in Figure 7c.
Figure 7d shows that the electrical pathways inside the hydrogel were maintained during stretching, indicating the high stability and stretchability of the hydrogels. In addition, Song et al. [223] added a CNC-PANI polymer (prepared by in situ polymerization) into PVA/borax to prepare a hydrogel. The separated hydrogel recovered quickly without any external stimulus through the effect of hydrogen bonds and dynamic borate bonds, and the sensor made from this hydrogel was sensitive to tiny movements of the human body (swallowing, bending of fingers or joints).

Ionic Self-Healing Hydrogel

A hydrogel consists of a three-dimensional framework and a large amount of water (water content above 90%). Such a structure provides many channels for ion migration, which makes it possible to synthesize excellent ion-conductive hydrogels. Ion-conductive hydrogels are usually prepared by incorporating inorganic salts (e.g., LiCl, NaCl or KCl) [224,225] into the hydrogel network, balancing the conductive and mechanical properties of the hydrogel [226]. Inorganic salts are strong electrolytes, dissolving in water to form freely moving anions and cations. Under an electric field, the cations generated within the hydrogel move along the field direction and the anions in the opposite direction, enabling rapid ion transport and giving the hydrogel good ionic conductivity [227]. Extensive research has shown that the mechanical strength of polyvinyl alcohol-based or chitosan-based hydrogels can be significantly improved by exploiting the salting-out effect of NaCl; the method currently used is to soak the prepared hydrogel in NaCl solution. For instance, a self-healing hydrogel with a semi-interpenetrating polymer network was prepared using carboxymethyl cellulose (CMC), NaCl and PAM, as shown in Figure 8a [228]. When the NaCl concentration reached 0.9 M (unsaturated), the salting-out effect occurred in the hydrogel, causing more hydrogen bonds to form between the CMC and PAM chains. During stretching, these hydrogen bonds act as sacrificial bonds to dissipate energy, significantly enhancing the mechanical properties of the hydrogel. Moreover, the NaCl introduced into the hydrogel promoted hydrogen bond formation between the carboxymethyl CS and PAM chains, subsequently improving the mechanical properties (Figure 8b) and self-healing properties of the hydrogel (Figure 8d). In addition, the NaCl solution gives the hydrogel better water retention and freezing resistance. Yuan et al. [229] proposed a PAA/2-hydroxypropyltrimethyl ammonium chloride chitosan (HACC) self-healing hydrogel prepared by in situ polymerization in NaCl solution. The hydrogel showed excellent mechanical properties (fracture stress of 3.31 MPa, Young's modulus of 2.53 MPa and compressive stress of 60 MPa). When the broken hydrogel was placed in NaCl solution, the self-healing efficiency reached 61%. In addition, the hydrogel was rich in sodium and chloride ions and had high ionic conductivity.
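As a sanity check on the magnitudes involved, the salt contribution to conductivity can be bounded with the dilute-solution relation sigma = sum of c_i * lambda_i, using limiting molar ionic conductivities at 25 °C; real hydrogels fall below this bound because the polymer network hinders ion transport. A minimal sketch for the 0.9 M NaCl case above:

```python
# Upper-bound estimate of ionic conductivity for a fully dissociated
# 1:1 salt, sigma = c * (lambda_cation + lambda_anion), dilute limit.
LAMBDA = {"Na+": 50.1e-4, "Cl-": 76.3e-4}  # limiting molar conductivity, S m^2/mol

def ionic_conductivity(conc_mol_per_L: float) -> float:
    c = conc_mol_per_L * 1000.0  # convert mol/L to mol/m^3
    return c * (LAMBDA["Na+"] + LAMBDA["Cl-"])

print(f"0.9 M NaCl: <= {ionic_conductivity(0.9):.1f} S/m (dilute-limit bound)")
```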
This kind of salt-containing ion-conductive hydrogel is believed to be an ideal material for fabricating strain sensors [230-233]. LiCl and KCl are also important components for preparing ionic self-healing hydrogels. Lv et al. [234] reported a dopamine-functionalized hyaluronic acid (HAC)/borax/PAM self-healing hydrogel. Abundant conductive ions were introduced into the hydrogel network via a LiCl solution, which improved the conductivity of the hydrogel. In addition, Wu et al. [235] prepared a KCl/PAM/carrageenan self-healing hydrogel. An ethylene glycol/glycerol binary solvent introduced into the hydrogel formed strong hydrogen bonds with the water molecules, giving the hydrogel excellent self-healing properties; the binary solvent also conferred good freezing and drying resistance [236-238]. In summary, introducing conductive particles and conductive polymers into hydrogels can establish a conductive network, but improving the conductivity requires adding more conductive filler, and during stretching the interconnected conductive polymers or overlapping nanomaterials separate irreversibly, causing a significant drop in conductivity. Researchers have therefore turned their attention to preparing ion-conducting hydrogels. Ion-conducting hydrogels not only have better ductility, making them suitable for sensors in more situations, but can also be used in energy storage, broadening the application range of hydrogels. In addition, ion-conducting hydrogels have good biocompatibility, which gives them great potential in the biomedical field.

Conclusions and Perspective

Hydrogel-based flexible sensors have developed rapidly in wearable electronic devices, electronic skins, artificial intelligence and other popular areas due to their high sensitivity and conductivity, strong tensile properties and excellent mechanical properties. To improve the lifetime of hydrogel-based flexible sensors, it is necessary to introduce self-healing properties that can repair structural damage and restore sensing ability, allowing the hydrogel to resist fracture damage under continuous use. Moreover, for hydrogel flexible sensors, conductivity is another important property.
This review has summarized the latest research status and progress in self-healing hydrogel-based flexible sensors, including self-healing mechanisms and conductivity. Self-healing is the best solution for flexible sensors when dealing with damage and cracking. Many factors must be coordinated while improving the conductivity of self-healing flexible materials, such as achieving ultrahigh conductivity while maintaining the flexibility, elasticity, repairability and high transparency of the material. Although great progress has been made in the preparation and study of self-healing conductive hydrogels, which are widely used in artificial skin, artificial intelligence, flexible sensors, wearable devices and other fields, self-healing conductive hydrogels still have limitations and face many challenges, described as follows:

(1) High conductivity and good mechanical and tensile properties are the basic requirements for most hydrogel-based sensors. However, wearable sensors must attach directly to the skin surface in practical applications, requiring the development of conductive hydrogels with additional features, such as self-healing and tissue adhesion. Current research focuses on the design and preparation of self-adhesive hydrogels through the addition of polysaccharides, proteins, polyethylene glycols and polydopamine (PDA). Self-adhesive hydrogels prepared by adding polysaccharides, proteins and polyethylene glycols have good biocompatibility but poor adhesion and toughness. Adding PDA has greatly enhanced the adhesion of hydrogels, but dopamine is easily oxidized, so the adhesion of dopamine-based hydrogels is neither sustainable nor repeatable. In the future, the mussel adhesion mechanism should be investigated in depth, including the interactions between mussel proteins and the effect of multiscale structure on adhesion, to optimize the self-adhesive properties of hydrogel-based sensors.

(2) Although many reports claim good self-healing properties, these have not been evaluated fully and accurately. Standardized experimental evaluation criteria are therefore needed to measure these properties. For instance, in hydrogel self-healing tests, the sample volume, the number of cuts, the healing time and the characterization after healing all need to be reported precisely.

(3) Beyond the materials themselves, more attention should be paid to the packaging, integration and practical application of hydrogel-based sensors. While several strategies, such as surface modification and encapsulation, have been proposed to address these issues, they can affect the mechanical properties of the hydrogel and reduce the conductivity and sensitivity of the sensor. Future research on hydrogel integration should take into account the interface mismatch between the "soft" material and the "hard" encapsulation material, the comfort of the encapsulated hydrogel sensor, and the fatigue resistance and durability of the encapsulation material.

Thus, the development of flexible sensors with excellent comprehensive properties, low cost and simple processing will be of great significance to the further development of wearable electronic devices.
Improving Early Childhood Development among Vulnerable Populations: A Pilot Initiative at a Women, Infants, and Children Clinic

1 Des Moines University, Des Moines, IA, USA; 2 University of Louisville Hospital, Louisville, KY, USA; 3 Blank Children's Hospital, Des Moines, IA, USA; 4 Reach Out and Read Iowa, Johnston, IA, USA; 5 Women, Infants & Children Program, Des Moines, IA, USA; 6 Breastfeeding Coalition of Polk County, Des Moines, IA, USA; 7 UnityPoint Health Iowa, Des Moines, IA, USA; 8 Geneva Foundation for Medical Education and Research, Geneva, Switzerland

Introduction

The first 1,000 days of life (beginning at conception and extending through two years of age) encompass the rapid development, adaptation, and consolidation that take place in brain structure and function, including peak growth in sensory (seeing/hearing), language/speech, and higher cognitive functions [1-3]. When exposed to home environments that foster poor bonding and ineffective levels of stimulation, children at this stage will suffer significant lifetime developmental detriments [1,4-6]. These detriments include less capacity in education and earnings, poorer health and longevity (especially related to chronic disease), and reduced personal and social adjustment and coping, which results in a greater lifetime stress ratio, withdrawal, anxiety, and aggression [1,6]. Exposure to multiple deprivations synergistically increases these consequences of poor early development [4]. Furthermore, children living in poor communities are at the greatest risk of being deprived during this crucial early period [5].

ECD programs within the first 1,000 days have the potential to offset the risk of developmental detriments for vulnerable children, providing better outcomes in terms of health, physical growth, educational attainment, quality of learning, and future societal productivity [7]. Globally, societies that elect to invest in children in the early years, whether developed or not, have the most literate and numerate populations [8]. Not surprisingly, these societies also boast the best health status and simultaneously the lowest levels of health inequality in the world [9]. Moreover, a reduction in inequalities can extend to the next generation as today's children become tomorrow's parents and expose their children to fewer risks, more protective factors, and better opportunities for learning [6]. These ripple effects can extend the benefits of investment in ECD over the lifespan of beneficiaries and their families; in so doing, they are among the most cost-effective investments a country can make, supporting both its people and capital gains [10].

When ECD interventions are delayed, the reduction of harm fails to reestablish the original developmental potential and is much more costly than intervention within the first 1,000 days [1,2,9,11-13]. In this way, developmental trajectories follow a general inertia principle: once set in motion, trends are extremely difficult to reverse, which perpetuates the cyclic intergenerational transmission of poor ECD and poverty [6,13].
Incorporating early childhood development activities into the health system provides opportunities for reaching vulnerable children who manifest behavioral and social issues, poor adaptation, and lower cognitive and educational attainment [9,14,15]. Integrating ECD into established maternal and child health visits ensures cost-effectiveness and time efficiency for both caregiver and health worker [2,16]. Long-term evaluations in the US have found significant effects of early childhood development interventions delivered through the health system and targeted at poor and low-birth-weight children [7,17-19]. Programs using the WHO and UNICEF early childhood development teaching protocol, Care for Development, have confirmed the ability to produce significant impacts on ECD competencies as well as intellectual performance, and have demonstrated high perceived acceptance from both providers and participants [15,20-22]. The Reach Out and Read program promotes early language development and literacy through the primary health care system. Reach Out and Read has resulted in more children's books in the household, increased reading aloud, and improved language development [23-26].

Among the main reasons for the current lack of investment and public health support in early development is the low level of awareness at the policy and program levels of the critical importance of ECD within the window of opportunity (the first 1,000 days) [2,6]. Additionally, there is a lack of awareness of the role that health services can play in promoting the early psychosocial development of children [16], reflecting the need for demonstration projects among vulnerable populations, such as families served by the Federal Special Supplemental Nutrition Program for Women, Infants, and Children (WIC).

Currently, there is a push for testing integrated interventions from the public health fields of nutrition and early childhood development [27]. In previous assessments, both nutritional supplementation and psychosocial stimulation have demonstrated improvements in development, with psychosocial stimulation resulting in improved IQ scores among children who were previously stunted [4]. A review of the integration of interventions in the public and primary health setting reveals that integrated community-based strategies for the prevention and treatment of malnutrition, along with ECD interventions, have strong evidence of significant benefit and have demonstrated decreased malnutrition mortality (by as much as 55%) [16]. There is a strong theoretical rationale for integration on both a logistical and a financial basis; however, a review of the most recent literature on integration calls for more research into the population and nutritional contexts that are most conducive to benefit [27].

There is limited research into the direct integration of early childhood development education into the WIC program; yet collaboration between existing programs has been shown to enhance the delivery of nutritional components [28]. Some challenges to the WIC group setting in the past have been the time-consuming nature of prior materials as well as less interactive demonstrations [28]. The Care for Child Development protocol (an updated form of the Care for Development protocol for teaching ECD) includes many previously validated demonstration and group interaction techniques for the delivery of ECD materials and would seemingly translate well into the small-group WIC setting [15,20-22,29,30].
As the WIC program is primarily focused on nutritional goals, this study demonstrates a pilot ECD initiative utilizing the existing WIC structure and, while encompassing broader aims, seeks to target improvements in early childhood development practices and to examine parental capacity for reception of these materials.

Materials and Methods

2.1. Participants. The cohort selected for participation in the study comprised mothers or fathers with children 2 years of age or younger, as identified through regular maternal and child health visits to the urban WIC center located at 2300 Euclid Avenue, Des Moines, IA 50130. Participants were identified from the rosters of preexisting WIC early education groups, which meet biannually and have long been used by WIC to counsel and provide dietary support to new and recently new mothers/fathers. Primary caregivers are encouraged to attend these WIC sessions, as they receive their WIC dietary supplement checks following the educational group sessions. To be eligible for WIC participation, a parent must have a pretax annual household income below a predetermined local poverty line adjusted for household size (e.g., for a household of three individuals in Iowa, the maximum total income for inclusion was $37,296 in fiscal year 2017). There was no significant geographic or socioeconomic difference between groups, as all participants were already established WIC beneficiaries and as such had been determined (by a healthcare professional) to be parenting a child at "nutritional risk." The majority had no more than a high school education, and the majority of caregivers were mothers (95-97%). Participants were asked to participate at the commencement of the WIC group meetings. Recruitment required a basic understanding of the English language, the ability to write for survey completion, and having a child in the appropriate age range (2 years of age or less). For the purposes of the study, ECD material was added to the existing dietary education curricula of randomly selected groups. Other randomly selected control groups received all survey components, but no ECD material was added to their usual WIC education. There were no age, cultural, or economic constraints on study participation, and all participants were blinded as to which group they were in. Participants could decline participation or withdraw from the study at any time.

2.2. Procedures.
All participants were informed and signed consent documents in the presence of WIC staff. If they elected to be involved in the study, they were advised to return one month after the group session to complete an additional survey. After completing this secondary survey (hereafter: postintervention survey), they would receive their WIC supplement check. This WIC financial supplement is provided to all WIC-involved parents as a baseline public health practice of the organization to assist with feeding their child. If they elected to participate in the study intervention, their WIC check was cut to cover one month of support after the first group meeting, in order to incentivize a return to the WIC clinic for postintervention survey participation one month later. If they returned at one month and completed the postintervention survey assessing home ECD behaviors, they received the remaining portion (2 months) of the financial supplement. If they elected not to participate in the study, they could still participate in the group session as normal; however, they did not receive any ECD intervention or survey material and instead received their usual 3-month WIC supplement check. For the regular WIC groups, mothers are additionally incentivized through materials given out during the sessions, including age-appropriate children's books and educational resources. Both the control and intervention groups received the same incentives.

All group sessions took place in the small group meeting room at WIC, 2300 Euclid Ave, Des Moines, IA. Group sizes ranged from 1 to 8 mothers (some with accompanying children), with variance due to weather and transportation (a usual and anticipated barrier for these Iowa-based low-income populations). This intervention sought to determine, in general terms, whether a pilot ECD intervention would be received well in a group setting at a WIC clinic; as such, groups were treated equally, and we did not seek to measure the impact of group size on the receivability of the ECD material.

For the randomly selected intervention groups, there was 100% voluntary participation at the initial phase, with presentation of the objectives and structure of the project. Only a single participant was excluded from the ECD surveys by staff, due to an inability to complete the survey material stemming from an inadequate understanding of the English language. The total number recruited for the intervention phase, which took place in January 2015, was 37 participants. At one month, 26 participants (70.3%) returned to complete the postintervention survey. The control group sessions took place over the month of February 2015 and included 36 randomly assigned participants. The control group was used to compare receivability and parental capacity at the initial session. One-month outcomes from the control group were limited due to high loss to follow-up in this cohort (25 individuals did not return). For the intervention flow diagram, please see Figure 1. The participants of both the control and intervention groups were blinded. This was a single-blinded study, as WIC staff were exposed to the teaching material and responsible for directing the group sessions and therefore could recognize differences between groups.
2.3. Educational Intervention. The intervention material was adapted from the previously validated Care for Child Development protocol [15,20-22,29]. The complete Care for Child Development module contains activities and learning modules described in Care for Child Development: Facilitator Notes [30]. For the adaptation of the Care for Child Development module, the facilitator notes section was consulted, and the following components were borrowed from the model for parent education: an explanation of the significance of ECD, followed by discussions eliciting information about home behaviors related to early childhood development [30]. Additionally, the following were included: recommendations for play and communication, effective coping mechanisms for stress, and instructions/demonstrations on how to create or use items at home (e.g., toys/puzzles) to support the stimulation of cognitive exercise and ECD advancement [30]. This material was supplemented with Reach Out and Read-supported education encouraging at-home reading practices. Lastly, material was included from the American Academy of Pediatrics' advice on eliminating television exposure for children less than 2 years of age and keeping exposure minimal thereafter [31].

The intervention educational session took place during a single one-hour group session led by WIC staff. Participant involvement took the form of discussion engagement through direct questions, with encouragement of concerns, thoughts, and verbal understanding, similar to the discussion dynamic of the regular WIC education groups. To ensure that the direction of the discussion was aligned with established ECD principles, interactions were coached through the Care for Child Development counseling protocol to guide recommendations and interactions [30]. These interactions included specifics on greeting the mother cordially, making eye contact, encouraging back-and-forth discussion, using positive verbal and body language, demonstrating play activities as identified in the counseling card, and troubleshooting problems [30]. There was no additional training other than that provided through the facilitator notes.

2.4. Survey Collection. ECD behavior surveys were conducted at the initial meeting (preintervention survey) and compared with postintervention surveys completed at one month, similar to the timeline of prior ECD intervention outcome assessments [15]. The background for the pre- and postintervention ECD surveys is the ECD "Supportive Environment in the Home" survey, as published in Care for Child Development: Monitoring and Evaluation Guide under "Tools to evaluate the impact of the intervention" [32]. This survey consists of measures of the home environment that assess pragmatic home ECD practice (e.g., reading aloud, story time, singing songs, exploration, and interactive play). One hundred percent of participants in both study arms completed this initial survey. Additionally, a preexisting WIC survey was offered at the end of the teaching session to assess agreeability among participants and capacity for learning. One hundred percent of intervention-arm participants completed this survey, as did 94.4% of the control arm (2 participants elected to leave without offering input). All surveys were completed on paper, created with Microsoft Word software, and conducted anonymously. Surveys were collected discreetly and remained unopened until the conclusion of the study. To ensure the methods and surveying procedure were carried out in an ethical and acceptable format, IRB approval was obtained from Des Moines University, and exempt approval was granted.

2.5. ECD Pre- and Postintervention Survey for Measurement of Home Behavior Change.
Ordinal ECD responses consisted of four possibilities: "not at all," "few days of the week but not every day," "one or two times every day," and "more than two times, every day." These ordinal responses received values of "0," "1," "2," and "3," respectively. The integer values assigned to these variables were cumulatively grouped into a final ECD score per participant. The preintervention and postintervention surveys of each participant were nested within either the control or the ECD intervention group. Each postintervention survey was matched to the preintervention survey using the participant's child's name and age, and total ECD scores were compared. All survey data that consisted of a matched pair (pre- and postintervention surveys) was retained. Unfortunately, too many were lost from the control group (with only 30.6% returning) to accurately assess one-month outcomes of the control participants. However, 70.3% of the intervention group returned at one month to complete the postintervention survey. Therefore, one-month behavior outcomes were measured for the intervention group and compared with the initial surveys.

In order to compare differences before and after the WIC group session, a one-sided paired-samples t-test was computed for the intervention arm (based on the hypothesis that ideal ECD behaviors would increase in the home environment following the intervention). The null hypothesis was that no difference is expected between pre- and postintervention total survey scores, with an alpha significance level of 0.05.

Perceived Receivability Surveys. For the standard WIC receivability surveys completed at the end of the group sessions, ordinal data was transformed to evenly spaced integers (strongly agree → 5, agree → 4, okay → 3, disagree → 2, and strongly disagree → 1). Three particular measures were assessed: (1) participant-elicited enjoyment of the WIC group sessions, (2) participant-determined learning amount from the group session, and (3) how much participants felt they were able to share in the group setting. Each of these three measures was averaged, nesting the intervention versus the control. Thereafter, a one-way ANOVA test for a difference among independent means was carried out for each of the three measures.

Results and Discussion

3.1. ECD Home Behavior. The average ECD one-month behavior outcomes of the intervention are listed in Table 1. Contained within this table are home behaviors at baseline and changes that took place in the home environment of participants over the one-month interval after the WIC intervention (utilizing data of only the participants that returned for the postintervention survey). The individual outcome scores are group averages for each home behavior (e.g., reading exposure), using ordinal conversions with the following algorithm: not at all → 0; few days of the week, but not every day → 1; one or two times every day → 2; and three or more times every day → 3. Table 1 also displays average differences for each ECD variable (postintervention outcome score minus preintervention outcome score). Overall, 60% of the participants in the intervention group improved their global ECD score (excluding a single participant that did not complete the entire postintervention survey), while twenty percent experienced negative growth.
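For concreteness, the scoring and one-sided paired test just described can be reproduced in a few lines. The sketch below is our illustration with hypothetical data and column names, not the authors' SPSS/Excel workflow.

```python
# Sketch: ordinal coding, per-participant total ECD score, one-sided paired t-test.
import pandas as pd
from scipy import stats

CODES = {"not at all": 0,
         "few days of the week but not every day": 1,
         "one or two times every day": 2,
         "more than two times, every day": 3}

def total_ecd_score(df: pd.DataFrame) -> pd.Series:
    """Map ordinal answers to integers and sum them per participant (row)."""
    return df.apply(lambda col: col.map(CODES)).sum(axis=1)

# Two matched participants, two example behaviors (illustrative values only).
pre = pd.DataFrame({"reading": ["not at all", "one or two times every day"],
                    "play":    ["few days of the week but not every day",
                                "not at all"]})
post = pd.DataFrame({"reading": ["one or two times every day",
                                 "more than two times, every day"],
                     "play":    ["one or two times every day",
                                 "few days of the week but not every day"]})

# H1: postintervention total scores exceed preintervention scores (alpha = 0.05).
t, p = stats.ttest_rel(total_ecd_score(post), total_ecd_score(pre),
                       alternative="greater")
print(f"t = {t:.2f}, one-sided p = {p:.3f}")
```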
Evident in Table 1, there are profoundly positive home ECD behavior changes (86%) for the intervention group over the one-month interval, reaching significance when taken cumulatively for a total ECD score (paired-sample t-test p value = 0.006). This significance found in the ECD total score represents an overall ideal behavior change in ECD home practice following the ECD-supplemented WIC intervention. Due to insufficient follow-up from the control group, we cannot decisively state that this observation was independent.

3.2. Participant Receivability. The data of the WIC survey which evaluated group session receivability are described in Table 2. The two compared groups listed in Table 2 consist of the control group (no change in the preconceived WIC session) and the intervention group (ECD education supplemented to the standard WIC session). All tabulated means are ordinal conversions using the following algorithm: strongly agree → 5, agree → 4, okay → 3, disagree → 2, and strongly disagree → 1. All three individual markers of a successful WIC group session were higher for the intervention group. The means of the receivability measures (Table 2) were analyzed with a one-way ANOVA, with the following results: "I enjoyed WIC group today": p = 0.008; "I learned something today": p = 0.011; "I was able to share something I know with others": p = 0.023. Thus, for all three independent variables of the WIC receivability survey, significance was found at an alpha level of 0.05 for the intervention relative to the control. This translates to a higher participant agreeability of the intervention ECD session. This finding also indicates a high capacity for learning ECD competencies among WIC mothers. Receivability, as a primary measure of this WIC-based pilot intervention (using previously validated ECD material), suggests that the infrastructure provided by WIC could be an ideal setting for ECD intervention. Therefore, the integration of nutritional and ECD material into WIC groups could represent an ideal strategy to target vulnerable populations utilizing WIC services.

3.3. Limitations. The main limitation of this study is the small sample size of the selected WIC groups, increasing the chances for a Type I error in reporting significance. Response rates were high in both intervention and control groups in the assessment of receivability; however, logistically, the behavior change measure presented follow-up concerns. The intervention study, where significance in home behavior was found, received fairly good secondary response rates (70%) (relative to the expected winter logistical challenges for vulnerable Iowa-based WIC populations). The control group, which was independently statistically assessed in relation to home behavior change (so as not to bias intervention analytical results), had a much lower response rate at 31% and was therefore formally excluded from the behavior change analysis.

The primary goal of this intervention was the assessment of the receivability of ECD materials in the WIC setting, while home behavior change was a secondary evaluation. As such, although home behavior change was observed at one month's time in this pilot intervention, ideally this study would be followed by a longer measure (e.g., 6 months to 1 year) with an increased sample size to assess the permanence/long-term sustainability of home ECD behavior changes.
3.4. Recommendations for Future Research. One consistently observed trend within the group discussions was a misunderstanding by parents that certain television programs are acceptable, or even ideal, for the development of their infant to two-year-old child. This is contrary to the American Academy of Pediatrics' firm stance that the safe amount of weekly television for children of less than two years is zero hours [31]. Thus, an intervention needs to specifically target this topic. The initiative should seek to identify barriers to behavioral change, where the misconceptions concerning television arise from, and how best to intervene.

It is also recommended that further research take place on larger-scale interventions that incorporate ECD integration with standard WIC nutritional education. Such studies could develop a national case for broader inclusion of early childhood development practices and education into the already existing WIC infrastructure.

Conclusions

Relative to the control group, this study has discovered significance in the receivability and parental capacity measures (p = 0.008, 0.011, and 0.023). This demonstrates that the incorporation of broader early childhood development education into the WIC setting is well received by parents. Concurrently, cumulative one-month behavior outcomes of the ECD intervention sessions (p = 0.006) are at least optimistic, though limited by a lack of comparison data from the control group due to low follow-up.

The informational group sessions were modeled after the proven Care for Child Development intervention; however, the time spent on the material was brief in comparison to the regular multiday complete Care for Child Development module. The one-month behavior outcomes of this WIC intervention are similar to one-month outcomes using Care for Child Development material delivered through the health care sector [15] and reinforce Care for Child Development's wide applicability.

Furthermore, the home behaviors that increased following the intervention, that is, reading aloud, decreased television exposure, and improved play time (Table 1), are specific behaviors that are part of a more stimulating home environment [15] and carry an established link to better developmental outcomes for children, including higher literacy [24]. Therefore, these observed changes are more than just an adoption of arbitrary behaviors; these changes represent areas that optimize the growth of the developing mind of a vulnerable child.

Moreover, as demonstrated by the WIC receivability surveys, parents of vulnerable children both significantly enjoy and believe they are learning from participation in additional ECD education. High receivability indices among these parents of vulnerable children have demonstrated both a willingness to learn and a high capacity to incorporate key ECD competencies.

This significance observed in the receivability of the intervention, coupled with the early integration of ECD principles into home practice (observed in the significance found between pre- and postintervention surveys), represents parents' overall readiness to enhance the home environment for their child if merely educated in how to do so.
Although the importance of early childhood development is widely acknowledged, there are significant barriers within the current healthcare environment, including low reimbursement rates, time constraints, and a lack of training to conduct these services [33]. However, this study has established that brief and simple early childhood development discussions, even a single encounter, are well received by parents and could be adequate to elicit behavior change at 1 month. Moreover, this study supports that ECD education is not constrained to only the pediatrician's office. Concurrently, this pilot initiative suggests that WIC provides an ideal setting for delivering early childhood development education beyond traditional counseling in nutrition.

Figure 1: Intervention flow diagram.

Table 1: Nested mean scores for supportive early childhood development home behaviors. *One-month behavior outcomes of the intervention group (participants were matched by completion of pre- and postintervention surveys); **ideal early childhood development results are demonstrated by bold differences (one-month outcome mean scores minus initial mean scores); ***paired-sample t-test p value = 0.006.

2.5. Analytic Strategy. Following the completion of survey collection, ordinal variables were transformed into integer values (i.e., each unique answer choice, such as "few days of the week but not every day," was arbitrarily assigned to a uniquely corresponding integer value, such as "1"). Data was then quantitatively analyzed among pre- and postintervention surveys collected from the intervention group. Receivability and parental capacity data for intervention and control were compared with an ANOVA. The data analysis tools were the Statistical Package for the Social Sciences (SPSS) version 22.0, released in 2013, and Microsoft Excel, 2010.

Table 2: Comparison of means for teaching session receivability scores.
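To complement the analytic strategy above, the receivability comparison can be sketched as follows. This assumes the ordinal answers have already been read into Python lists (names are hypothetical); note that with only two groups, a one-way ANOVA is equivalent to a two-sample t-test.

```python
# Sketch: ordinal conversion and one-way ANOVA for one receivability measure.
from scipy import stats

AGREE = {"strongly agree": 5, "agree": 4, "okay": 3,
         "disagree": 2, "strongly disagree": 1}

def receivability_anova(intervention, control):
    """One-way ANOVA for a difference among independent group means."""
    f, p = stats.f_oneway([AGREE[a] for a in intervention],
                          [AGREE[a] for a in control])
    return f, p

# One call per measure, e.g., for "I enjoyed WIC group today":
# f, p = receivability_anova(enjoy_intervention, enjoy_control)
```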
Transient processes in disordered semiconductor structures under dispersive transport conditions: Fractional calculus approach

We continue to develop a new approach to the description of charge kinetics in disordered semiconductors. It is based on fractional diffusion equations. This article is devoted to transient processes in structures under dispersive transport conditions. We demonstrate that this approach allows us (i) to take into account energetic and topological types of disorder in combination, (ii) to consider transport in samples with spatial distributions of localized states, and (iii) to describe transport in non-homogeneous materials with a distributed dispersion parameter. Using the fractional approach provides some specifications in the interpretation of time-of-flight experiments in disordered semiconductors.

Speaking of anomalous transport (AT), we may mean an unusual value of diffusivity or its time- or space-dependence in the framework of the standard diffusion approach. When the diffusivity is highly irregular, it is more convenient to interpret it as a random field and the process itself as a complex process consisting of many normal processes with wide distributions of their characteristics. These processes are denoted by the term dispersive transport (DT). Numerous experiments manifest the presence of universal DT properties which depend only weakly on the detailed atomic and molecular structure of matter [25,26]. Theoretical foundations of this approach were laid by Scher and Montroll (1975). Their model, known as CTRW (Continuous Time Random Walk), has proved to be very fruitful for the description of charge kinetics in disordered semiconductors and was followed by a series of articles that used waiting time distributions of Lévy type [25]. From a physical point of view, dispersive transport may be explained by invoking various mechanisms: multiple trapping of charge carriers into localized states distributed in the mobility gap, hopping conduction assisted by phonons, percolation through conducting states, etc. [25,43,3,44,45]. The variety of approaches reflects the complexity of the systems and processes under consideration. For this reason, the construction of a consistent dispersive transport theory based on first principles is still an unsolved problem. Experimental data revealing the universal behavior of some important characteristics of dispersive transport (e.g., the time behavior of the transient photocurrent) indicate a predominance of statistical laws over dynamical ones.

Interest in non-Gaussian transport theory has recently revived in connection with the observation of anomalous relaxation-diffusion processes in nanoscale systems: nanoporous silicon, glasses doped with quantum dots, quasi-1D systems, and arrays of colloidal quantum dots. These systems are very promising for applications in spintronics and quantum computing. They can also be useful for studying the fundamental concepts of the physics of disordered solids: localization, nonlinear effects associated with long-range Coulomb correlations, occupancy of traps, and Coulomb blockade. Due to the preparation method of colloidal nanocrystals, energy disorder is always present in these systems, which is confirmed by experiments on the fluorescence blinking of single quantum dots (CdSe, CdS, CdSe/ZnS, CdTe, InP, etc.). As shown in some recent papers [36,37], Lévy statistics plays a crucial role in the interpretation of experiments on charge transfer in QD arrays.
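The CTRW picture sketched above is easy to experiment with numerically. The following minimal sketch (our illustration with placeholder parameters, not tied to any cited material) draws heavy-tailed Lévy-type waiting times and biased jumps; it exhibits the strongly asymmetric, slowly advancing packet characteristic of dispersive transport.

```python
# Sketch: biased CTRW with Pareto waiting times, Prob(tau > t) ~ t**(-alpha).
import numpy as np

rng = np.random.default_rng(0)

def ctrw_final_positions(n_carriers=10_000, t_max=1e4, alpha=0.5,
                         step=1.0, p_forward=0.9):
    """Final carrier positions at time t_max for a dispersive (alpha < 1) walk."""
    x = np.zeros(n_carriers)
    for i in range(n_carriers):
        t = 0.0
        while True:
            tau = rng.pareto(alpha) + 1.0   # classical Pareto on [1, inf)
            if t + tau > t_max:
                break                        # carrier is still trapped at t_max
            t += tau
            x[i] += step if rng.random() < p_forward else -step
    return x

positions = ctrw_final_positions()
# Strong asymmetry of the packet: the mean noticeably exceeds the median.
print(positions.mean(), np.median(positions))
```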
It is reasonable to believe that kinetic equations describing such transport processes must have similar forms for different materials. Nevertheless, the Scher-Montroll version of DT for disordered systems is expressed in the form of integral equations, while the standard version for ordinary systems has the form of a partial differential equation. Embedding fractional derivatives in the theory [27,28,29,30,31,32,33,34,35] removed this unwanted feature and opened opportunities for the development of normal and anomalous kinetics in the framework of a unified mathematical formalism.

In this paper, we focus on a subclass of DT processes called fractional dispersive transport (FDT), characterized by differential equations of fractional orders [24], which produce long-tail distributions of the power type. We demonstrate some applications of fractional dispersive transport equations to transient processes in disordered semiconductor structures. In the next section, we list the main modifications of the FDT equation, which describe different situations. Then we briefly consider the time-of-flight methodology, the case of a non-uniform distribution of localized states over the sample, and the case of a medium with a distributed dispersion parameter. We calculate the transient process in a diode under dispersive transport conditions. Using the fractional approach allows us to provide some specifications in the interpretation of the time-of-flight experiment in organic semiconductors. Finally, we consider the influence of topological disorder and percolation on transient current curves.

The family of fractional dispersive transport equations

Here, we list some modifications of the fractional dispersive transport equation. They contain fractional Caputo and Riemann-Liouville derivatives of order α [39,40,41].

• The fractional Fokker-Planck equation for the total concentration of nonequilibrium carriers n(r, t) (see details in [4]),

∂n(r, t)/∂t = 0D_t^{1−α} [C Δn(r, t) − div(K n(r, t))],    (1)

can be used for hopping conduction over the Poisson ensemble of traps and for multiple trapping into band tail states with the exponential energy distribution [32,35,46]. Here, 0 < α ≤ 1 is the dispersion parameter, K ∝ E the anomalous advection coefficient, and C the anomalous diffusion coefficient. For the multiple trapping mechanism, the absolute value of K is expressed through microscopic parameters as K = c^α l, where c = w₀[sin(πα)/πα]^{1/α}, w₀ is the capture rate of carriers into localized states, and µ and D are the mobility and diffusion coefficient of delocalized carriers, so that K = τ₀c^α µE is the dispersive advection coefficient and C = τ₀c^α D the anomalous diffusion coefficient. Parameters τ₀ and l are the average time and length of delocalization, respectively. For variable range hopping, c = ν₀[sin(πα)/πα]^{1/α}, where ν₀ is the characteristic rate of jumps between the traps.

Solutions n_α(r, t) of fractional equation (1) are expressed through the solutions n₁(r, t) of the ordinary Fokker-Planck equation by the relation [48,32]

n_α(r, t) = (ct/α) ∫₀^∞ n₁(r, τ) τ^{−1−1/α} g₊(ct τ^{−1/α}; α) dτ,    (2)

where g₊(t; α) is the one-sided Lévy stable pdf [49], which can be determined by its Laplace transform ∫₀^∞ e^{−st} g₊(t; α) dt = exp(−s^α). Eq. (2) allows us to find analytical solutions in simple cases and to derive a general Monte Carlo algorithm [47].

• The equation for the density of delocalized carriers n_d(r, t) in the case of multiple trapping has the form:

• The transport equation taking into account the recombination of localized carriers is derived [38,4,47] in the form: here, γ is the recombination rate for localized carriers.
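Relation (2) suggests a simple Monte Carlo recipe: sample the one-sided stable variable via Kanter's standard formula, and average the normal-transport solution over the induced random operational time. The sketch below is our reading of Eq. (2), with units and prefactors absorbed into c; the example solution n1 is an arbitrary drifting Gaussian packet, not a formula from the paper.

```python
# Sketch: subordination Monte Carlo for the dispersive solution n_alpha.
import numpy as np

rng = np.random.default_rng(1)

def one_sided_stable(alpha, size):
    """Kanter's method: S >= 0 with Laplace transform E[exp(-s*S)] = exp(-s**alpha)."""
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.sin(u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def n_alpha(n1, x, t, alpha, c=1.0, samples=200_000):
    """Monte Carlo estimate of n_alpha(x, t) from the normal-transport solution n1."""
    s = one_sided_stable(alpha, samples)
    tau = (c * t / s) ** alpha          # random operational time implied by Eq. (2)
    return n1(x, tau).mean()

def n1(x, tau, K=1.0, C=0.05):
    # Example normal solution: drifting, spreading Gaussian packet (assumption).
    return np.exp(-(x - K * tau) ** 2 / (4 * C * tau)) / np.sqrt(4 * np.pi * C * tau)

print(n_alpha(n1, x=1.0, t=10.0, alpha=0.5))
```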
• The fractional dispersive transport equation taking into account the monomolecular recombination of delocalized carriers is obtained in Ref. [47]; there, τ_mr is the monomolecular recombination time and δn the concentration of nonequilibrium carriers.

• The fractional formalism has allowed us to derive the bipolar diffusion equation for dispersive transport in the case of multiple trapping [47]. Here, σ_n = µ_n n_d and σ_p = µ_p p_d are the conductivities of delocalized electrons and holes, σ = σ_n + σ_p; µ_amb = µ*_p µ*_n (n_d − p_d)[µ*_n n_d + µ*_p p_d]^{−1} is the bipolar dispersive drift mobility, and D_amb = (µ*_n n D*_p + µ*_p p D_n)(µ*_n n + µ*_p p)^{−1} the bipolar diffusion coefficient. The fractional bipolar transport equation contains two fractional derivatives of different orders in the general case. This is a particular case of a distributed-order equation.

• In the case of a distributed dispersion parameter, the transport equation for ∂n(r, t)/∂t takes a distributed-order form [47], where ρ(α) is the distribution density of the dispersion parameter.

• For exponentially truncated power-law distributions of localization times in the generalized Scher-Montroll model, one obtains Eq. (7), where γ is a truncation parameter. In this case, localization (waiting) times have a finite variance, the Central Limit Theorem is applicable, and transport at large times is normal. The transition from the dispersive regime to the Gaussian one in the time-of-flight experiment is theoretically described on the basis of truncated Lévy statistics in Ref. [50].

• In the frame of the multiple trapping model, the equation for the delocalized carrier concentration in the case of an arbitrary density of states ρ(ε) and a percolative nature of the conduction ways is obtained in the form given in [4]. The term with the fractional derivative of order β is consistent with the comb model of a percolation cluster [51]. The constant τ_β is the characteristic residence time in the "dead bonds" of a percolation cluster. For hopping in a medium with the Gaussian energetic density of states, the equation (9) for the carrier concentration n_eff(x, t) near the transport layer follows [4]; its second term describes thermally activated hops between localized states distributed with Gaussian density, i.e., ∝ exp(−ε²/2σ²).

Photocurrent decay in the time-of-flight experiment

In classical "time-of-flight" experiments, electrons and holes are usually generated in a sample by a pulse of laser radiation from the side of the semitransparent electrode. The voltage applied to the electrodes is such that the corresponding electric field inside the sample is significantly stronger than the field of the nonequilibrium charge carriers. The electrons (or holes, depending on the voltage sign) enter the semitransparent electrode, while holes (or electrons) drift to the opposite electrode. In the case of normal transport, drifting carriers in the field E give rise to a rectangular photocurrent pulse, where the time of flight t_T is given by the drift velocity v_d and the sample length L: t_T = L/v_d.

Taken together, the scattering of delocalized carriers during the drift, trapping into localized states, and thermal emission of the carriers lead to packet spreading. Such a packet has a Gaussian shape with a mean value x(t) ∝ t and width ∆x(t) ∝ √t. In this case, the transient current I(t) remains constant until the leading edge of the Gaussian packet reaches the opposite edge of the sample. The current decrease takes a time of order ∆x/v_d. As a result, the right edge of the photocurrent pulse becomes smooth.
Such a picture is typical for most ordered materials. However, when determining drift mobility in certain disordered (amorphous, porous, disordered organic, strongly doped, etc.) semiconductors, a specific signal of transient current I(t) is observed, having two regions with power-law behavior of I(t) and an intermediate region:

I(t) ∝ t^{−1+α} for t < t_T;  I(t) ∝ t^{−1−α} for t > t_T.    (11)

The exponent α, termed the dispersion parameter, depends on the medium characteristics and can vary with temperature. The parameter t_T is called the transient time (or time of flight) in analogy with normal transient processes, but it has a different physical sense. It has been shown experimentally [25,12] that in the dispersive transport regime the relationship t_T ∝ (L²/U)^{1/α} takes place, where U is the voltage. As noted in Refs. [25,52], the shape of the transient current signal in reduced coordinates is virtually independent of the applied voltage and sample size. This property, inherent in many (but not all: see [53]) materials, is referred to as the property of shape universality of transient current curves. The occurrence of these features in many disordered materials confirms the universality of transport properties. A large number of experimental observations of this universality were reported both in early and in recent publications (see Refs. [2,3,25,26,16] for details).

The transient photocurrent I(t) in a sample of length L is determined through the conduction current density as I(t) = (1/L) ∫₀ᴸ j(x, t) dx and is related to the one-dimensional concentration of injected carriers n(x, t) by relation (14). Rewriting equation (15) in one-dimensional form and neglecting the diffusion component, we arrive at an equation whose solution is (16), where N is the surface density of injected carriers. Substituting the latter function into Eq. (14), we arrive at expression (17) for the transient current density.

The transient current curves calculated by Eq. (17) are presented in Fig. 1 in comparison with solutions of the Arkhipov-Rudenko τ-approximation [54] and the Arkhipov-Rudenko-Nikitenko diffusion equation [55]. The following parameters have been taken for the calculations: E = 5 · 10⁵ V/cm, w₀ = 10⁶ s⁻¹, µ₀τ₀ = 2.5 · 10⁻¹⁶ m²/V, l = 12.5 nm. Parameters of the fractional equations: 1) α = 0.5, K = 8 µm/s^0.5, L = 75 µm; 2) α = 0.7, K = 73 µm/s^0.7, L = 50 µm; 3) α = 0.9, K = 343 µm/s^0.9, L = 25 µm.

In the case of truncated waiting time distributions, equation (7) leads to expression (18) for the conduction current, where γ is the truncation parameter. The transformation of transient current curves with an increasing L/l ratio is studied in [50]. If the transient time t_T is much smaller than the truncation time γ⁻¹, the transport remains dispersive and does not pass to the Gaussian asymptotics. For t_T ≫ γ⁻¹, transport in the long-time asymptotic regime becomes normal, which manifests itself in the appearance of a plateau. In this case, the power-law tail of the current is not observed. We can meet another situation, when the plateau and the power-law tail are present together in the curves. This fact cannot be explained by the boundedness of the band tail. Possible explanations are given below.

Non-uniform spatial distribution of localized states

Consider the case of an inhomogeneous spatial distribution of localized states.
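Before turning to inhomogeneous samples, a small numerical illustration of how α and t_T are usually read off a measured curve via Eq. (11): fit straight lines to the two branches of log I versus log t; the slopes give −(1 − α) and −(1 + α), and their intersection gives t_T. The data below is a synthetic stand-in for a measurement, with arbitrary parameter values.

```python
# Sketch: extracting alpha and t_T from the two power-law branches of I(t).
import numpy as np

alpha_true, t_T_true = 0.5, 1.0e2
t = np.logspace(-1, 5, 400)
# Synthetic Scher-Montroll-type curve obeying Eq. (11), continuous at t_T.
current = np.where(t < t_T_true, t ** (alpha_true - 1),
                   t_T_true ** (alpha_true - 1) * (t / t_T_true) ** (-alpha_true - 1))

def branch_fit(t_lo, t_hi):
    """Least-squares line through log I vs log t inside a chosen window."""
    m = (t > t_lo) & (t < t_hi)
    return np.polyfit(np.log(t[m]), np.log(current[m]), 1)

s1, b1 = branch_fit(1e-1, 1e1)   # early branch: slope ~ -(1 - alpha)
s2, b2 = branch_fit(1e3, 1e5)    # late branch:  slope ~ -(1 + alpha)
alpha_est = (s1 - s2) / 2        # since s1 - s2 = 2 * alpha
t_T_est = np.exp((b2 - b1) / (s1 - s2))   # intersection of the two lines
print(alpha_est, t_T_est)        # ~0.5 and ~100; also check s1 + s2 ~ -2
```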
When traps are distributed over a sample with density ρ(x), the average number of localization events for one carrier in a layer of thickness x is equal to k(x) = ∫₀ˣ ρ(x′) dx′, and the conduction current density takes a corresponding form, where c is the scale parameter of the localization time distribution. From this, one can find the total concentration of carriers and the transient current.

Different types of spatial distribution of localized states are considered in Refs. [56,4,70]. The fractional approach confirms the results obtained in [56]. Fig. 3 shows the transient current curves in the cases of surface layers depleted of or enriched with traps; exponential distributions of traps over the sample have been taken. In the first case, we observe the appearance of a maximum on the curves; in the second case, we obtain more diffuse characteristics than in the case of a homogeneous distribution of traps in the sample. The analytical results are in accordance with the Monte Carlo simulation of transport by multiple trapping.

The influence of surface layers can be analyzed by considering a three-layer structure [16]. The outer layers are surface layers, and the main bulk of the material is located between them. Here, barrier effects are neglected, which is correct for large voltages applied to the structure. The calculation has been performed for the case of hopping in a material with Gaussian energetic disorder (disorder parameter σ/kT). The transient current in each layer can be found from Eqs. (9) and (14); the total current is calculated as I₁(t) + I₂(t) + I₃(t). In Fig. 4, transient current curves generated by surface (time-of-flight method) and uniform injection of carriers into the three-layer system are presented. These calculations show that the appearance of a hill on transient current curves can be explained by the presence of disordered surface layers. This result is consistent with calculations in the frame of the Arkhipov-Rudenko τ-formalism [16].

Bässler's model of Gaussian disorder

The Scher-Montroll approach [25] and the Arkhipov-Rudenko theory [54] predict a transition to the Gaussian regime when the dispersion parameter α tends to 1. In the framework of multiple trapping and thermally activated hops, the transition to normal statistics is observed when the temperature is increased. However, it should be noted that the model of hopping transport in organic semiconductors predicts a transition to normal transport when the sample thickness is increased or the applied voltage is decreased [17]. In other words, a change in transport statistics can be due to changes in macroscopic, large-scale parameters. For small transient times, i.e., small values of sample thickness and/or high voltages, the normalized transient current curves are almost universal and correspond to the dispersive mode of transport. In samples of greater thickness, or at lower voltages, a plateau on the curves of I(t) is observed [17,16], which indicates the Gaussian mode of transfer. This phenomenon, demonstrating a spatiotemporal scale effect, relates to many low-molecular-weight, molecularly doped, and conjugated polymers and can be described in terms of the theory of quasi-equilibrium transport [45].

Bässler's model assumes that the energy distribution of hopping centers involved in tunnel-activation transfer is described by a Gaussian function. In this case, the waiting time distribution has a truncated power-law form. All moments of the sojourn times are finite, and a normal transport regime has to be observed at large times.
Detailed analysis of the localization time distributions [4] shows that the complementary cumulative function Ψ(t) = Prob(τ > t) in the Bässler model can be described by an inverse power function multiplied by a stretched exponential one. It is also important that the index of the stretched exponential function is not arbitrary: it is twice smaller than the power-law index α₁ = kT/σ. It is worth noting that the waiting time distributions and transient current curves obtained in the frame of the multiple trapping model are in agreement with the results of direct simulation of the hopping mechanism [61]. This means that in not too strong electric fields, the macroscopic manifestations of both mechanisms are indistinguishable, despite their significant physical difference. In the opinion of Hartenstein et al. [61], the cause of this lies in the existence of the transport energy level in the hopping model. The transport level plays the role of the mobility edge [62,63,64,45].

Distributed dispersion parameter

Transient current relaxation in certain disordered semiconductors, for example, porous silicon [57], assumes the form

I(t) ∝ t^{−1+α_i} for t < t_T;  I(t) ∝ t^{−1−α_f} for t > t_T, with α_i ≠ α_f.    (21)

The Scher-Montroll model of charge transport in disordered semiconductors leads to the current dependence (11), i.e., to (21) with α_i = α_f = α. As shown in Ref. [58], the value of α found from the dependence of the carrier flight time in porous silicon on the electric field strength does not coincide with that determined from transient photocurrent curves. The authors explain this fact by assuming additional dispersion of carrier mobility in structurally inhomogeneous porous silicon samples. It seems quite natural to extend this idea by involving dispersion of the parameter α. As will be seen below, this assumption is enough to substantiate dependence (21), at least for a discrete spectrum {α₁, α₂, . . . , α_m}.

Let k_j be the portion of traps that capture carriers for a random time τ distributed according to an asymptotic power law with exponent α_j. The distribution of waiting times averaged over α then follows, where b_j are normalization constants. The relationship between the concentrations of localized and quasi-free carriers now takes the form (22). Combined with the continuity equation, expression (22) gives the drift-diffusion equation (23) for the concentration of delocalized carriers in the case of a discretely distributed dispersion parameter.

To calculate the transient current governed by the latter equation, we neglect diffusion, regard the electric field as uniform, and align the x-axis along the field E. Then equation (23) can be rewritten in one-dimensional form. Its Laplace transform satisfies an equation whose solution (for the case α_j < 1) has a corresponding form, with A standing for the sample area transverse to the electric field. On the assumption that traps are uniformly distributed over the sample, the time transform of the total charge carrier density n(x, t) for x ≫ l can be written down; for the Laplace image of the transient current, we then have expression (25).

In order to see the long-time dependence of the transient current, one should apply the Tauberian theorem, according to which the behavior of the function I(t) for t ≫ c_j⁻¹ is determined by that of function (25) for s ≪ c_j. Here α_min is the minimum value from the set {α₁, α₂, . . . , α_m} and b_min is the corresponding value of the normalization constant. The inverse Laplace transformation leads to the long-time asymptotics of the current. In the case of s/c_j ≫ (l/L)^{1/α_j} for all j, it follows that
the short-time behavior is governed by α_max, the maximum value from the set {α₁, α₂, . . . , α_m}, with b_max the corresponding value of the normalization constant; hence the corresponding power-law asymptotics follow. Thus, if the exponent in the carrier residence time distribution in traps takes on one of the values from an ordered set {α₁, α₂, . . . , α_m} (discrete spectrum), the transient current behavior is determined by the maximum value α_max = α_m in the initial time segment and by the minimum value α_min = α₁ in the terminal one (Fig. 5), in agreement with the results of the aforementioned experiments. In Fig. 5, transient current curves are shown for dispersive transport characterized by two dispersion parameters, α₁ = 0.5 and α₂ = 0.75, with µEτ₀ = 10 nm and E = 10⁶ V/cm. The fraction of traps of the first type is k₁, and k₂ = 1 − k₁.

In Fig. 6, transient current curves are shown for the case of a non-monotonic density of localized states. Details are indicated in the insets. These curves are calculated by the fractional diffusion equation with distributed orders. These calculations confirm the results obtained in Ref. [59]. In particular, the non-monotonic density of localized states leads to the appearance of a plateau in the transient current curves. Some new aspects are taken into account: the energetic width of the defect states and their shift below the band edge.

Transient processes in a diode under dispersive transport conditions: turning on by a current step

The fact that the fractional differential approach allows us to describe both normal and dispersive transport in terms of a unified formalism can be used for the analysis of transients in structures based on disordered semiconductors, by analogy with similar structures based on crystalline semiconductors. We demonstrate this by calculating the transition process in a semiconductor diode under conditions of dispersive transport. In this case, the current I(t) and/or the voltage U(t) play the role of time-dependent transient parameters. The diode performs the transition from the neutral state to the conducting one due to a current step, i.e., the load resistance R_l is substantially greater than the resistance of the diode R_d [65]. On the assumption that low-injection conditions are fulfilled, we shall calculate the process for a semi-infinite planar diode with an n-type base. Recombination and generation in the space charge region are neglected. Holes are injected from the p-region into the n-region with a sharp turn-on of the current. Later, an equilibrium distribution of holes for a given current step I_s is established as a result of competition between the injection and recombination processes in the base. The dispersive transport of non-equilibrium holes is described by the generalized diffusion equation, where p_d(r, t) is the concentration of non-equilibrium holes. In the case of one-dimensional diffusion (planar diode), it can be rewritten in one-dimensional form, where γ_l and γ_d are parameters of recombination of localized and quasi-free carriers, respectively. This equation is written for the concentration of mobile (quasi-free) carriers, which is applicable in the model of multiple trapping or the "backbone - dead ends" percolation model.
Making the Laplace transformation in time, using the evident boundary conditions, and neglecting the time of flight through the space charge region of the diode, we obtain the solution to this equation at the point x = 0. In the case of dispersive transport, carriers are localized in traps for vast time intervals, and one can neglect the recombination of mobile (delocalized) carriers. As a result, we obtain an expression which, after performing the inverse Laplace transformation, is written through Γ(ν; t), the incomplete gamma function [66]. Comparing this relation with Eq. (26), we obtain an equation which can be solved with respect to U(t); it is easy to obtain approximate formulas for the two limiting cases. In the case of normal transport, α = 1, and we arrive at the expression for a diode based on crystalline semiconductors. Fig. 7 shows the voltage kinetics for different values of the dispersion parameter of holes in the n-region when the diode is switched on by a current step.

On the interpretation of the time-of-flight experiment

Often, the universal form (11) of the transient current curves is explained by the exponential density of localized states [2,16]. For such a density of states, multiple trapping and hopping lead to the proportionality α ∝ T. In most experiments, a considerable deviation from this temperature dependence is observed. Sometimes experimenters do not pay proper attention to this fact and continue to use the exponential density of states. It is known that the transient curves are very sensitive to the shape of the energy distribution of traps: the presence of defect states (even in small concentration) may have a significant effect. The exponential representation of the localized state density is an evident idealization of more complicated situations in real disordered semiconductors. Nevertheless, transient current curves with two power-type sections are observed more often than one might expect [16].

Some authors explain the weak dependence of α on T as a result of topological disorder in these semiconductors [67], rather than the energetic one, as takes place in the cases of multiple trapping and hopping. Topological disorder can give the mobility zone and conduction channels a percolation character. Such phenomena are clearly observed in porous semiconductors. In this case, the percolation due to topological disorder can be described in terms of fractional differential kinetics with a temperature-independent dispersion parameter [68]. Note that the existing analytical approaches to dispersive transport (the Scher-Montroll, Arkhipov-Rudenko, Nikitenko, and Tyutnev models) do not take into account the percolation caused by topological disorder.

In Ref. [69], we have shown that the fractional version of the diffusion equation can be derived directly from the universality of the transient current curves and the power-law dependence of the transient time on the sample thickness. This means that the equation is valid in the case of a weak dependence α(T). On the other hand, the comb model of a percolation cluster leads to equations with fractional derivatives whose orders are temperature-independent in the case that the correlation length ξ is temperature-independent [68].
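The proportionality α ∝ T quoted above for an exponential band tail can be checked numerically in a few lines. The sketch below is our illustration with placeholder parameters: trap depths ε ~ Exp(ε₀) and thermally activated release times τ ∝ exp(ε/kT) yield a waiting-time tail Prob(τ > t) ~ t^(−kT/ε₀).

```python
# Sketch: tail exponent of activated release times for an exponential band tail.
import numpy as np

rng = np.random.default_rng(4)
eps0, w0 = 0.05, 1e6               # tail width (eV) and attempt rate (1/s); placeholders

for kT in (0.015, 0.025, 0.035):   # expected alpha = kT / eps0 = 0.3, 0.5, 0.7
    eps = rng.exponential(eps0, 1_000_000)                 # trap depths
    tau = rng.exponential(np.exp(eps / kT) / w0)           # activated release times
    # Tail exponent estimated from the empirical survival function at two points.
    t1, t2 = np.quantile(tau, [0.95, 0.999])
    alpha_hat = (np.log((tau > t1).mean()) - np.log((tau > t2).mean())) \
                / (np.log(t2) - np.log(t1))
    print(kT / eps0, round(alpha_hat, 2))                  # estimate tracks kT/eps0
```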
Since percolation caused by topological disorder and transfer over the mobility zone are independent processes, we can write the equation of multiple trapping taking into account the percolation nature of the zone, Eq. (27) [4]. The first term, with the fractional derivative of order β, appears due to the asymptotic power-law distribution of the residence time of carriers in the "dead branches" of the percolation cluster. The second term reflects trapping into distributed localized states with arbitrary density.

Fig. 8 presents the curves of the transient current I(t) calculated for multiple trapping in states with the Gaussian density ρ(ε) ∝ exp(−ε²/2σ²). The current is found by solving equation (27) using the Monte Carlo method and substituting j(x, t) = eµE n_d(x, t) into the expression for the transient current, I(t) = (1/L) ∫₀ᴸ j(x, t) dx. Multiple trapping by traps with the Gaussian DOS without percolation leads to non-universal curves of the transient current. However, an increase of the parameter γ suppresses the energy disorder, and the curves I(t) take the universal form.

Equation (9), similar to Eq. (27), was obtained for hopping in Ref. [70] by invoking the transport level concept. Fig. 9 presents the comparison of the calculated transient current curves with experimental data and with results calculated by Nikitenko and Tyutnev [55] for 1,1-bis(di-4-tolylaminophenyl)cyclohexane. Noteworthy is the fact that our simulations of 1D hopping yield results perfectly consistent with those obtained from the Nikitenko diffusion equation. This coincidence is attributed to the fact that the one-dimensional diffusion equation with time-dependent coefficients neglects the percolation nature of the trajectories. Involving the fractional derivative of order β allows us to take into account the non-Brownian nature of the trajectories of hopping particles (see inset in Fig. 9, a).

Conclusion

We have presented results obtained in the framework of the fractional differential approach to the description of charge kinetics in disordered semiconductors. The most important property of these processes is their non-Markovian character, in other words, the presence of memory. This means that such kinetics has to be described in terms of integro-differential equations. The self-similarity of these processes leads to fractional kinetic equations [73,74,32,47,77,1,76,78]. Some results of this approach concerning transport in disordered semiconductors, samples with spatial distributions of localized states, multilayer structures, transport in non-homogeneous materials with a distributed dispersion parameter, and the transient process in a diode under dispersive transport conditions have been given and discussed in this work.

Concluding, we should stress that the new approach allows us to provide important specifications in the interpretation of time-of-flight experiments in disordered semiconductors, thanks to the fact that this approach can describe energetic and topological types of disorder jointly. Often, the shape of transient current curves is explained by a specific density of localized states. For example, dispersive transport is interpreted in terms of the exponential density of localized states for inorganic semiconductors (e.g., a-Si:H) and the Gaussian density for organic semiconductors. In inorganic semiconductors, multiple trapping is often realized, which is evidenced by the high mobility of non-equilibrium carriers.
As shown above, topological disorder can suppress the influence of the distributed energy of localized states and lead to "universal" curves of transient current, as in the case of the exponential density of localized states, but differing from the latter by a weak temperature dependence of the dispersion parameter. In organic semiconductors, charge transfer almost always occurs by hopping, and the percolation nature of the conduction band does not play an essential role. Thus, the topological form of the mobility edge has to be taken into account in the procedure of reconstructing the density of localized states from transient current curves. This reconstruction should be performed in close association with the analysis of the temperature dependence of the current curves. The fractional differential approach forms a mathematical basis for such a procedure.

Acknowledgements

Stimulating discussions with Prof. S. Timashev and Prof. V. R. Nikitenko are gratefully acknowledged. The reported study was partially supported by RFBR (research project 12-01-97031) and the Ministry of Education and Science of the Russian Federation.

Appendix. Table of
Suppose your bus broke down and nobody came

The absence of a human driver creates novel challenges for fully automated public transport. Passengers are likely to have different expectations, needs, or even fears when traveling without a driver in potentially dangerous situations. We present the results from two field studies in which we explored incident management in a driverless shuttle bus. We explored participants' behavior and willingness to assist in solving problems in a variety of scenarios where the bus suddenly stops for technical reasons, as well as in a hypothesized situation of harassment. In a follow-up study, we focused on auditory remote assistance and investigated problem solving by the passengers. We found that diffusion of responsibility is an existing barrier when passengers are involved in the resolution of potentially dangerous situations. It can be overcome when incident-relevant instructions are designed to be explicit, brief, timely, distinguishable from regular on-trip information, and to address the auditory and visual sensory channels alike.

expected to play an essential part on the road to this goal [6]. Vehicle automation is expected to help achieve vision zero by reducing, and eventually eliminating, human errors. This means that the vehicle does not merely replace the human driver role; it must surpass it in order to perform better than a human driver would. In public transport, the driver's role is often not limited to getting the vehicle from point A to point B. The driver can also be responsible for ticketing and capacity management, interventions in case of incidents (be they of technical nature or between passengers), or act as a source of information regarding the itinerary and possible connections. Thus, it stands to reason that, beyond the driving performance, a fully automated or driverless means of public transport should be able to address and provide these functions, or it would not be a fully realized replacement for the (formerly) human driver role.

In public transportation today, especially as far as accessibility is concerned, there are many people who still rely on a human driver to receive help when needed [1]. Furthermore, emergencies in driverless vehicles can be perceived to be worse or more severe than in conventional buses [7]. This suggests that substantial additional effort is required in order to elevate automated shuttles to the same level as conventional ones as far as passenger needs in safety-critical situations are concerned. In this paper, we present an investigation into potential incident situations in a driverless shuttle bus, together with an attempt to guide and support passengers in the incident resolution via auditory announcements. We begin with an outline of the specific research questions and goals, followed by an overview of related work in the area. We then describe the technical setups and procedures for both studies that were conducted. After that, we present the quantitative results across both studies as well as the more in-depth qualitative results from both studies separately. We conclude with a reflection in the discussion.

Study goals and research questions

In order to address the aforementioned challenge of autonomous shuttle behavior in incident situations, we set out to define the scope of investigation and corresponding research goals.
As technology matures and automated vehicles are more and more integrated into the infrastructure (and in line with previous research [7,8]), we expect the following cases to occur and/or be relevant in a variety of different contexts:

- The vehicle stops outside of a designated stop, either in the middle of the road or on the roadside, without a discernible cause. This could be due to any number of reasons, ranging from technical defects to missing or incorrect routing information from the vehicle's sensors or infrastructure provider.
- The vehicle stops outside of a designated stop, either in the middle of the road or on the roadside, with a (potentially) discernible cause. This could occur due to congestion, roadwork, road damage, accidents involving other traffic participants, objects on the road that cannot be easily evaded, and similar situations.
- Passengers feel uneasy, threatened, or otherwise unsafe before boarding or while riding an automated bus. Such situations could include dirty interiors, intoxicated individuals being present in the bus, or riding at night alone or with only complete strangers present in the bus.

We defined unplanned vehicle stops with discernible versus not discernible causes separately, for the reason that the latter might, depending on the situation, provide the opportunity for the passengers to interfere and resolve the situation themselves. Due to any number of possible quirks or faults, it is not inconceivable for the bus to stop due to something comparably trivial (e.g., a light object on the road, misdetected as a larger one). At the same time, passengers cannot be expected to act in the way a professional driver would, so it is interesting to investigate both the capabilities and the willingness of passengers to interfere in such situations. To this end, we decided to focus on the following research questions:

RQ1: What are passengers' needs for intervention capabilities and information provision in a highly automated shuttle bus regarding Scenario 1: unplanned stops without discernible cause, Scenario 2: unplanned stops with discernible cause, or Scenario 3: potentially threatening or otherwise unsafe situations?

RQ2: Can passengers be expected to intervene and resolve certain unplanned stop situations and, if so, is it possible to provide assistance via standard communication means inside a bus?

For providing incident-related information and assistance, the auditory channel was chosen because it is the primary channel for communicating information related to unexpected events in public transportation (e.g., planes, buses, trains). Visual information mostly communicates itinerary information (and related deviations), general rules of conduct in emergencies, and information that accompanies the auditory announcements whenever the event was among the expected risks. A lot of work in automotive HCI focuses on the general acceptance of driverless means of transport and operation under normal conditions. The volume of user-centered research on incident management in automated public transport is still rather low. Therefore, we opted to start on a level that focuses on user needs in specific incident situations (RQ1), together with an investigation of potential solution strategies (RQ2).
We expect these results to inform research and development related to unexpected situations or nonstandard behavior in automated public transport, both in relation to potentially occurring situations and the impact these have on the passengers, as well as to what needs to be implemented in the vehicle or provided to the passenger in order to resolve the situation. In addition, we have documented our study setup, which enables wizarding of auditory announcements and in-vehicle passenger displays without requiring access to the bus' internal systems.

Related work

Existing work and literature specific to incident management in automated public transport is still rather limited, with most work focusing on incident management in cars in relation to take-over requests (TORs), acceptance of automated buses in general, or communication between automated buses and passengers or the driving environment in mostly non-emergency conditions (e.g., [9][10][11][12][13][14][15]).

Faltaous et al. [16] investigated how to communicate to the driver of an SAE level 2-3 vehicle when the system has failed or an unexpected situation for the vehicle has occurred. Their results are five design guidelines, which are targeted toward driver space design for level 2-3 vehicles and are thus of limited suitability for fully automated public transportation scenarios. Verma et al. [17] presented a co-design study with the goal of communicating the intent or operational state of an automated shuttle to other road users. Their work focused primarily on the co-design approach and aimed at gathering overall user requirements for successful interactions. As a result, the design implications are similarly general, with no specific focus on incident management. Wintersberger et al. [18] investigated the acceptance of passengers in an autonomous bus compared with a taxi. The results showed that only the slow speed of the bus reduced its perceived usefulness.

In public transport, constant information presentation is required [19]. This includes pre-trip and on-board/wayside information targeted at different user groups, such as age groups (children vs. older passengers) or types of travel (e.g., long-distance commuters vs. tourists) [9]. Millonig and Fröhlich [20] identified four passenger needs in automated shuttles: availability, affordability, accessibility, and acceptability. They state that it is difficult to transfer findings from the automotive domain to automated buses and stress the need for transparency and efficiency when the bus is communicating with its passengers. Brown [21] and Brown and Laurier [22] raised the issue of automated vehicles having to respond to a number of social challenges, where "correct" behavior from a legal and technical standpoint can be interpreted incorrectly from a social point of view (e.g., in relation to gaps between vehicles). Eden et al. [23] further stressed that there are not only technical but also social challenges when designing for level 4 automated shuttles.

Passengers' safety perceptions are known to have an impact on public transport ridership, i.e., on people's willingness to use public transportation. Yet, the effects that feelings of personal safety have on transit ridership are not widely researched (Delbosc and Currie [8]). In a study within the CityMobil2 project, Salonen et al. [7] found that 54% of passengers in a small driverless shuttle bus found the emergency management (fire, vehicle failure, etc.) of the bus either worse or even much worse than that of a conventional bus.
Mahmoud and Currie [24] identified measures to address personal safety issues while travelling on public transport vehicles, with 55% of people (n = 239) ranking roaming security guards on public transport as their preferred measure. Another 16% would want to refuse entry to intoxicated persons, and 12% would feel safer with security cameras on board. Alarms or panic buttons to alert guards were ranked first by 10% of people, while another 4% would feel safer if the lighting on public transport were increased. Stradling et al. [25] also investigated people's reasons for not taking public transportation, and again drunken passengers at a late hour discouraged people the most from getting onto the next bus (45%, n = 1,012).

While there is some valuable work available related to the management of incidents and unexpected situations in automated vehicles, there is still a gap in identifying specific relevant situations and stakeholders' requirements within these situations, as well as strategies for effective incident management, especially in driverless public transport. We contribute to closing this gap by presenting the results from an in-depth qualitative investigation of incident management in an automated shuttle bus.

Study setup

We conducted two studies to address our two research questions. Study 1 was intended to address RQ1 and gather passengers' requirements for the three defined scenarios. Study 2 was a follow-up study with a focus on scenario 2 (unplanned stop with discernible cause), addressing RQ2. To this end, we implemented standardized auditory messages in the bus to assist passengers. Both studies were conducted in the field: study 1 on a closed test track and study 2 in a real road environment (see Fig. 3). For both studies, ethical approval was obtained and data privacy measures were put in place.

Study 1 lasted for 3 days and took place at a closed test track for driver training. The shuttle bus was an EasyMile EZ10 first-generation model. Study 2 was realized on a road in a small village in a real road environment and lasted for 1 day. The shuttle bus was an EasyMile EZ10 second-generation model. Both models target operation at SAE level 4. For safety and legal reasons, however, a trained operator had to be present during all rides. The operator interacted with a control unit, which was inactive unless input was required.

The study tracks at both locations were circular and had approximate lengths of 800 (study 1) and 3000 (study 2) meters. The track of study 1 included a stop at a traffic light, a roundabout section, and two bus stops (see Fig. 1). The track was even, and the average lap time was 5 min. The bus drove at a speed between 8 and 12 km/h, faster on the straight segments and slower during turns and similar maneuvers. The only other road users occasionally present at the track during the study were student drivers and their attendants. The track of study 2 was a regular road with six bus stops, two of them serving as turning points. The track had ascents and descents, and the average lap time was 24 min. The bus drove at a speed of 5 to 15 km/h, depending on the current traffic situation and gradient. Different types of road users (e.g., cars, bikes, tractors, buses, pedestrians) were present during the study. The buses used in both studies provided seating for six passengers.

Fig. 1: The two tracks used for study 1 (left) and study 2 (right).
Both studies required two wizards: one in the bus to control announcements (wizard B) and a second one outside the bus to simulate an intercom control center (wizard I). Wizard B had a tablet for controlling the auditory announcements in the bus and was seated left of the entrance, on the seat furthest to the back. Passengers were not informed about the role of the wizard but were told that this person was a researcher logging technical data. Wizard I was equipped with a mobile phone to simulate the chatbot communication in case the passengers used the intercom for support. The operator stood in the bus at all times in order to be able to reach all emergency controls in case of an unexpected incident. Passengers were told not to communicate with the wizard and the operator but to behave as if they were not present. In study 2, another seat was taken by an observer, who was there to observe the incident situation but was introduced as a fellow passenger (see Fig. 2). In both studies, an intercom equipped with a touch display was installed in the bus next to the seat on the right, furthest to the back. The intercom interface was used to call the control station in case of an emergency. When the participants pressed either an SOS or an info button on the intercom, a connection to an intercom chatbot was established. This chatbot was operated remotely via a smartphone app by wizard I, who was outside the bus. The bus was equipped with two GoPro cameras at the back and in the front to record all rides. In order to simulate an obstacle on the road (required for scenario 2), a luggage trolley on wheels was placed in the bus's path; it could be removed easily by the passengers while being big enough to be visible from inside the bus.

Technical setup for audio announcements

The bus announcements in both studies were stored on the intercom as individual audio files. Via a simple Web interface on a tablet, which sent HTTPS requests to the intercom, wizard B could play back each of these files individually with a simple tap.
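The paper does not disclose the interface's implementation; as an illustration of the described mechanism (a tablet Web UI issuing HTTPS requests to the intercom), a minimal sketch could look as follows. The host address, endpoint path, and file identifiers are hypothetical.

```python
# Hypothetical sketch of wizard B's announcement trigger; intercom address,
# endpoint, and file IDs are assumptions, not taken from the paper.
import requests

INTERCOM_BASE = "https://10.0.0.42"  # assumed address of the in-bus intercom

def play_announcement(file_id: str) -> None:
    """Request playback of one audio file stored on the intercom."""
    resp = requests.get(
        f"{INTERCOM_BASE}/play",
        params={"file": file_id},
        verify=False,  # the intercom likely uses a self-signed certificate
        timeout=5,
    )
    resp.raise_for_status()

# One tap in the tablet UI maps to one stored file, e.g.:
# play_announcement("10_obstacle_detected")
```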
The announcements in study 1 contained regular on-trip information (bus stops, information about imminent departure, etc.). Information relating to incidents was limited to a general notification and guidance on how to contact the control center (see items 7, 8, and 14 in Table 1). Instructions on how to resolve the incident were entirely provided by wizard I, acting as a chatbot based on a pre-defined speech protocol. In study 2, the aim was to resolve the situation by using only the pre-recorded bus announcements. Thus, an extended list of items for incident-related announcements was used (see the full list in Table 1). Wizard I, who simulated the control center, was still present, but only as a fallback option.

Table 1 (excerpt) Incident-related in-bus announcements:
7 "An interruption of operations has occurred."
8 "The bus is currently unable to resume driving."
9 "We ask for your understanding."
10 "An obstacle has been detected, which appears to be blocking the bus's path."
11 "If possible, please remove the obstacle."
12 "Please note that this is an unplanned stop."
13 "Be careful when leaving the vehicle."
14 "Should you require additional help, you can contact the control center via the SOS-button on the intercom."
15 "The bus will attempt to resume its journey once the obstacle has been removed and the bus doors are closed."
16 "Please close the doors only once all passengers are aboard the bus."
17 "The issue has been resolved. The bus will resume operations shortly."
18 "Doors are closing."

For study 1, we used a machine voice to record the bus announcements. Due to feedback from the participants that these announcements were perceived as unpleasant and at times difficult to understand, we used a different solution for study 2: all speech items (see Table 1) were recorded by a researcher, taking care to use proper pronunciation and intonation as typically used in public announcements. The samples were then processed with the audio software Logic Pro [26]. In order to achieve a better recording quality, the very low (< 20 Hz) and the very high (> 20 kHz) frequencies of the files were cut and voice clarity was improved. Finally, the files were compressed to −20 RMS and −8.0 peak to further improve the voice volume. The possible number of files was limited, as the space on the intercom's internal storage was very small (about 18 MB).
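The authors did this post-processing in Logic Pro; as a rough open-source approximation of the described chain (band-limiting to 20 Hz-20 kHz, raising the average level to about −20 dBFS, and capping peaks near −8 dBFS), one could sketch the following. This is our reconstruction under stated assumptions, not the original pipeline.

```python
# Approximate re-creation of the described post-processing chain with pydub.
# Not the authors' Logic Pro settings; a sketch only.
from pydub import AudioSegment

def postprocess(path_in: str, path_out: str) -> None:
    seg = AudioSegment.from_file(path_in)
    seg = seg.high_pass_filter(20)        # cut very low frequencies (< 20 Hz)
    seg = seg.low_pass_filter(20_000)     # cut very high frequencies (> 20 kHz)
    seg = seg.apply_gain(-20.0 - seg.dBFS)   # bring the average level to ~ -20 dBFS
    overshoot = seg.max_dBFS + 8.0           # amount by which peaks exceed -8 dBFS
    if overshoot > 0:
        seg = seg.apply_gain(-overshoot)     # crude peak cap (no true limiter)
    # Compressed export keeps files small for the intercom's ~18 MB storage.
    seg.export(path_out, format="mp3", bitrate="64k")

postprocess("announcement_10.wav", "announcement_10.mp3")
```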
Specifics study 1

Study 1 aimed at covering all three kinds of scenarios defined in the study goal section in order to identify passengers' requirements. As it was situated at a test track, there were fewer restrictions regarding traffic regulations and contextual constraints than in a real road environment.

Scenario 1: interruption of operations. Scenario 1 was set to occur during the second round of a three-round ride on the test track. Bus announcements activated by wizard B informed the passengers about an interruption of operations due to unknown reasons. Then, after approximately 2 more minutes without any further information, participants were informed that the bus would be able to continue its journey. Scenario 1 did not require the participants to take any action, as the situation was resolved automatically after some time had passed. Scenario 1 focused on the lack of a bus driver as a person to speak to in the case of an irregular stop, and on what amount of information is sufficient to feel safe in automated public transport in the case of a service interruption.

Scenario 2: obstacle. Scenario 2 was a variation on scenario 1. Once again, the bus made an unplanned stop, but this time due to an obstacle obstructing the bus's path. Initially, this was not revealed to the participants. Instead, they were first informed that the bus was unable to proceed and that they should call the control station with the help of the intercom in the bus. These instructions were also activated by wizard B via bus announcements. Throughout the scripted conversation with wizard I via the intercom, participants were then offered two choices: either to remove the obstacle themselves or to call an emergency vehicle to remove the obstacle for them. The scenario concluded once the obstacle had been successfully removed and the bus had completed its ride, or immediately, in case the participants chose to call an emergency vehicle instead. Scenario 2 focused on the participants' willingness to take over responsibility and act, as well as on the usefulness of the chatbot conversation for resolving the situation.

Scenario 3: threats. Scenario 3 comprised three hypothetical threat scenarios, which were discussed with the participants while they rode in the automated shuttle bus. The focus of scenario 3 was set on vandalism, harassment, and being late to catch the shuttle bus in time, and on the question of whether or not the bus should stop in a situation like this. Due to their nature, these situations were not simulated like scenarios 1 and 2 had been. The three scenarios were read aloud to the participants, after which they were asked to give their opinion on each of these situations separately.

Specifics study 2

Study 2 focused entirely on scenario 2, since we wanted to further explore passengers' problem-solving potential in a more focused setup. After study 1, we found this scenario to be the most relevant and interesting one with regard to finding out about the needs of passengers in automated public transport in the case of an unexpected incident. As we had already experienced the phenomenon of diffusion of responsibility during the initial study, we wanted to pursue it further. Also, some of the feedback collected during scenario 2 in study 1 was used to implement an improved auditory setup for study 2. For example, the information provided by the intercom chatbot had been experienced as over-lengthy and complex. Therefore, the auditory information was reduced to concise in-bus announcements, which were played automatically after the incident occurred, without any further passenger interaction necessary. The intercom chatbot was only the secondary means of interaction and was set up as a fallback in case the participants needed more guidance. Hence, it was no longer necessary to rely on the intercom chatbot to resolve the situation; instead, the in-bus announcements were meant to provide sufficient information for the passengers to resolve the situation efficiently and in a satisfying manner (see Table 1 for an overview of the in-bus announcements).

Study procedure

Participants were recruited via various channels (e.g., mailing lists, bulletins at the municipal office, local associations). Exclusion criteria were unaccompanied children under the age of 14, wheelchair users, and people with baby carriages, due to legal reasons. None of the subjects participated in both studies. Studies 1 and 2 proceeded very similarly. Participants were welcomed at an appointed meeting place and time. They were introduced to the study procedure and signed an informed consent form. Legal guardians signed for their underage children. All participants filled in a pre-ride questionnaire (see Table 2). The two children taking part in the real road study were also provided with age-adjusted versions of all questionnaires that were handed out during the study. In study 1, one seat was occupied by wizard B. In study 2, two seats were occupied by wizard B and the observer. Thus, participants were split into groups of a maximum of five (study 1) and four (study 2) people. Each group was led to the bus stop where the bus initially departed from. Before getting into the bus, the participants received behavior instructions (e.g., to stay seated at all times during the ride) and were advised not to interact with the operator and the researcher (wizard B). During the second study on the real road, they were allowed to talk with the observer, who was introduced as a fellow passenger. All participants had the task of riding until their designated bus stop. The setup for scenario 2 varied a bit between the two studies: while on the test track (study 1) the obstacle occurred immediately when the bus was about to leave the bus stop, on the real road (study 2) the obstacle occurred at the bus stop that was the turning point on the route, after a 12-min ride. During the ride as well as during the incident, wizard B played the bus announcements in accordance with what was actually happening on the route (e.g., "Next stop: [stop name]," or "An obstacle has been detected, which appears to be blocking the bus's path.").
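To make the wizard's role concrete, the scenario-2 portion of the script can be thought of as an ordered mapping from route events to Table 1 items. The event labels and the exact ordering below are our own illustration; only the item numbers and their texts come from Table 1.

```python
# Illustrative encoding of wizard B's scenario-2 script; event labels are
# hypothetical, the item numbers refer to the announcements in Table 1.
SCENARIO_2_SCRIPT = [
    ("unplanned_stop",    10),  # "An obstacle has been detected, ..."
    ("stop_acknowledged", 11),  # "If possible, please remove the obstacle."
    ("doors_open",        13),  # "Be careful when leaving the vehicle."
    ("obstacle_removed",  17),  # "The issue has been resolved. ..."
    ("ready_to_depart",   18),  # "Doors are closing."
]

def on_event(event: str) -> None:
    """Play the announcement scripted for a given route event, if any."""
    for name, item in SCENARIO_2_SCRIPT:
        if name == event:
            play_announcement(f"item_{item:02d}")  # see the earlier trigger sketch
            break
```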
The obstacle for scenario 2 was placed in the bus's path: after the first bus stop in study 1 and at the turning-point bus stop (after a 12-min ride) in study 2. After the incident was resolved one way (participants removed the obstacle) or the other (the control station sent for an emergency vehicle to remove the obstacle), the bus continued its ride back to the first bus stop. Participants then got out of the bus and filled in the post-ride questionnaire (see Table 2). The study design varied slightly here again: participants in study 1 only had to fill in the post-ride questionnaire, while participants in study 2 had to fill in two additional questionnaires. One addressed the perceived quality of the bus announcements and the other the perceived safety during the ride (see Table 2).

[Table 2 (excerpt): perceived quality of voice experience (2 items); post-ride conjectures about the AV, satisfaction, safety, trust (4 items)]

After filling in the questionnaires, the incidents were discussed in the group with respect to the participants' experiences. Suggestions for improvement with regard to the announcements and emergency measures were collected. The final discussion was recorded with a voice recorder. One run with one group of participants took about 1 to 2 h in total, depending on the number of incidents the participants experienced and how quickly they resolved scenario 2. For participants in study 1, scenario 1 always happened prior to scenario 2, because the procedure was similar to the one in scenario 2 but with one major difference: participants were not held responsible for solving the problem but were just experiencing it. Scenario 3 was always set up to conclude the study, as it was an extension of the already ongoing discussion started after scenario 1 and/or 2. It was, in fact, an in situ discussion group, with participants discussing threat scenarios while riding in an autonomous shuttle bus.

Participants

Overall, 24 participants went on a ride with the automated shuttle bus and experienced an incident in study 1 (13 participants in three groups) and study 2 (11 participants in three groups). In study 1, four participants experienced the obstacle scenario (2) only, while four other participants only experienced the interruption-of-operations incident (scenario 1) and discussed in-vehicle security based on the three threat scenarios (scenario 3). Five participants were involved in all incidents during the test track study (Fig. 3). Of the 24 participants, 16 were female (66.7%) and eight male (33.3%). Five participants belonged to the age group of 18-to-25-year-olds (20.9%), 13 to the age group of 26-to-50-year-olds (54.1%), and four participants were over 50 years old (16.7%). Also, two children, aged 8 and 10 years, took part in study 2. Over 90% of the participants (n = 22) had no prior experience with self-driving vehicles before taking part in the study; one had ridden in a Tesla and one in a different kind of automated shuttle bus. One-fifth of the participants reported using public transport on a daily basis (20.8%), another fourth several times a week (25.0%). In total, 16.7% use public transport at least several times a month, one-fourth once a month or less often (25%), and 8.3% reported never using public transport at all. Two participants indicated that they were near-sighted, and one participant reported having a mild form of cataract.
Results and general reflection

The pre-ride and the post-ride questionnaires were completed by all participants in both studies. These were intended to provide general insight into the potential effect of incident situations with automated buses on passengers' perceived safety, reliability, and convenience. Thus, these results are reported for the whole sample. Where relevant, we inserted quotes from the discussions or video observations (primarily from study 1) for the purpose of illustration. The qualitative results are reported afterwards and separately for each study. In general, given the small participant numbers, the results from the quantitative scales are primarily to be considered a basis to be supplemented by the qualitative results. Before and after the ride, participants judged the bus on a 4-point scale (fully agree, rather agree, rather not agree, not agree at all) with regard to the characteristics reliability, safety, and convenience. There was also a "don't know" option available. Overall, participants seemed better able to form an opinion on the shuttle bus after the ride, which can be seen in the decline in "don't know" answers. Regarding safety, before the ride, 40% of participants were not sure whether being in an autonomous shuttle bus would be safe. After the ride, nearly 55% of them fully agreed or rather agreed that the ride was safe, but over 40% rather did not agree or did not agree at all. As one participant mentioned: "I'm rather disillusioned about trusting the bus to be safe after today's ride, because without an operator nothing works." Another one stated: "The audio system failed, so one is at the mercy of the bus. I don't find this trustworthy. I thought my trust towards the bus would be higher than it was after this ride." Another participant questioned the bus's safety with increasing speed: "If the bus has a speed of 10 km/h it's fine but if it's speeding up to 50 km/h, I would feel safer with a safety belt." (study 1). Since all participants had experienced incidents during the ride, a change in perception toward more concrete opinions (both positive and negative) was to be expected. Interestingly, however, these changes in the perception of the bus's safety turned out to be non-significant in a Wilcoxon signed-rank test (z = −1.490, p = 0.136). This suggests that the participants might not actually have considered all the situations they experienced as safety critical, but rather associated them with the reliability of the bus. With regard to reliability, the "don't know" ratings before and after the ride were rather similar to the ones for safety. Before the ride, over 45% of the participants were not sure whether the bus would be reliable or not. After the ride, two-thirds of them fully agreed or rather agreed that the bus was reliable, while one-third considered the bus to be rather not reliable, an increase of slightly more than 20% in this category compared with before the ride. This supports the assumption that people assessed the incidents not only as safety relevant but also as critical with regard to the reliability of the bus. These results were significant (z = −2.250, p = 0.024).
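For readers who want to reproduce this kind of pre/post comparison, a minimal sketch is shown below. The rating vectors are invented placeholders (the paper's raw ratings are not published here); only the test procedure mirrors the one reported.

```python
# Minimal sketch of the reported pre/post comparison; the ratings below are
# invented placeholders, coded 1 = "not agree at all" ... 4 = "fully agree".
from scipy.stats import wilcoxon

pre_reliability = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 3, 2]
post_reliability = [3, 3, 2, 2, 4, 3, 2, 3, 2, 3, 3, 3]

stat, p = wilcoxon(pre_reliability, post_reliability)  # paired, non-parametric
print(f"W = {stat}, p = {p:.3f}")
```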
Especially when the actions of the bus were not transparent to the participants, they felt left alone: "We stood on the street for 3 minutes, without knowing why. We also pushed the button but the door did not open." (study 2). Another participant stated: "You think: 'Why is the bus stopping now?' but you get no answer. That is a weird feeling." (study 2). It was a bit different with the estimated convenience: over 40% of the participants were initially rather sure that a ride in an autonomous shuttle bus would be a rather convenient experience, but afterwards nearly as many participants rather did not agree with that. A fifth found it to be fully convenient, though, and no one found it to be not convenient at all. These results were also significant (z = −2.390, p = 0.017). One participant criticized: "I sat against the driving direction, which I found unpleasant. I would like to keep an eye on the surroundings." (study 2). Another participant underlined the comfort of having a bus driver present in the bus whom passengers can talk with: "Frankly speaking, it's just more convenient to have a bus driver in the bus, who tells you what is happening right now." (study 1). Overall, the participants appear to consider the issues and failures presented to them during the study as potentially inconvenient, but less as indicators of the bus's performance in safety-critical (i.e., with potential for bodily harm) situations (Fig. 4).

[Fig. 4 Study 2 results regarding perceived reliability, safety, and convenience]

After getting off the bus, participants were asked how much they liked the ride on a 4-point scale ranging from very much to not at all. In total, 30% liked the ride very much and 70% liked it. Participants were also asked how safe they felt during the ride on a 4-point scale (very safe, safe, less safe, not safe at all). Most participants felt either safe (58%) or very safe (30%); the remaining 12% felt less safe, but no one felt not safe at all. The adult participants (n = 22) were also asked whether they would let their (potential) children take a ride on the autonomous shuttle bus. All participants who were parents agreed that they would let their children take a ride on the self-driving shuttle bus (36.4%). Of the participants without children, one-half agreed that they would allow their children to get on board of the autonomous shuttle bus (31.8%), while the other half would not allow that (31.8%).

Qualitative results

In the following, the qualitative findings from the discussions, the video observations, and the additional scales used are presented. Once again, we first report the results from study 1 and afterwards, separately, the results from study 2. Since study 1 focused on requirements in three case scenarios (RQ1), the findings consist of a number of participant requirements for each case. In addition, potential supportive and hindering conditions as well as solution strategies are presented. For study 2 (primarily addressing RQ2), we report applied solution strategies and supportive and hindering conditions, as well as participants' perceived safety and perceived voice quality [27]. The latter was added to further supplement RQ2 and to detect potential influences caused by the study setup (specifically the quality of the recorded announcements).

Scenario 1: interruption of operations

The bus suddenly stopped after it left the bus station, and the audio information that an interruption of operations had occurred was given. After a few seconds, the information was given that the ride would continue. In general, the participants experienced the situation as non-hazardous. The participants stated the following requirements:

- Audio information: The provided audio information was assessed as helpful and sufficient.
Repetition of the information was considered necessary if the interruption lasts longer than 5 min.
- Textual information: Textual information should additionally be provided on displays, to inform the passengers about the duration of the interruption and about whether they can take any action or have any possibility to get more information about the situation, e.g., by using the intercom in the bus.
- Interaction via intercom: In case of longer interruptions, the participants expected to be able to interact with a real person via an intercom.
- Getting off the bus: A further requirement was a function allowing passengers to get off the bus in case of an interruption of operations.
- Video surveillance (CCTV): Although video surveillance as an additional feature in the bus was discussed controversially, it was considered important in more threatening emergency cases. ("If it is necessary it would be good that the cameras are active, but one is observed all the time.")
- Interaction with the environment: The interaction of the bus with the environment, to inform other road users that there is an interruption of operations, was also an important aspect for the participants.

Scenario 2: obstacle on the road

As outlined, the second incident was a situation in which an obstacle in front of the bus hindered it from continuing the ride. The participants were informed that there was an interruption of operations and that the control center should be contacted. Based on the video recordings of the rides of the three test groups, we coded which strategies the participants applied to manage the incident. In the final discussion, the participants were asked how they experienced the incident, whether the acoustic information was helpful for managing the incident, and which improvements they would suggest. To structure the findings, the applied strategies and hindering conditions as well as the suggestions for improvement are presented below. The applied strategy can be characterized as active and cooperative. Both test groups reacted immediately after the audio information that the control center should be contacted was given. The coordination of action was experienced as easy: the person next to the intercom contacted the control center and a volunteer got off the bus and removed the obstacle. The following conditions were regarded as hindering:

- Explanation of the functions of buttons: There was brief confusion about whether the emergency button or the info button should be pressed and what the possible consequences might be. Here, clear information is requested: does pressing the emergency button alert the police or ambulance, or just the control center? ("I pressed the info button, but it was not clear if the info or the emergency button should be pressed. It was not an emergency, thus, I pressed the info button.")
- Chatbot vs. human: The conversation with the chatbot was assessed as too long; especially in life-threatening situations, immediate help is expected, preferably by interacting with a human instead of a chatbot. ("The communication took quite long. It should be possible to get help quicker in case of emergency.")
- Diffusion of responsibility: Although both groups resolved the situation through active involvement, the phenomenon of diffusion of responsibility was discussed, i.e., that a person is less likely to take responsibility and act accordingly when others are present.
The setting on the test track was experienced as familial and non-hazardous, but the participants doubted that it would be that easy to cooperate with strangers in such a situation. Thus, a suggestion was to nominate a person in the bus who should take responsibility in such a situation, e.g., the person next to the intercom.
- Shift in responsibility: The participants also provided an explanation for why passengers of autonomous shuttles might react with reserve, as this constitutes a fundamental shift in the common hierarchy of responsibility. The participants were asked to take over responsibility for the undisturbed operation of the shuttle and to act, which is currently very unusual when using public transport. ("If nobody in the bus has the feeling to be competent, then more passengers won't feel addressed.")

The following suggestions for improvement were given:

- Audio information: The requirements referring to the audio information and the conversation with the chatbot were that the volume should be higher and the provided information should be more precise, e.g., "Please press the emergency button on the intercom to contact the control center." instead of "Please contact the control center via the intercom."
- Textual information: In general, the willingness to follow the instructions of the control center was high, but, as already stated for scenario 1, the combination of audio information and textual information was suggested as an improvement.

Scenario 3: potential threats

In order to collect requirements for exceptional conditions, three different situations were presented to and discussed with the participants (harassment, dirt, and catching the bus). The findings of the discussions show that, in general, participants had the same requirements and expectations as they would have in current public transport, especially the underground, where the driver is also not present. With respect to harassment, the following information was given to the participants: "You are sitting alone in the bus; another person enters the bus, sits down next to you and comes very close. You feel uncomfortable." Participants gave feedback on the following aspects:

- CCTV: In case of harassment, the use of CCTV is considered helpful, as it strengthens the individual's feeling of safety. Some of the participants attributed a deterrent effect to CCTV and expected that CCTV would immediately support them if confronted with harassment.
- Mobile phones or emergency buttons: These were considered helpful devices or functions in an exceptional situation, as their handling is well known.
- Getting off the bus: The wish for a possibility to get off the bus was a common request in all test groups.

With respect to the use case dirt, participants were told: "You want to enter the bus, but it is very dirty. A sticky fluid is on the floor and it has a strong smell." Participants gave the following responses:

- Interaction via intercom: The participants agreed on the requirement that, in case of a dirty bus, it should be possible to contact the operating company already at the bus station, e.g., via an intercom, and notify them that the bus is dirty.
- Features of an autonomous bus station: The requirements are quite similar to those for common public transport stations, e.g., weather protection, info displays, and a rubbish bin; in addition, an intercom is suggested, as well as plug sockets, since infrastructure for charging mobile phones is considered increasingly important.
For the use case catch the bus, participants were told: "You are in the bus, which is already leaving the station, when you see a person running after the bus trying to catch it." Participants responded with the following:

Study 2

In study 2, in a real traffic environment, only the incident management for the second scenario (an obstacle hinders the bus from driving on) was tested. The provided audio information was adapted based on the findings from the test track. Based on the video recordings of the rides of the three test groups, we coded which strategies the participants applied to manage the incident. In the final discussion, the participants were asked how they experienced the incident, whether the acoustic information was helpful for managing the incident, and which improvements they would suggest. To structure the findings, the applied strategies, the supportive and the hindering conditions, as well as the suggestions for improvement are presented below. All three test groups reacted to the acoustic information, but the strategies applied to manage the incident as well as the range of actions taken differed. Two of the three test groups applied a strategy that can be characterized as active and cooperative: the participants agreed that they were confronted with a test task and coordinated their further actions. The strategy applied by the third test group can be characterized as passive awaiting: these participants were uncertain about the situation and did not jointly define the incident as a test situation. The following conditions were regarded as supportive:

- Relationship between participants: Some of the participants knew each other, so cooperative action was supported. As stated before, initiating cooperation with strangers was assessed as difficult.
- Audio information: The provided audio information not only triggered the participants to define the incident situation as a test situation, but was also assessed as credible and helpful. The audio information was experienced as clear and was accepted as guidance for action.
- Attention level and readiness to take over responsibility: The participants of two groups were attentive to the surroundings and the bus during the ride (e.g., they looked out of the window, observed overtaking cars, and searched for information on the display). These participants felt responsible for resolving the incident. The participants of the third group, on the contrary, paid hardly any attention to the surroundings, talked busily with each other, did not take any action during the incident, and did not feel responsible.

The following conditions were regarded as hindering:

- Ambiguity of the audio information: The instruction for action should be clearer, e.g., "Press the green button to open the doors," instead of "Open the doors." Some participants stated that the wording was irritating, as they associated big trouble with the term "incident" and at first felt rather discouraged from taking any action to resolve the incident.
- Seriousness of the audio information unclear: Related to the aspect of ambiguity is the fact that the provided information was not clearly discernible as relevant and serious information for all passengers. A kind of sound signature or marker for important audio information was recommended, e.g., "Attention please" or a beep.
- Additional textual information: Besides the audio information, there was no further information in the bus on how to manage the incident.
The repetition of the audio information was not considered helpful, but rather as even more overstraining.
- Technical failure: The connection to the control center was disturbed for one of the test groups, which hindered the participants in developing an alternative strategy to manage the incident.
- Missing agreement among participants: Group dynamics are also an important aspect when analyzing incident management. In one of the test groups, the participant who tried to take action to manage the situation was too uncertain to act without the affirmation of the other persons.
- Shift of responsibility: As already stated, the request to intervene was irritating for some of the participants; they did not expect to be confronted with such a task and considered this the duty of the operator. Furthermore, some of the participants felt unable to cope with the situation, as they had the feeling of not having the competences to deal with a problem in a high-tech bus ("What if everybody presses another button, this is certainly adverse." "I thought that I am the passenger and I won't get out of the bus.").
- Rigid compliance with instructions: The participants were instructed before the test not to press the emergency buttons intended for the operator. Thus, the audio information to contact the control center was not accepted as a task by all participants, as they wanted to comply with the rules and not touch any of the buttons in the bus.

The following suggestions for improvement were given:

- Nomination of a responsible person: As already stated in the discussion on the test track, the discussion in the real-environment test confirmed that a crucial aspect is the decision of who should take the first action. In the view of the participants, the diffusion of responsibility could be countered by nominating a specific person, e.g., "the person on the left by the door."
- Additional features in the bus: Requested were (1) a central, more visible position of the intercom, so that each passenger can see it easily; (2) the integration of a function so that, in emergency cases, the passengers can directly contact the police or the ambulance; (3) additional cameras outside the vehicle, so that the control center can also see what is going on around the bus; (4) an additional touchscreen for further information retrieval; (5) an additional light signal on the bus to attract the attention of persons who are distracted or wearing headphones.

Perceived safety questionnaire

Six participants out of nine agreed or fully agreed with the statement that they felt relaxed during the ride, on a 6-point scale ranging from fully agree to not agree at all. Two participants felt rather relaxed and one person was not relaxed at all. Four participants also fully agreed that they felt safe during the bus ride, while two agreed and three rather agreed with that statement. Only one participant did not feel safe while in the bus. Also, three out of nine participants rather agreed with not feeling in control during the ride. One person fully agreed, one rather did not agree, and two did not agree or did not agree at all with that statement. To summarize, half of the participants felt rather in control, while the other half rather did not. Seven participants did not feel nervous (at all) during the ride. One participant felt totally nervous and another at least nervous. Eight participants did not agree with the statement that they wanted to get out of the bus during the ride. One rather agreed, though.
Also, one participant out of nine would not want to take a ride in an autonomous bus again, while seven definitely would and one rather would. When asked whether the pre-recorded bus announcements assured them of having received all necessary information, although no bus driver was present during the ride, five out of ten participants fully agreed, three agreed, and two rather agreed. No participant did not agree (at all). When asked whether the pre-recorded bus announcements assured them of knowing what to do in the case of an incident with no bus driver present, four participants fully agreed, three agreed, two rather agreed, and one rather did not agree.

Perceived quality of voice experience questionnaire

Participants were also asked, via two items on a 6-point scale, about the quality of the pre-recorded in-bus announcements. Item 1 asked whether the voice was clearly audible; item 2 whether the voice sounded as if a real person was talking to them. Eight out of nine participants fully agreed with item 1, with only one participant not agreeing at all. Thus, perceived sound quality was rated very good overall. However, for item 2, three participants out of nine did not agree at all and three did not agree. Only two rather agreed and one fully agreed. These results suggest that the voice interaction alone might not sufficiently compensate for the absence of a human driver.

Discussion

In the following, we discuss the findings with respect to the identified passenger needs and expectations regarding information provision and intervention possibilities in the defined scenarios (RQ1), as well as passengers' experiences with resolving incidents in specific situations (RQ2).

Diffusion of responsibility and precision of information

While scenario 2 was resolved successfully in every case but one, it should not be expected that this rate is representative of all such situations in a real deployment. Diffusion of responsibility and a resulting unresponsiveness to related messages are potential issues requiring appropriate design solutions. Responsibility needs first to be assigned and then also accepted by the individual(s) potentially feeling responsible. For this to work, instructions can be directed better the shorter and more concise they are. This is also something that worked better in study 2, where only very few and concise instructions were communicated. Once one person felt responsible and started to move, others would start to move as well. Thus, as long as one individual in the bus can be reached, diffusion of responsibility can be countered. It is likely that an in-time solution alone, which apportions responsibility once an incident has occurred, is not sufficient. Adequate information regarding responsibility and expected behavior in case of an incident should also be provided before an incident can occur, not dissimilar to emergency instructions in an airplane. After all, a train passenger can expect never to be asked to do an engine or rail-track inspection after having boarded a train. This should be the same for automated buses. In addition, not every passenger is a healthy, middle-aged individual fluent in the local language. Thus, different problem-solving capabilities on the part of the passengers are to be expected. It should be kept in mind at this point that the goal of automation should not be to put the burden of compensating for automation failures on the passenger.
This is not the point of this investigation and would arguably miss the point of automation technology altogether. However, there is a difference between, e.g., a bus stopping due to engine failure and an inattentive teenager immersed in her/his smartphone blocking the bus's path. The former requires professional intervention, the latter not necessarily.

Appropriate cueing

Directly related to how diffusion of responsibility might be resolved is the issue of appropriate cueing. As it turned out, the fact that incident-related messages were, from their auditory qualities alone, indistinguishable from regular announcements was a contributing factor to passengers paying less attention in case of an incident than they might have otherwise. Auditory cueing before announcements is not a novel concept and is nowadays standard, e.g., in trains, supermarkets, airports, and similar contexts. However, such cues are usually uniform, which means there is one cue with the sole function of signifying that information content is to follow, but without specifying the type of content. Brown and Laurier [22] suggest using initial motions to cue the driver of a level 3+ vehicle toward the intended maneuver before it is executed. While this cannot be directly applied to the automated shuttle context, as there is no driver and most situations are not related to specific driving maneuvers, the underlying principle carries over to the channel that is already being used for passenger interaction. Thus, if the primary information channel for non-emergency information in the vehicle is the auditory one, then cueing should be done via the same channel. While this may sound trivial in itself, further differentiation is necessary for the individual cues and the situations they relate to. Different cues should relate to different types of content and be consistent with standard (i.e., not incident-related) communication inside the bus. Regular announcements should be clearly distinguishable from non-regular announcements before they occur, so that the passengers can appropriately adjust their attention beforehand. These need not be limited to auditory cues either. Ambient lighting to visualize the bus status (driving, planned stop, unplanned stop, approaching stop, about to start driving, etc.) can similarly be used to direct passengers' attention to incoming situation-relevant information.
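As a design sketch of such content-specific cueing (cue names, categories, and light colors below are our suggestions, not study findings), distinct auditory signatures and ambient-light states could be bound to announcement categories:

```python
# Design sketch only: category-specific cues played before the spoken content,
# plus an ambient-light channel mirroring the bus status. All values invented.
CUES = {
    "regular": "chime_single.wav",    # e.g., "Next stop: ..."
    "irregular": "chime_triple.wav",  # e.g., unplanned stop, obstacle
    "emergency": "alert_tone.wav",    # e.g., evacuation instructions
}

AMBIENT_LIGHT = {
    "driving": "white",
    "planned_stop": "green",
    "unplanned_stop": "amber",
}

def play(path: str) -> None:
    """Placeholder for the intercom playback call (see the earlier sketch)."""

def announce(category: str, speech_file: str) -> None:
    play(CUES[category])  # cue first, so passengers can adjust their attention ...
    play(speech_file)     # ... then the spoken announcement itself
```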
Open door policy?

An interesting point, which came up several times across all studies and scenarios, was related to the bus doors. Quite understandably, the participants expressed a desire to be able to exit the bus at any time when confronted with incident situations. While it would be difficult to argue that passengers should want to be trapped inside an automated vehicle, there is the valid question of whether it is really the best solution to have doors that can be opened freely at all times. With intended use comes unintended use, and a door that can be opened at all times can be opened while the bus is driving as well. This might not be that big of an issue if the bus has just departed or if it is an urban shuttle gently driving along in an otherwise pedestrian zone. However, things look different when the vehicle is driving on a country road at 60 km/h, where both boarding and departure would be neither safe nor sensible. Thus, an automated vehicle will need to be able to assess the driving environment also with regard to passenger safety, and not just with regard to what is necessary to execute driving maneuvers. At the very least, the vehicle should provide recommendations on whether it is safe to leave the vehicle and which potential hazards could be expected if passengers do so (or whether they should stay in the vehicle because responders are on the way). Even then, there is no driver who would, in case of an accident, take safety precautions so that no further accidents occur. As one participant in study 1 mentioned: "A [human] bus driver would simply put up a warning triangle but here you just have this bus standing in the middle of the road. Maybe it should have a display, just like the police cars do sometimes, that says 'Warning, accident ahead!' or something similar." But there are not just accidents; there are also other situations of potential danger to passengers. If someone is trying to, e.g., rob a passenger in the bus, then one of the safest measures would be to make it as easy as possible for the passenger to escape. On the other hand, imagine a situation where a vulnerable individual inside the bus is approaching a bus stop, alone and at night, and it is clear that the unsavory-looking individuals waiting at the stop are just waiting for their victim to arrive; after all, there is no human driver present and no way for the passenger to keep the doors from opening [24]. As one participant asked: "Can I lock myself inside the bus, so nobody can come inside?" A valid question, which is unlikely to have a single answer that works for all contexts.

The "lazy" passenger

On a more general level, another finding of our studies is that we might need to rethink the nature of riding in a bus, as the human's role in it might change with the introduction of automated buses. Riding in a bus with a driver ensures that I am the passenger and play only a very passive role. I can operate my smartphone or just listen to music and enjoy the environment. The driver will solve any problems, and if there is anything I am required to do, I am told to do so by another person (e.g., when showing my ticket). It is not required to take any other action nor to take over responsibilities, save for what is necessary to get on and off the bus at my desired stops. This is how we are socialized, given how public transportation has worked for decades. However, unless the ideal goal of flawlessly working, fully automated public transportation is realized, riding in an automated bus might require active involvement from passengers at key points or instances. But this is, and rightfully so, not how bus riding is seen today. A shift in expectations of what a ride in an automated bus entails is needed. And, paradoxically enough, this could mean a higher potential for passenger involvement when the bus is fully automated. If there is the possibility that passengers have to take over responsibilities, this has to be communicated in advance; so far in advance, in fact, that it is part of their expectations before boarding the bus, so that it is part of their informed choice of which means of transport to use. Clear transparency about what might or might not be required from passengers is needed; otherwise, the user experience in automated buses might face a bumpier road ahead than expected.

Limitations

Due to legal requirements, it is not yet possible to operate vehicles of SAE level 4 and above in Austria. Regardless of the vehicle automation level, each vehicle must have a designated human driver who is capable of intervening and is responsible in case of incidents (in this case, this role was taken by the operator).
For the user studies, this meant that it was not possible to simulate an environment fully identical to a future level 4 or 5 automation scenario, as the operator was always present and could not be disguised as a passenger, due to semi-regular interaction with the operator control unit in the bus, which was visible to the passengers. In order to simulate full automation as well as possible, the passengers were instructed to ignore the operator, and the operator was instructed not to communicate with the passengers until a trial had been concluded. Still, it should be expected that any observed effects might be different (and potentially stronger) in a vehicle without a human operator present. With respect to external validity and experiencing scenarios in real life, we faced some limitations. The choice to design the threat scenarios as interactive discussions was made primarily for reasons of safety and feasibility. The more realistic such a threat is to a participant, the higher the likelihood that said participant would respond in an unexpected manner. This presents a risk to both the researchers and other participants. For the same reason, ethical approval (which we did have for the presented study) for such a procedure would have been difficult to obtain. Presenting these scenarios in a realistic way also requires appropriate immersion of the participants, potentially requiring conducting the study at night, appropriate acting skills on the part of the researcher, and other such factors. If, at any point during the study, the immersion is lost, then so is the external validity. Apart from this, the "missed the bus" scenario is simply very hard to time properly, so that the participants can neither leisurely stroll to the bus and easily board it nor miss it before they even have a chance to reach it. The secondary task before boarding needs to be carefully calibrated so as not to distract the participants too much while, at the same time, not appearing to specifically set them up just to catch the bus and nothing else, or external validity is lost once again. A contextual discussion, while starting from a lower level of external validity, does not run these risks to the same degree. By having the discussion in the bus as it was driving, we attempted to provide a safe minimal degree of immersion, as the participants would articulate their points from their passenger role in that moment. Our methodological choice has lowered the external validity in comparison with more realistic studies. The number of participants in both studies (N = 13 and N = 11) is not sufficient to draw meaningful quantitative conclusions. Thus, quantitative means were used sparingly and intended as a basis for the qualitative results. As with any qualitative study, the focus is on identifying and attempting to explain individual phenomena, while its results might or might not hold on a larger scale and across different demographics. Investigation of the perception and management of incidents in automated public transport on a larger scale is therefore subject to further research, which is planned to continue in early 2020.

Conclusion

In this paper, we presented two studies that investigated the problem of passenger information and interaction in a driverless shuttle bus when the bus faces an incident. We found a number of factors surrounding the aspects of diffusion of responsibility, proper announcement/interaction cueing, and door behavior.
We found that diffusion of responsibility can be an issue and can effectively halt passenger interaction with the interface(s) unless it is resolved. Clear and focused instructions aimed at the passengers are needed when passengers have to take action. Regular announcements and information about incidents need to be clearly distinguishable from one another via appropriate cues. In more concrete terms, we found and suggest that, in case of an incident, audio information about the incident is needed. This information needs to be repeated at least every 5 min. Additionally, textual information about the duration of the interruption and about the possibility to get further information needs to be provided. For longer interruptions, communication with a remote operator via an intercom is needed. Passengers need to have the possibility to exit the bus if desired, although this will need to be handled on a context-sensitive basis in order to ensure a good balance between passenger freedom and safety. CCTV is helpful to increase perceived safety but is subject to potential privacy issues. Information on the purpose and consequences of emergency and information buttons needs to be very precise; for example, does pressing the emergency button alert the police/ambulance or the control center? The use of a chatbot in an emergency situation is not recommended, since it might lead to delays. If a chatbot is used, then its volume needs to be such that it is audible for all passengers. Allocation of responsibility should be done explicitly (e.g., "the person next to the intercom"). A distinction between routine voice information (e.g., estimated time of arrival) and serious incident-related information can be achieved by a sound signature or auditory icons. While the human passenger in an automated bus should not be expected to serve as a reliable fallback strategy in case of incidents, incidents can and will occur, and appropriate strategies need to be devised for when a human driver is no longer present.
Huntingtin facilitates polycomb repressive complex 2

Huntington's disease (HD) is caused by expansion of the polymorphic polyglutamine segment in the huntingtin protein. Full-length huntingtin is thought to be a predominant HEAT repeat α-solenoid, implying a role as a facilitator of macromolecular complexes. Here we have investigated huntingtin's domain structure and potential intersection with epigenetic silencer polycomb repressive complex 2 (PRC2), suggested by shared embryonic deficiency phenotypes. Analysis of a set of full-length recombinant huntingtins, with different polyglutamine regions, demonstrated dramatic conformational flexibility, with an accessible hinge separating two large α-helical domains. Moreover, embryos lacking huntingtin exhibited impaired PRC2 regulation of Hox gene expression, trophoblast giant cell differentiation, paternal X chromosome inactivation and histone H3K27 tri-methylation, while full-length endogenous nuclear huntingtin in wild-type embryoid bodies (EBs) was associated with PRC2 subunits and was detected with trimethylated histone H3K27 at Hoxb9. Supporting a direct stimulatory role, full-length recombinant huntingtin significantly increased the histone H3K27 tri-methylase activity of reconstituted PRC2 in vitro, and structure–function analysis demonstrated that the polyglutamine region augmented full-length huntingtin PRC2 stimulation, both in HdhQ111 EBs and in vitro, with reconstituted PRC2. Knowledge of full-length huntingtin's α-helical organization and role as a facilitator of the multi-subunit PRC2 complex provides a novel starting point for studying PRC2 regulation, implicates this chromatin repressive complex in a neurodegenerative disorder and sets the stage for further study of huntingtin's molecular function and the impact of its modulatory polyglutamine region.
INTRODUCTION

In 1993, genetic studies identified the CAG trinucleotide repeat mutation that causes Huntington's disease (HD) (1). This dominantly inherited disorder is characterized by loss of brain neurons, especially in the striatum, and the inexorable onset of motor, cognitive and behavioral symptoms (2). The HD mutation comprises expanded versions of a polymorphic CAG repeat that elongate a variable polyglutamine segment in the huntingtin protein, from the normal range (8-37 residues) to 38 or more residues (1). This polymorphic polyglutamine segment is thought to confer a subtle structural alteration and gain of huntingtin function (3,4). Indeed, genotype-phenotype studies have demonstrated that the polyglutamine region normally modulates huntingtin function in cellular energy metabolism (3,5,6), and the expression of expanded polyglutamine tracts within the endogenous huntingtin protein is associated with dominant phenotypes in model systems (7,8). However, while polyglutamine, alone or embedded in short polypeptides, exhibits striking physical properties (9,10), typically measured in aggregation assays (11), little is known of the molecular impact of the polyglutamine region on the structure and function of the full-length huntingtin protein, though this information is needed to fully understand huntingtin biology and the trigger of HD pathogenesis. Full-length huntingtin is now thought to comprise a large α-helical solenoid (repeated units arranged in a continuous superhelix/coil) (12) rather than a globular protein, as its 3144 amino acid length was predicted to be entirely spanned by loosely conserved HEAT/HEAT-like repeats (13). Indeed, HEAT repeats, curling anti-parallel α-helical units, were first recognized within huntingtin (14) and were named for huntingtin and for elongation factor 3 component eIF3k, protein phosphatase 2A regulatory subunit PR65/A and target of rapamycin TOR1 (14), though this structural element now defines a larger class of proteins (15). HEAT and HEAT-like repeats may augment other motifs (e.g. eIF3k and TOR1) or may encompass the entire protein (e.g. PR65/A and importin-β), stacking to confer dramatic conformational flexibility and multi-contact protein interaction topologies suited to the role of these α/α-solenoid molecules as facilitators of dynamic multi-subunit complexes. PR65/A, for example, facilitates diverse phosphatase holoenzymes involved in cell signaling and metabolism (16), while importin-β serves distinct nuclear transport complexes that engage a variety of cargos (17,18). The emerging molecular view of huntingtin as a predominant HEAT/HEAT-like α-helical facilitator protein is supported by the results of studies of the full-length protein. For example, endogenous full-length murine huntingtin exhibited distinct subcellular epitope patterns, implying multiple alternate conformations (8,19), and the first circular dichroism spectra of recombinant full-length human huntingtin denoted a predominantly α-helical molecule (20).
Moreover, while the specific players in most cases remain to be identified, huntingtin is thought to be multi-functional, participating in diverse subcellular processes ranging from vesicle trafficking to energy metabolism and gene transcription (21). As an approach to defining huntingtin's essential functional molecular interactions in a mammalian system, we, and others, are studying the consequences of targeted inactivation of the murine HD gene. Huntingtin was initially shown to be required in the extraembryonic tissue (22) to bypass a block early in embryonic development just before head-fold formation (23–25). Our subsequent analysis of huntingtin null embryos then revealed a constellation of other morphological and molecular phenotypes, including anterior streak and mesoderm patterning deficits, and failure to properly silence growth (e.g. Nodal, Fgf8) and transcription (e.g. Evx1, T) factor genes, that also were reminiscent of embryos deficient in polycomb repressive complex 2 (PRC2), due to loss of core components Ezh2 (26), Suz12 (27) or Eed (28), thereby implying a possible intersection of huntingtin with this epigenetic silencer (29). Here we have investigated the α-solenoid view of the HD protein, by determining the domain organization of a set of full-length recombinant human huntingtins and by utilizing these reagents, in conjunction with targeted mutations at the murine HD locus that probe the endogenous protein, to examine the specific hypothesis that huntingtin may assist PRC2. The results of our analysis nominate huntingtin as a novel α-solenoid stimulator of this multi-subunit histone H3 lysine 27 (H3K27) methyltransferase complex (30–33).

RESULTS

Recombinant full-length huntingtin hinged α-helical domain structure

To determine whether full-length huntingtin might fulfill the flexible segmental organization of an α/α-solenoid protein, we analyzed a set of full-length (>3144 residues) recombinant human FLAG-tag huntingtins, with polyglutamine tracts of 23, 32 or 43 residues. The proteins were expressed in Sf9 insect cells from Baculovirus vectors, generated as described in Materials and Methods, and, though large, each recombinant protein was enriched by column chromatography from Sf9 insect cell extracts (Materials and Methods), yielding a sharp elution peak (see Materials and Methods, Supplementary Material, Fig. S1). As illustrated for Q23-huntingtin (Supplementary Material, Fig. S1), the enriched protein was full-length (350 kDa) and of high purity, yielding a single Coomassie blue stained band that immunoblot revealed was detected by N- and C-terminal huntingtin antibodies. Analysis of the recombinant protein by circular dichroism produced spectra that confirmed a predominant α-helical structure (data not shown), as previously reported (20). To investigate the protein's overall domain organization, FLAG-tag huntingtin was examined by negative stain electron microscopy (EM). Analysis of about 10 000 interactively selected Q23-huntingtin particles revealed 100 structurally distinguishable classes (Supplementary Material, Fig. S2A), illustrated by the representative class averages shown in Figure 1A. These structural classes were also similar for Q32 huntingtin and Q43 huntingtin (Supplementary Material, Fig. S2B), revealing that the polyglutamine region did not overtly alter huntingtin's remarkable conformational variability.
Consistent with a flexible segmental α-helical domain organization, limited tryptic digestion of the recombinant protein readily yielded two large domains (a ~150 kDa N-terminal domain and its C-terminal counterpart), and fragment analyses (Supplementary Material, Tables S1 and S2) located a major accessible hinge region between residues 1184–1254. Notably, the smaller fragments produced by continued digestion (Fig. 1B) revealed that the extreme N-terminus, with its polyglutamine segment, was buried within the 150 kDa domain, on an initially inaccessible ~60 kDa subdomain. This was consistent with the results of CHOP, for structural domains (34,35), which predicted residues 124–971 in a super-helical architecture, with significant similarity (2.5e−20) to PR65/A (PDB I.D. 1b3u). Multiple sequence alignment of eight representative chordate huntingtins (sea squirt to human) (see Materials and Methods), summarized in Figure 1C and the phylogram in Figure 1D, revealed that the conservation of the unstructured hinge segment and the pattern of α-helical structure were conserved through 500 million years of evolution, strongly implying that huntingtin's overall modular organization and conformational flexibility are critical for its biological function.

Impaired PRC2 function in the absence of huntingtin

The hypothesis that huntingtin may facilitate PRC2 was first explored in vivo by examining huntingtin null Hdh ex4/5 homozygote embryos and cultured EBs for molecular epigenetic phenotypes reported in PRC2-deficient embryos (26–28,36). Though Hdh ex4/5 homozygote embryos at embryonic stage E7.5 have previously been shown to properly localize stage-appropriate markers such as Otx2, Hnf3b and Hesx1 (29), the results of whole mount in situ hybridization, in Figure 2A, revealed that huntingtin null embryos failed to properly repress PRC2 regulated Hox gene expression, as evidenced by ectopic Hoxb1, Hoxb2 and Hoxb9 mRNA. Moreover, female huntingtin null embryos exhibited decreased differentiation of trophoblast giant cells, marked by PL-1 mRNA expression, which normally requires proper obligatory silencing of the paternally inherited X chromosome. Indeed, in the absence of huntingtin, female embryos inheriting a paternally transmitted X chromosome marked by a GFP-transgene displayed inappropriate reactivation of GFP-signal in cells of the extra-embryonic tissue, though random X chromosome inactivation in the embryo proper appeared to be normal.

[Figure 1 caption fragment: domain map with tryptic fragments (Supplementary Material, Tables S1 and S2) and major trypsin cleavage site (asterisk); huntingtin (open line) depicted with the polyglutamine tract (black block) and amino acid coordinates (above), NORSp predicted disordered regions (light blue) matching the predictions of PROF, where pHsec, pEsec and pLsec give the probability (1 = high, 0 = low) for helix (red), strand (blue) and neither helix nor strand (green); below this, compressed multiple sequence alignment of huntingtin from human and seven chordates, with increasing intensity of blue shading for residues identical in 4 to 8 organisms and physico-chemical conservation (from Jalview) shaded from dark brown (least) to bright yellow (most), with height corresponding to increasing conservation. (D) Phylogram based upon alignment of huntingtin homologues for the eight representative chordates, with branch lengths proportional to the inferred evolutionary change.]
Investigation of histone H3K27 methylation in Hdh ex4/5 null EBs, developing in cell culture from embryonic stem cells, demonstrated that huntingtin was required for efficient re-establishment of global tri-methylated histone H3K27. The level of tri-methylated histone H3K27 was decreased at day 2 (data not shown) and day 4, as illustrated by immunostaining and immunoblot analysis in Figure 2B, whereas di-methylated histone H3K27 was not similarly affected (Supplementary Material, Fig. S3B), indicating that lack of huntingtin specifically affected H3K27 tri-methylation. The phenotypes that indicated impaired PRC2 function in huntingtin-null embryos were less severe than complete loss of PRC2 methyltransferase activity, due to lack of the Ezh2 catalytic subunit, and appeared milder than phenotypes due to loss of Eed. This is consistent with a role for huntingtin, not as a core PRC2 component, but as an essential, potentially dynamic, facilitator of PRC2 activity during development.

Full-length huntingtin stimulated PRC2 tri-methyltransferase activity

Developing day 4 EBs were chosen as a system amenable to biochemical analysis, to evaluate the possibility that full-length huntingtin might intersect with PRC2 in the nucleus. Full-length huntingtin was detected by immunoblot analysis in nuclear, as well as cytoplasmic, extracts (Supplementary Material, Fig. S3C), and antibody reagent AP194, which immunostained nuclear though not cytoplasmic conformations of full-length huntingtin (8), revealed huntingtin in the nuclei of cells in all three germ-layers, with Ezh2-stain, especially in the outermost endodermal cells (Fig. 3A). Furthermore, the results of analysis of nuclear extracts by gel filtration chromatography demonstrated that full-length huntingtin was co-eluted with PRC2 subunits Ezh2 and Suz12 (Supplementary Material, Fig. S3D). Analysis by co-immunoprecipitation, with specific antibody reagents, yielded a proportion of full-length huntingtin, with Ezh2 and Suz12, as revealed by immunoblot analysis of the precipitated proteins shown in Figure 3B. In addition, as summarized in Figure 3C, chromatin immunoprecipitation (ChIP), with anti-huntingtin or anti-histone H3K27me3, enriched Hoxb9 sequences from wild-type, though not from huntingtin null day 4 EB nuclei, thereby placing huntingtin at Hoxb9 chromatin in wild-type cells and supporting a functional role for huntingtin in stimulating histone H3K27 trimethylation. This interpretation was confirmed by the results of in vitro experiments, to determine whether recombinant full-length human huntingtin would interact with and alter the histone H3K27 methyltransferase activity of reconstituted PRC2 in a previously reported in vitro assay (30–33) (see Materials and Methods). As shown in the immunoblot in Figure 4A, full-length FLAG-tag Q23 huntingtin added to recombinant PRC2 was co-immunoprecipitated with Ezh2 and Suz12, and, as illustrated in Figure 4B, the recombinant protein significantly increased PRC2-specific histone H3K27 methylation, as judged by the intensity of bands of incorporated tritium. The stimulatory effect of full-length huntingtin, compared with reactions without huntingtin or with control peptides, was observed over a range of huntingtin (Supplementary Material, Fig. S4A) and nucleosomal array (Supplementary Material, Fig. S4B) concentrations. Furthermore, consistent with the finding that huntingtin was needed to stimulate tri- but not di-methylation of histone H3K27 in vivo (Fig. 2B, Supplementary Material, Fig. S3B),
the results of immunoblot analysis of the in vitro PRC2 reaction products demonstrated that recombinant huntingtin specifically enhanced histone H3K27 tri-methylation but not di-methylation (Supplementary Material, Fig. S4C).

The polyglutamine region modulated huntingtin PRC2 stimulation

Functional interactions involving globular proteins are typically validated by structure–function experiments that entail targeted disruption of a single point-to-point protein interaction motif. However, consistent with previous evidence of striking conformational variability in vivo, our analysis of recombinant huntingtin revealed a flexible, non-globular HEAT repeat α-helical domain organization (Fig. 1, Supplementary Material, Fig. S2). For other HEAT solenoids, protein interaction entailed dramatic conformational switches and complex multiple points of contact along the idiosyncratic contours formed by HEAT repeat packing, not accurately mapped by the methods that determine the sites of docking interactions between globular proteins (37). Therefore, in the absence of detailed knowledge of huntingtin's likely complex interactions with PRC2, and perhaps its chromatin substrate, we assessed the potential impact of huntingtin's only known naturally occurring functional polymorphism. The polyglutamine region has been shown to subtly but significantly modulate the consequences of endogenous full-length murine and human huntingtin in vivo (5,6). Moreover, our analysis of recombinant full-length huntingtins demonstrated that the polymorphism, even into the expanded HD range, did not overtly alter the protein's overall structure (Supplementary Material, Fig. S2), consistent with the finding that this modulatory region did not impair (38) and indeed was not needed for huntingtin's early developmental activity (5). The potential effect of the polyglutamine region on PRC2 activity was first assessed by analysis of day 4 EBs expressing endogenous murine huntingtin with 111-glutamines, from the previously described Hdh Q111 knock-in allele (19,39). Compared with wild-type EBs, and in contrast to huntingtin-null EBs, the cells of Hdh Q111/Q7 EBs exhibited elevated levels of tri-methylated histone H3K27, by immunostaining and immunoblot analysis, as shown in Figure 2B, though not of di-methylated H3K27 (Supplementary Material, Fig. S3B). Moreover, ChIP analysis revealed increased enrichment of huntingtin and tri-methylated histone H3K27 at Hoxb9, as summarized in Figure 3C. Consistent with these findings, as shown in Figure 4C, in the in vitro assay with reconstituted PRC2, full-length recombinant human proteins with polyglutamine segments longer than 23 residues (32 and 43 residues) progressively increased huntingtin stimulation of histone H3K27 methylation. Thus, in both the cell-based and the molecularly-defined structure–function experiments, the impact of the polyglutamine modulatory region confirmed huntingtin's role in facilitating the PRC2 histone H3K27 methyltransferase complex.

DISCUSSION

The expansion of the polymorphic polyglutamine region in the huntingtin HEAT repeat protein is the root genetic cause of HD pathogenesis. Despite this compelling reason, and though this ancient protein is of interest because it is the founding member of a growing class of HEAT repeat proteins, huntingtin's molecular organization and function have attracted relatively little attention.
The results of our genetic structure–function experiments extend the single report of native huntingtin's predominant α-helical nature (20), by demonstrating features consistent with a flexible α-solenoid organization and by providing strong empirical support for huntingtin as a dynamic facilitator of at least one multifunctional macromolecular complex, the PRC2 methyltransferase. Probing the structural organization of full-length native recombinant human huntingtin by circular dichroism spectroscopy, we confirmed the protein's previously reported predominant α-helical nature (20). However, in contrast to that report, our purification strategy yielded a sharp chromatographic peak comprising only full-length huntingtin, without the reported 220 kDa piece of huntingtin (starting at residue 622), which we speculate may have arisen from proteolysis of partially denatured protein produced by the harsher purification conditions utilized in that study. Limited proteolysis of the native full-length recombinant huntingtin initially yielded two major products, an N-terminal arm and a C-terminal segment (starting at residue 1254), implying a domain organization comprising two large, nearly equal sized α-helical arms, separated by an accessible hinge region.

[Figure 4 caption fragment: PRC2 reactions in the absence (−) and presence of 2 nM recombinant huntingtins with different polyglutamine sizes; below, a plot of quantified band intensities, relative to baseline PRC2 activity, demonstrating a progressive increase in huntingtin's stimulation of PRC2 as polyglutamine size is increased (n = 3; Q43Htt or Q32Htt versus no Htt, *P < 0.017; Q43Htt versus Q23Htt, **P < 0.045).]

Continued digestion of these domains yielded additional bands, including a ~60 kDa N-terminal fragment bearing the polyglutamine region that implied cleavage within an unstructured region located at approximately residue 500. Thus, though it may be susceptible to proteolytic cleavage when huntingtin is denatured, this sub-domain was buried within the 150 kDa N-terminal arm of the native protein, likely because this domain may assume a super-helical structure resembling the PR65/A α-solenoid (PDB I.D. 1b3u). The segmental α-helical domain organization and the conformational flexibility of full-length huntingtin, revealed by negative stain EM, are general structural features expected of a predominant HEAT repeat α/α-solenoid protein. However, high-resolution analysis will be needed to prove the continuous helical structure of the molecule. Indeed, though the elongated and often curved shapes formed by stacking of adjacent HEAT and HEAT-like repeats are similar for different HEAT repeat proteins, the precise contours are determined by the amino acid sequences of these degenerate structural elements. To date, crystal structures for the first 60 amino acids of huntingtin (encoded by exon 1) have been analyzed, directly demonstrating the α-helical secondary structure of the first 17 amino acids, the structural variability of the abutting 17 residue polyglutamine segment and the helical arrangement of the adjacent polyproline rich segment (40). Knowledge of the domain organization of native huntingtin, which was not accurately predicted using various pieces of the protein (41), should now spur efforts to determine the higher-order structure of the functional protein.
In support of huntingtin's role as a facilitator, implied by its predominant HEAT domain structure, the lack of huntingtin led to impaired PRC2 epigenetic gene and chromatin silencing function in embryos and impaired re-establishment of global histone H3K27 tri-methylation in developing EBs, whereas full-length recombinant human huntingtin specifically stimulated the tri-methyltransferase activity of reconstituted PRC2. Furthermore, as implied by co-immunoprecipitation of the full-length endogenous and recombinant proteins with core PRC2 members, huntingtin's direct role in stimulating PRC2, in vivo and in vitro, was confirmed by the progressive effect of huntingtin's polymorphic polyglutamine region, previously recognized as a modulatory segment of full-length endogenous murine (5) and human (6) huntingtin. Though our data reveal that huntingtin's role in facilitating PRC2 tri-methyltransferase activity is important for normal murine embryonic development, the timing and duration of this interaction, as well as the subset of PRC2-target genes that may, like Hoxb9, be modulated, are areas that remain to be investigated. Though it is not clear exactly how huntingtin may interact with PRC2 and/or its nucleosomal histone substrate, it is unlikely to stimulate PRC2 in the same manner as PHF1, a recently reported globular PRC2 accessory protein that is thought to contact the Ezh2 catalytic subunit via its PHD finger domains (42,43). Indeed, as discussed earlier, full-length huntingtin's proposed α-helical solenoid structure promises to offer a novel mode of PRC2 regulation. It seems reasonable, from the striking alternate sub-cellular epitope patterns of the full-length endogenous protein, that this will entail dramatic conformational switches and complex contacts along the topological contours formed by stacking of the protein's adjacent HEAT/HEAT-like repeats, as reported for other predominant α-solenoid proteins (16–18). The availability of a system for purifying native full-length recombinant huntingtin, and empirical knowledge of its domain organization, should now enable high-resolution structural studies to determine the details of huntingtin's functional molecular interactions with PRC2/chromatin, including the potential structural role of the modulatory polyglutamine region. In summary, the proposal that full-length huntingtin comprises a large hinged α-helical solenoid, which serves as a facilitator of the PRC2 complex, now provides a nuanced view of the molecule and its polymorphic polyglutamine region, thereby setting the stage for defining other functional complexes that full-length huntingtin may assist. Furthermore, it provides novel starting points for understanding the in vivo regulation of mammalian PRC2 and suggests that enhanced activity of this epigenetic regulator merits investigation as a potential contributor to HD neurodegeneration.

MATERIALS AND METHODS

Human FLAG-huntingtin insect vector expression clones

pFASTBAC1 vector (Invitrogen), with a FLAG-tag sequence adjacent to the unique BamHI site, was cleaved and a 10 kb EagI-BssHII fragment encoding human huntingtin (23 glutamines) (Genbank accession number L12392) was inserted. pFASTFLAGHttQ23 encodes a FLAG-tag-KGERGAASRPEASGDCRAGRETA polypeptide in frame with the 3144 amino acid huntingtin sequence (FLAG-Q23 huntingtin).
pALHDQ32 and pALHDQ43, encoding full-length human FLAG-Q32 and -Q43 huntingtins, respectively, were generated in pFASTBAC1, modified to insert a polylinker containing FLAG, a 6X histidine tag sequence and a TEV protease recognition site, between the unique BamHI–KpnI sites. NcoI-XhoI HD cDNA fragments, encoding huntingtin amino acids 1–171 with different size polyglutamine tracts (Q32, Q43) (10,44), were inserted between the unique NcoI and XhoI sites, followed by in frame insertion of a 9046 bp human HD cDNA XhoI-SacII fragment, encoding human amino acids 172–3144 (10,44). All clones were verified by full DNA sequence analysis.

FLAG-huntingtin purification

FLAG-tag huntingtin was expressed from pFAST-FLAGHttQ23 in the Bac-to-Bac Baculovirus Expression system (Invitrogen). The Sf9 cell lysate, generated by freeze/thawing in buffer A (50 mM Tris-HCl pH 8.0, 500 mM NaCl, 5% glycerol and complete protease inhibitors), was spun at 15 000 rpm (2 h). The supernatant was incubated with M2 anti-FLAG beads (Sigma) (2 h, 4°C). FLAG-huntingtin was eluted with buffer (50 mM Tris-HCl pH 8.0, 300 mM NaCl, 5% glycerol) containing 0.4 mg/ml FLAG peptide and loaded onto a calibrated Superose 6™ 10/300 column, equilibrated with 50 mM Tris-HCl pH 8.0, 150 mM NaCl. FLAG-huntingtin eluted discretely and was estimated to be at least 90% pure by Coomassie staining. Recombinant FLAG-Q32 and -Q43 huntingtins were purified in exactly the same manner. Comparisons of huntingtins with different polyglutamine sizes were performed with an equal amount of each protein, as judged by Bio-Rad DC protein assay and R-250 Coomassie blue staining of bands on 6% SDS-PAGE, which controlled for potential differences in purity and confirmed equal amounts of protein. The molarity for all huntingtins was calculated using a molecular weight of 350 kDa deduced from the human cDNA sequence (GenBank accession number L12392).

EM and image processing

Samples were prepared by negative staining with 0.75% (w/v) uranyl formate as described previously (45). Images were collected with a Tecnai T12 electron microscope (FEI, Hillsboro, OR) equipped with an LaB6 filament and operated at an acceleration voltage of 120 kV. Images were recorded on imaging plates at a nominal magnification of 67 000× and a defocus value of −1.5 μm using low-dose procedures. Imaging plates were read out with a Ditabis micron imaging plate scanner (DITABIS Digital Biomedical Imaging System AG, Pforzheim, Germany) using a step size of 15 μm, a gain setting of 20 000 and a laser power setting of 30%. 2 × 2 pixels were averaged to yield a pixel size of 4.5 Å on the specimen level. Using BOXER display (EMAN software package) (46), 10 061 particles interactively selected from 53 images were windowed into 64 × 64 pixel images using the SPIDER software package (47), which was also used for all other image processing procedures. The particles were rotationally and translationally aligned and subjected to 10 cycles of multi-reference alignment, with K-means classification, specifying 100 output classes, after each round. The references used for the first multi-reference alignment were randomly chosen from the raw images.
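The classification step above (cycles of multi-reference alignment with K-means assignment into a fixed number of classes, seeded from randomly chosen raw images) can be illustrated with a minimal sketch. This is not the SPIDER pipeline: real multi-reference alignment re-aligns each particle against every reference each cycle, while the toy below only performs the K-means assignment and class-averaging on already-aligned, flattened images, with synthetic data standing in for the windowed particles.

```python
import numpy as np

def kmeans_classify(images, n_classes, n_cycles=10, seed=0):
    """Toy K-means classification of aligned, flattened particle images.

    images: (n_particles, height*width) array. Returns (class_averages, labels).
    """
    rng = np.random.default_rng(seed)
    # Seed references from randomly chosen raw images, as in the paper.
    refs = images[rng.choice(len(images), size=n_classes, replace=False)].copy()
    labels = np.zeros(len(images), dtype=int)
    for _ in range(n_cycles):
        # Squared Euclidean distances via the expansion |a-b|^2 = |a|^2 - 2ab + |b|^2,
        # which avoids building a huge (particles x classes x pixels) array.
        d2 = ((images ** 2).sum(axis=1)[:, None]
              - 2.0 * images @ refs.T
              + (refs ** 2).sum(axis=1)[None, :])
        labels = d2.argmin(axis=1)
        # Update each class average from its assigned particles.
        for k in range(n_classes):
            members = images[labels == k]
            if len(members):
                refs[k] = members.mean(axis=0)
    return refs, labels

# Synthetic stand-in for the 10 061 windowed 64 x 64 particles.
particles = np.random.default_rng(1).normal(size=(2000, 64 * 64))
class_avgs, labels = kmeans_classify(particles, n_classes=20)
print(class_avgs.shape, np.bincount(labels, minlength=20)[:5])
```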
Huntingtin structure prediction and evolutionary conservation

Human huntingtin amino acid sequence (Homo sapiens; NP_002102) was analyzed for predicted secondary structure using NORSp (48) and PROF (Profile network prediction HeiDelberg) (49) from the PredictProtein server (http://www.predictprotein.org/) (50). Huntingtin orthologues for the multiple sequence alignments: dog (Canis familiaris; XP_536221), mouse (Mus musculus; AAA89100), opossum (Monodelphis domestica; XP_001364862), chicken (Gallus gallus; XP_420822), zebrafish (Danio rerio; NP_571093), lancelet (Branchiostoma floridae; ABP04240) and sea squirt (Ciona intestinalis; NP_001119700), were aligned using ClustalW2 (European Bioinformatics Institute: http://www.ebi.ac.uk/Tools/clustalw2/) (51) and viewed and edited using Jalview 2.3 (http://www.jalview.org/) (52). The initial alignment used the ClustalW2 server default parameters (except iteration:tree and numiter:8). For secondary structure predictions, 'extra' sequences of greater than five amino acids relative to human huntingtin were deleted from orthologues (apparent insertions of 23, 50 and 24 residues in zebrafish, lancelet and sea squirt at human 1051, 10 and 20 amino acids in sea squirt and lancelet at human 2145, 11 residues in lancelet at human 2195 and 26 residues in sea squirt at human 2642). After re-alignment, gaps of <5 residues in the human sequence were removed from any of the other sequences. The final 'no-gap' set was re-aligned, producing the final multiple alignment. This was exported as an image file (PNG format) and compressed in PowerPoint. Jar files of the initial and final alignments are available on request. The physico-chemical properties conserved for each amino acid position were calculated in Jalview (53). CHOP (34,35) predicted tertiary structure, with searches of the CATH protein structure classification database, to identify potential structural domain homologues. FoldIndex was also used to predict unfolded human huntingtin structure (54).

Mice and embryos

Wild-type and Hdh ex4/5/Hdh ex4/5 embryos were obtained from timed matings of Hdh ex4/5/Hdh+ heterozygote mice and genotyped by PCR assay as described (23). The day of plug was defined as E0.5. GFP X chromosome transgene mice were from The Jackson Laboratory (strain 003116).

In situ hybridization

Dissected embryos and decidua were fixed in 4% paraformaldehyde at 4°C, brought through a sucrose gradient (15% sucrose, 30% sucrose), embedded in OCT and sectioned at 10 μm. RNA in situ hybridizations, performed as reported (55), were with an antisense PL-1 probe synthesized with Sp6 and anti-sense Hox probes generated as described previously (56).

Chromatin immunoprecipitation and quantitative PCR

ChIP assays were performed using the Agilent mammalian ChIP-on-chip protocol as specified by the manufacturer (Agilent Technologies), except for huntingtin immunoprecipitation, where mAb 2166 was incubated with chromatin for 2 h at room temperature, to reduce non-specific background. Antibodies were anti-histone H3K27me3 (Abcam ab6002), anti-huntingtin (Millipore mAb 2166) and control IgG (Sigma). Purified input chromatin (25 ng) or immunoprecipitated (IP) DNA (100 ng) from ChIP was used as template in 50 μl reactions containing 25 μl of 2X SYBR Green Master Mix (Applied Biosystems) and 10 pmol of each primer. PCR reactions were performed with an iCycler thermal cycler (Bio-Rad) as follows: 50 cycles of 95°C for 15 s, 54°C for 15 s and 72°C for 15 s. All PCRs were performed in triplicate, and threshold amplification cycle numbers (Tc) from the iCycler software were used to calculate IP DNA quantities as percentages of the corresponding inputs using the following equation: IP DNA as a percentage of input = 2^(ΔTc) × 100, where ΔTc = input DNA Tc − IP DNA Tc. Statistics were analyzed using Student's t-test. Hoxb9 promoter primer sequences for q-PCR were: left primer 5′-TGGCCTTAGGCAGGCTATAA-3′ and right primer 5′-GGCTCTTCCCTTGATCCTTT-3′.
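A minimal sketch of the percent-of-input calculation defined above; the triplicate Tc values are hypothetical placeholders for a Hoxb9 amplicon, not data from the study.

```python
def percent_input(input_tc, ip_tc):
    """IP DNA as a percentage of input = 2**(dTc) * 100,
    where dTc = input DNA Tc - IP DNA Tc, per the equation above."""
    d_tc = input_tc - ip_tc
    return (2 ** d_tc) * 100

# Hypothetical triplicate threshold cycles (input vs. IP).
input_tcs = [24.1, 24.3, 24.2]
ip_tcs = [29.6, 29.9, 29.7]
values = [percent_input(i, p) for i, p in zip(input_tcs, ip_tcs)]
print([round(v, 3) for v in values])  # each ~2% of input
```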
Acid precipitation of histones

Cells were lysed in 5–10 volumes of lysis buffer (10 mM HEPES, pH 7.9, 1.5 mM MgCl2, 10 mM KCl), hydrochloric acid was added to 0.2 M final concentration, incubated (30 min, on ice), and the extract was spun at 11 000 g (10 min, 4°C). The supernatant was dialyzed twice against 20 volumes of 0.1 M acetic acid (1 h) and then dialyzed three times against 20 volumes of water. The acid precipitated proteins were loaded on a 12% SDS-PAGE gel for immunoblot.

Size exclusion chromatography fractionation

Day 4 EB nuclear lysates were subjected to size exclusion chromatography on a pre-calibrated (with size standards listed in the legend) Superose-6™ HR 16/60 column, equilibrated with 20 mM HEPES pH 7.5 containing 1 mM MgCl2, 150 mM NaCl and 0.5 mM DTT. Fractions (1.5 ml) were collected. Blue Dextran was used as void volume marker and was eluted at fraction number 26.

Cell-free assay for PRC2 histone H3 methyltransferase activity

Pre-assembled human PRC2 complex, comprising FLAG-EED, EZH2, SUZ12 and RbAp48 proteins, was purified from Sf9 cells co-infected with the cognate pFastBac1 constructs, by M2 anti-FLAG bead affinity column and Superose 6™ gel filtration chromatography (equilibrated with 50 mM Tris-HCl pH 8.0, 150 mM NaCl, 10% glycerol). The molarity was calculated using a MW of 270 kDa, which assumes one copy of each subunit per complex. The G5E4 nucleosomal array was assembled with Xenopus recombinant histones and G5E4 DNA fragments containing 12 nucleosomal positioning sequences, as reported (58,59). The reconstituted PRC2 activity assay was optimized from a previous method (33). A reaction mixture of 15 μl comprised: 100 mM Tris-HCl pH 8.3, 1 mM DTT, 0.5 μM 3H-SAM, 4 nM PRC2 and 0.025–0.1 μM nucleosomal array. Incubations were at 30°C for 30 min, unless otherwise stated. For comparison of huntingtins with different polyglutamine tracts, reactions were performed at 30°C with 2 nM of each protein for 30 min. Sample buffer was added to stop the reactions, which were subjected to 12% SDS-PAGE, transferred to Immobilon-P SQ membrane (Millipore), and exposed to a phosphorimager screen. The 3H-H3K27 bands detected with Typhoon (ImageQuant software, GE Healthcare Life Science) were quantified with Quantity One software (Bio-Rad). For the scaled-up cold assay, a 60 μl reaction included: 100 mM Tris-HCl pH 8.3, 1 mM DTT, 2 μM SAM, 50 nM PRC2 and 0.1 μM nucleosomal array, with or without 40 nM FLAG-huntingtin (4 h, 30°C). Bands on immunoblots were scanned with a GS-800 Calibrated Densitometer (Bio-Rad). All values were expressed as mean ± 1 SD. The statistical significance was determined by Student's t-test.
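As a worked example of the molarity bookkeeping in this assay (mass = concentration × volume × molecular weight), the sketch below computes the protein mass corresponding to the stated reaction concentrations; the arithmetic is illustrative only and not part of the published protocol.

```python
def nanograms_needed(conc_nM, volume_uL, mw_kDa):
    """Mass (ng) of protein giving conc_nM in volume_uL for a species of mw_kDa."""
    moles = conc_nM * 1e-9 * volume_uL * 1e-6   # mol = (mol/L) * L
    grams = moles * mw_kDa * 1e3                # g = mol * (g/mol)
    return grams * 1e9                          # convert g to ng

# 4 nM PRC2 (270 kDa, one copy of each subunit assumed) in a 15 ul reaction:
print(round(nanograms_needed(4, 15, 270), 2))   # ~16.2 ng
# 2 nM full-length huntingtin (350 kDa) in the same 15 ul reaction:
print(round(nanograms_needed(2, 15, 350), 2))   # ~10.5 ng
```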
2014-10-01T00:00:00.000Z
2009-11-23T00:00:00.000
{ "year": 2009, "sha1": "c0adbf876a6937cf47baed7ffa87f3e1abde22d6", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/hmg/article-pdf/19/4/573/17250520/ddp524.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "a550db4e6bd9572292be7a8eef51df6eb2a1f600", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
18856721
pes2o/s2orc
v3-fos-license
Polarization and Thickness Dependent Absorption Properties of Black Phosphorus: New Saturable Absorber for Ultrafast Pulse Generation

Black phosphorus (BP) has recently been rediscovered as a new and interesting two-dimensional material due to its unique electronic and optical properties. Here, we study the linear and nonlinear optical properties of BP flakes. We observe that both the linear and nonlinear optical properties are anisotropic and can be tuned by the film thickness in BP, completely different from other typical two-dimensional layered materials (e.g., graphene and the most studied transition metal dichalcogenides). We then use the nonlinear optical properties of BP for ultrafast (pulse duration down to ~786 fs in mode-locking) and large-energy (pulse energy up to >18 nJ in Q-switching) pulse generation in fiber lasers at the near-infrared telecommunication band ~1.5 μm. We observe that the output of our BP based pulsed lasers is linearly polarized (with a degree-of-polarization ~98% in mode-locking, >99% in Q-switching, respectively) due to the anisotropic optical property of BP. Our results underscore the relatively large optical nonlinearity of BP with unique polarization and thickness dependence, and its potential for polarized optical pulse generation, paving the way to BP based nonlinear and ultrafast photonic applications (e.g., ultrafast all-optical polarization switches/modulators, frequency converters etc.).

...in resonance cannot be used, and thereby give relatively large insertion losses [6-8,19]. On the other hand, mono-layer graphene typically has rather weak absorption (~2.3% [20,21]), not suitable for various lasers (e.g., fiber lasers), which typically need relatively larger modulation depth [8-10]. Layered transition metal dichalcogenides (TMDs) (e.g., MoS2 [22], WS2 [23], and MoSe2 [24,25]) have also been demonstrated as SAs, but with limited performance for current lasers typically operating at the near-infrared and mid-infrared range, due to their comparatively large bandgap near or in the visible region [26] (~1.8 eV for MoS2, ~2.1 eV for WS2, ~1.7 eV for WSe2 [27]). Black phosphorus (BP), a layered material consisting of only phosphorus atoms, has recently been rediscovered for various applications in electronics and optoelectronics (such as transistors, solar cells, and photodetectors). In contrast to graphene and TMDs, BP has its own unique properties. For example, its direct electronic band gap can be tuned from ~0.3 to ~2 eV (corresponding to the wavelength range from ~4 to ~0.6 μm), depending on the film thickness. This is particularly interesting for photonics, as it can offer a broadly tuneable bandgap with number of layers for near and mid-infrared photonics and optoelectronics, and thus bridge the present gap between the zero bandgap graphene and the relatively large bandgap TMDs [33]. However, thus far, intensive research efforts on BP have mainly focused on its electronic properties (e.g., transistor performance) and linear optical response (e.g., photo-detector performance). In this paper, we investigate the thickness and polarization dependent linear and nonlinear optical properties of BP thin films, which are integrated into fiber devices, the most commonly-used format for optical telecommunication.
Our results show that both linear and nonlinear absorption properties are strongly thickness/polarization dependent, completely different from other typical two-dimensional layered materials (e.g., graphene [8-11] and the most studied TMDs [22-25]). We also demonstrate the use of the nonlinear optical property of BP for ultrafast (pulse duration down to ~786 fs in mode-locking) and large-energy (pulse energy up to >18 nJ in Q-switching) pulse generation in fiber lasers at the near-infrared telecommunication band ~1.55 μm. Intriguingly, we observe that the output polarization state of our pulsed fiber lasers is linear (with a degree-of-polarization ~98% in mode-locking, ~99% in Q-switching) due to the unique anisotropic absorption property of BP. These results open the avenue to BP based nonlinear and ultrafast photonic applications (e.g., ultrafast optical switches/modulators, frequency converters etc.).

Results and Discussion

Atomic Force Microscopy and Raman spectroscopy. BP thin films are produced by micromechanical cleavage of a bulk BP crystal and then transferred to optical fiber ends (details in Methods). The thicknesses of the transferred films on fiber ends are measured by Atomic Force Microscopy (AFM). Figure 1a,b show an AFM image taken from a typical BP film and its line profile along the dashed white line. The circular fiber cladding can be resolved from the image, and the location corresponding to the fiber core (marked with the green circle) of a standard single mode fiber (Corning SMF-28, with a core diameter of ~10 microns) is drawn schematically in Fig. 1a. The thickness of the transferred BP film is estimated to be ~25 nm at the location corresponding to the fiber core (Fig. 1b). Typically, the thickness of transferred BP films ranges between ~20 nm and ~1 μm, depending on the micromechanical cleavage process. To verify that the transferred material is BP, we perform polarization-resolved Raman scattering measurements. The Raman spectrum of a BP crystal is depicted in Fig. 1c. Three peaks located at the wavenumbers of 363 cm−1, 441 cm−1 and 469 cm−1 can be observed from the Raman spectrum, attributed to the Ag1, B2g and Ag2 vibration modes of the BP crystal lattice, respectively. This agrees well with previously published results on BP films [30,31,55]. The Raman peak intensity is also strongly dependent on excitation light polarization (Supplementary Fig. 2) due to its highly anisotropic optical responses [31,45,46], and this has been noted to offer a unique method for determining the crystal orientation of BP films [31,45,46].

Thickness and polarization dependent linear optical absorption. We characterize the linear absorption properties of BP films transferred to the optical fiber ends. The linear transmittance results (Fig. 2a,b) show that the transmittance of BP thin films decreases with the film thickness. Note that transmittance includes contributions from both light absorption and reflection. As shown in Fig. 2a,b, the transmittance T agrees well with the fit (solid lines) using the Beer-Lambert law (i.e., T = exp(−α × d), where α is the absorption coefficient and d is the film thickness), with fitted values of α(642 nm) = ~5.7 μm−1 and α(520 nm) = ~10 μm−1. These values are comparable to those previously measured and predicted [36].
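A minimal sketch of the Beer-Lambert fit described above, using scipy's curve_fit. The thickness/transmittance points below are synthetic stand-ins generated around the reported α at 642 nm (~5.7 μm−1), not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def beer_lambert(d_um, alpha):
    """Transmittance T = exp(-alpha * d) for film thickness d in micrometers."""
    return np.exp(-alpha * d_um)

# Synthetic thickness (um) / transmittance points standing in for the 642 nm
# measurements, generated from alpha ~ 5.7 um^-1 with a little noise.
rng = np.random.default_rng(0)
d = np.array([0.02, 0.05, 0.1, 0.2, 0.3, 0.5])
T = beer_lambert(d, 5.7) * (1 + 0.02 * rng.normal(size=d.size))

(alpha_fit,), _ = curve_fit(beer_lambert, d, T, p0=[5.0])
print(f"fitted alpha = {alpha_fit:.2f} um^-1")  # recovers a value close to ~5.7
```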
Thanks to the availability of our polarization-tuneable continuous-wave light source at 1.55 μm (~0.8 eV), we measure the transmittance change of our BP films as a function of incident light polarization angle at this wavelength (i.e., 1.55 μm). The results from the 25 nm and 1100 nm thick BP films are given in Fig. 2c. It appears that the input light polarization direction strongly affects the absorption of the BP film (and thus the transmittance). For instance, we observe that the transmittance of the 1100 nm thick BP film can increase by a factor of >9 (from 3.6% to 33.2%) when the input polarization direction is altered. This clearly shows the absorption anisotropy of BP [36,37,51,54]. The polarization directions corresponding to the maximum and minimum transmittance are assigned along the zigzag and armchair directions of the BP thin films [36,37], respectively. Therefore, such an anisotropic absorption property can be employed to determine the crystal orientation of BP films, similarly to the Raman approach [31,45,46]. It is worth noting that this property can be utilized directly for various polarization-based photonic applications (e.g., polarizers). We find that the polarization dependent transmittance change is significantly larger in thicker samples, which agrees with recent theoretical simulations [36,37]. For example, the transmittance change (~29.6%) of the 1100 nm thick sample is >6 times larger than the result (~4.8%) for the 25 nm sample (Fig. 2c). Detailed transmittance of samples with variable thicknesses at two orthogonally polarized light directions (Fig. 2d) further confirms that the polarization-introduced transmittance change, which is linked to the selection rules associated with symmetries of the anisotropic material [36,37,54], is thickness dependent. At this wavelength (i.e., 1.55 μm, Fig. 2d), we also observe that the film thickness dependent transmittance matches well with a bi-exponential decay fit containing two different absorption coefficients, in contrast to the single exponential decay fit using the Beer-Lambert law at the wavelengths of 642 nm and 520 nm (Fig. 2a,b). As depicted in Fig. 2d, the transmittance first decreases rapidly up to a thickness of ~80 nm. After that, the transmittance decreases slowly. The thickness-dependent bandgap (Eg) change of BP has been predicted to follow a power law (e.g., Eg ≈ 1.7/n^0.73 + 0.3 eV, in which n is the number of layers) [36,37,49]. Hence, the change in bandgap attributable to the increasing film thickness can be deduced to be extremely small and will not affect the absorption significantly (compared to the 0.8 eV (~1.55 μm) photon energy used in this experiment) when the sample is thicker than 10 nm. However, it has been calculated that sub-bands close to the bandgap change significantly with the thickness [36,37,43]. Therefore, we assign the rapid decrease in transmittance (when the flake thickness is <80 nm) mainly to the evolution of sub-band energy states [36,37,43] in BP with the film thickness. We believe the Beer-Lambert law dominates the thickness-dependent transmittance change for thicker samples (>~80 nm), similarly to what we observed in the relatively large photon energy transmittance measurements (642 nm in Fig. 2a, and 520 nm in Fig. 2b).
Thickness and polarization dependent nonlinear optical absorption. The nonlinear absorption measurement results are illustrated in Fig. 2e,f. In our measurement setup (Supplementary Fig. 4), we placed a polarization controller before the BP films to adjust the polarization direction of the input ultrafast pulses. Figure 2e depicts the nonlinear absorption measurement results of an 1100-nm thick BP film for two orthogonal polarization directions. A clear increase in the transmittance with increased pump fluence can be observed in the 1100 nm thick BP sample and is attributed to saturable absorption [43,53]. The polarization dependent nonlinear optical performance difference is also observed in Fig. 2e, which is of great interest for various photonic applications, e.g., tuning operation states in ultrafast lasers [56], switching optical pulses with their polarization directions, and ultrafast vector soliton generation. Figure 2f shows the relative transmittance change (ΔT/T0, where ΔT and T0 are the transmittance change and the transmittance at the minimum input power, respectively) for three BP films with the polarization state corresponding to the maximum absorbance (i.e., the armchair-polarized input). Nonlinear saturable absorption is clearly observed in all samples and occurs when the fluence reaches ~100 μJ/cm². We also note that the thicker sample has an ~8-times larger relative transmittance change than the thinner one. This shows that the nonlinear property of BP can be adjusted by the thickness (i.e., number of layers). Such a property can be utilized for pulse generation in different laser formats (e.g., fiber and semiconductor lasers), in which nonlinear saturable absorbers with different parameters are needed [8-11]. To estimate the saturation fluence and modulation depth from the nonlinear absorption curves, we use a simplified fluence dependent absorption formula to fit the measurement results (described in the Supplementary Information). The fitted curves match decently with the measurement results and are plotted with solid lines in Fig. 2e,f. The obtained saturation fluence from all the samples varies in the range of 2000 μJ/cm² and is, therefore, around an order of magnitude larger than that typically measured with SAs fabricated from CNTs or graphene [7-11,16]. On the other hand, the transmittance change obtained from the measured curves is observed to be larger than 1% (Fig. 2e). However, the modulation depth obtained from the fits typically ranges between 50% and 90%. If true, this observation is promising, as the fitted modulation depths are extremely large. However, we note that the fitted modulation depth is probably unrealistically high, most likely because the nonlinear absorption measurement would need to be continued to a larger fluence range, which is currently unavailable in our setup. In our nonlinear absorption measurement setup (Supplementary Fig. 4), the available maximum fluence is ~450 μJ/cm².
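The exact fitting formula is given in the paper's Supplementary Information and is not reproduced here. As a hedged stand-in, the sketch below fits the commonly used fast saturable-absorber model T(F) = 1 − ΔT·exp(−F/Fsat) − Tns to synthetic data, to show how a saturation fluence and modulation depth would be extracted; the parameter values are illustrative assumptions. Note that when the measured data only reach ~450 μJ/cm² while Fsat is of order 2000 μJ/cm², such a fit is weakly constrained, which mirrors the caveat above about the fitted modulation depths.

```python
import numpy as np
from scipy.optimize import curve_fit

def sa_transmittance(F, dT, F_sat, T_ns):
    """Fast saturable-absorber model (assumed form, not the paper's exact formula):
    T(F) = 1 - dT*exp(-F/F_sat) - T_ns, with modulation depth dT,
    saturation fluence F_sat, and non-saturable loss T_ns."""
    return 1.0 - dT * np.exp(-F / F_sat) - T_ns

# Synthetic fluence/transmittance data over a wide range so the fit is well
# constrained (the experiment itself was limited to ~450 uJ/cm^2).
rng = np.random.default_rng(2)
F = np.linspace(10, 8000, 40)                       # pump fluence, uJ/cm^2
T = sa_transmittance(F, 0.6, 2000.0, 0.3)
T += 0.002 * rng.normal(size=F.size)                # small measurement noise

popt, _ = curve_fit(sa_transmittance, F, T, p0=[0.5, 1000.0, 0.2])
print("dT = %.2f, F_sat = %.0f uJ/cm^2, T_ns = %.2f" % tuple(popt))
```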
Q-switched high-energy pulse generation. We use our BP integrated fiber device to build a pulsed fiber laser working at the main telecommunication window of 1.55 μm. A fiber laser is selected in our experiments, as it can offer a simple and compact design, efficient heat dissipation, and high-quality pulse generation [57,58]. The layout of our designed fiber laser is schematized in Fig. 3a. A ~1-m Erbium-doped fiber (EDF) is utilized as the gain medium, which is pumped by a 980 nm laser diode (LD) via a wavelength division multiplexer (WDM). A polarization-independent isolator (ISO) is placed after the gain fiber to maintain unidirectional operation. A polarization controller (PC) optimizes the pulse operation state. A 10/90 coupler is used to extract the light from the cavity for measurements. The total cavity length is ~11 m. We get Q-switched optical output from the fiber laser only after inserting the BP integrated device inside the cavity. Q-switching operation is achieved with all BP samples, but the 1100 nm thick BP film gives better performance, as expected from its relatively large transmittance change (Fig. 2f). The output performance using the 1100 nm thick BP film is shown in Fig. 3b-f. The threshold pump power for continuous wave lasing is ~11 mW (the output power as a function of pump power is given in Supplementary Fig. 5). When the pump power is increased to ~23 mW, stable Q-switching can be achieved. The peak wavelength is ~1532.5 nm, with a full width at half maximum (FWHM) of ~3 nm (Fig. 3b). The output repetition rate and pulse duration are pump power dependent (Fig. 3c), a typical signature of Q-switching. This is because, when the pump power increases, larger gain is provided to saturate the SA, and thus the repetition rate increases and consequently the pulse duration reduces. In our experiment, the repetition rate increases from ~26 to ~40 kHz, and the pulse duration decreases from ~9.5 to ~3.1 μs, when the pump power is raised from ~23 to ~55 mW. For Q-switched lasers, one of the key parameters is pulse energy, which is also linearly dependent on the pump power, as shown in Fig. 3d. The maximum output pulse energy is ~18.6 nJ (Fig. 3e, Supplementary Fig. 5). Note that this output performance is very comparable to typical Erbium-doped fiber lasers Q-switched with other nanomaterials (e.g., CNTs and graphene [8-11]). Then, we further examine the output polarization property of our Q-switched BP fiber laser (shown in Fig. 3f) by placing a rotatable polarizer plate between the laser output end and the power meter (the measurement setup is given in Supplementary Fig. 7). Interestingly, we observe that the output pulses of the laser can be perfectly linearly-polarized. The degree-of-polarization (DOP) (DOP = (Pmax − Pmin)/(Pmax + Pmin), where Pmax and Pmin are the maximum and minimum power measured, respectively) [56] of the linearly-polarized output is ~99%. The linear polarization output of our BP fiber laser is attributed to the anisotropic absorption in the BP saturable absorber (shown in Fig. 2f).
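Two of the headline Q-switching figures follow from simple definitions, recomputed in the sketch below. The polarizer readings are hypothetical values consistent with the reported ~99% DOP, and the ~0.74 mW average power is an assumed illustration chosen so that E = Pavg/frep reproduces the reported ~18.6 nJ at the 40 kHz repetition rate.

```python
def degree_of_polarization(p_max, p_min):
    """DOP = (Pmax - Pmin) / (Pmax + Pmin), as defined in the text."""
    return (p_max - p_min) / (p_max + p_min)

def pulse_energy_nJ(avg_power_mW, rep_rate_kHz):
    """Q-switched pulse energy E = P_avg / f_rep, returned in nJ."""
    return (avg_power_mW * 1e-3) / (rep_rate_kHz * 1e3) * 1e9

# Hypothetical polarizer readings (arbitrary units) giving DOP ~ 0.99.
print(f"DOP = {degree_of_polarization(1.00, 0.005):.3f}")
# Assumed ~0.744 mW average output at 40 kHz reproduces the reported ~18.6 nJ.
print(f"E = {pulse_energy_nJ(0.744, 40):.1f} nJ")
```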
Mode-locked ultrafast pulse generation. When the fiber cavity length is increased to ~14.2 meters (after adding ~3 m of SMF-28 single mode fiber in the laser cavity), the total group velocity dispersion of our fiber cavity is ~−2.5 × 10−1 ps². In this case, it can facilitate soliton-like pulse shaping through the interplay of group velocity dispersion and self-phase modulation [58]. Indeed, after inserting our BP integrated fiber device in this fiber cavity, stable mode-locking can be initiated by introducing a disturbance to the intra-cavity fiber. Once stable output is achieved, no further polarization controller adjustment is required. The output power is ~1.6 mW when the pump power is 68.9 mW. Figure 4 summarizes the mode-locked laser performance. The laser mode-locks at 1558.7 nm, with a FWHM of 6.2 nm. The side bands (at 1546.76, 1551.16, 1566.36, 1570.76, and 1574.36 nm, shown in Fig. 4a) fully confirm our soliton-like mode-locking, as they are typical of soliton-like pulse formation, resulting from intra-cavity periodical perturbations of discrete loss, gain and dispersion [59]. Figure 4b gives a typical output autocorrelation trace, which is well fitted by a sech² temporal profile. The pulse duration is ~786 fs. The time-bandwidth product (TBP) of the mode-locked pulses is ~0.6. The deviation from the TBP value of ~0.315 anticipated for transform-limited sech² pulses suggests the presence of chirping of the generated ultrafast pulses [58]. Shorter pulses may be obtained by using fiber lasers with a specifically-designed dispersion map (e.g., a stretched-pulse fiber laser design [15,17,60]). To investigate the laser output stability [61,62], we characterize the radio frequency spectrum. We first measure the broad-span frequency spectrum up to 500 MHz (Supplementary Fig. 6). It presents no significant spectral modulation, implying no Q-switching instabilities [61,62]. Figure 4c gives the radio frequency spectrum around the fundamental repetition rate (f0). A >50 dB signal-to-background ratio (corresponding to >10^5 contrast) is observed, showing good mode-locking stability [61,62]. The inset of Fig. 4c depicts the output pulse train, with a period of 68.16 ns, corresponding to a cavity fundamental repetition rate f0 of 14.7 MHz, as expected from the total fiber cavity length of ~14.2 meters. We also measure the output polarization property of our mode-locked BP fiber laser with a method identical to the polarization measurement setup used for the Q-switched laser (see Supplementary Fig. 7). We observe that the polarization state of the mode-locked BP fiber laser is also linear, as shown in Fig. 4d. The DOP is ~98%. Such linearly-polarized output is also attributed to the anisotropic absorption in the BP saturable absorber, which is different from other commonly used saturable absorber materials such as graphene and SESAMs. Note that the performance of the BP mode-locked laser (output power level, repetition rate, pulse duration, etc.) is comparable to what has typically been achieved with CNT and graphene based fiber lasers [6-11]. However, given the unique bandgap tuning property from the visible to the mid-infrared range, we expect superior performance of BP thin films for ultrafast lasers in this spectral range, worthy of future research. Particularly, the unique polarization/thickness dependent optical properties of BP studied here are completely different from those of other typical two-dimensional layered materials (e.g., graphene [8-11] and the most studied TMDs [22-25]). This can potentially introduce paradigms of novel optical devices for both linear (e.g., polarization dynamics control) and nonlinear photonic applications (e.g., ultrafast linearly-polarized pulse generation [63]). For example, the thickness dependent property offers tunability of the effective response spectrum due to the layer controlled direct band gap, while the anisotropic absorption property can provide an effective method to tune the output polarization state in laser applications.
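The ~0.6 time-bandwidth product and the ~14.7 MHz fundamental repetition rate quoted above can be checked from the reported numbers. In the sketch below, the silica group index of ~1.468 is an assumed typical value, not a parameter stated in the paper, which is why the recomputed repetition rate comes out slightly below the measured one.

```python
c = 299_792_458.0  # speed of light, m/s

def tbp(pulse_fs, fwhm_nm, center_nm):
    """Time-bandwidth product: spectral FWHM converted to frequency
    (dnu = c * dlambda / lambda^2) times the pulse duration."""
    dnu = c * (fwhm_nm * 1e-9) / (center_nm * 1e-9) ** 2   # Hz
    return dnu * pulse_fs * 1e-15

def rep_rate_MHz(cavity_m, n_group=1.468):
    """Fundamental repetition rate of a fiber ring cavity, f0 = c / (n * L)."""
    return c / (n_group * cavity_m) / 1e6

print(f"TBP = {tbp(786, 6.2, 1558.7):.2f}")   # ~0.60 vs 0.315 for transform-limited sech^2
print(f"f0 ~ {rep_rate_MHz(14.2):.1f} MHz")   # ~14.4 MHz, close to the measured 14.7 MHz
```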
In summary, we have studied the thickness and polarization dependent linear and nonlinear optical properties of BP thin films, and then utilized their nonlinear anisotropic absorption property to generate ultrafast and large-energy linearly-polarized pulses with BP integrated fiber devices. Our results exhibit the practical potential of this promising material for various nonlinear and ultrafast photonic and optoelectronic applications. During the preparation of this paper, we became aware of two experimental works studying pulsed fiber lasers with BP on arXiv.org (arXiv: 1504.04731, arXiv: 1505.03035).

Methods

BP device fabrication. BP was synthesized under a constant pressure of 10 kbar by heating red phosphorus to 1,000 °C and slowly cooling to 600 °C at a cooling rate of 100 °C per hour. Red phosphorus was purchased from Aladdin Industrial Corporation with 99.999% metals basis. The high-pressure environment was provided by a cubic-anvil-type apparatus (Riken CAP-07). After that, BP films were produced by micromechanical cleavage of bulk BP crystals directly onto a viscoelastic polydimethylsiloxane (PDMS) stamp. A selected BP film on the PDMS stamp is then placed on a fiber end with the help of an optical microscope and a micromanipulator. Due to the viscoelastic properties of PDMS, the BP film adheres to the fiber end when the PDMS stamp is gently lifted off [47,48].

AFM and Raman spectroscopy. AFM measurements were performed in semi-contact mode using an NTegra Aura AFM apparatus equipped with a scanning head. A custom-made measurement stage was fabricated, allowing us to characterize the BP films attached on the fiber end. The maximum scan size of the setup was 100 × 100 μm². Raman spectra were taken using a confocal Raman microscope (Witec alpha 300 R) equipped with a frequency doubled Nd:YAG green laser (λ = 532 nm). The samples were placed on a SiO2/Si substrate, fabricated with the same fabrication approach discussed above, and the thicknesses of the characterized films were measured by AFM.

Linear absorption measurement. A home-made erbium-doped fiber based amplified spontaneous emission source was used to characterize the absorption spectrum from ~1500 to 1600 nm. Its output polarization was changed with a prism based polarizer to measure the polarization dependent transmittance. Absorption properties at different wavelengths (e.g., 520, 642 nm) were measured with various fiber coupled non-polarized laser diodes (i.e., without polarization-tuning capability). The input power for the linear absorption measurement was set to less than 1 mW.

Nonlinear absorption measurement. A power-amplified home-made ultrafast fiber laser (~15 mW, 530 fs, 62 MHz) was employed to measure the saturable absorption property of the BP based fiber devices. A polarization controller was used to change the light polarization direction to measure the polarization dependent saturable absorption performance. A double channel power meter (Ophir, Laserstar) was used to achieve high-accuracy measurements.

Laser characterization. An optical spectrum analyser (Anritsu, MS9740A), a power meter (Ophir, Nova II), and a second-harmonic generation autocorrelator (APE, Pulse-check50) were used to characterize the generated ultrafast pulse performance. The pulse train was measured by an oscilloscope connected to a photodetector, while the radio frequency spectrum was taken by a radio frequency analyser (Anritsu MS2692A) with an ultrafast (>25 GHz) photodetector.
2018-04-03T01:12:39.957Z
2015-05-03T00:00:00.000
{ "year": 2015, "sha1": "a18bf75f6a6c1067498325227cbda57f981070b9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep15899.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65fd823ffc7768ff263dbb29176869316e3c8e1a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
245088224
pes2o/s2orc
v3-fos-license
Stress-Responses of Performance and Microbial Community in Anaerobic Digestion System Under Long-Term Enrichment of Phenanthrene

An expanded granular sludge blanket reactor (EGSB) was operated for 198 days to study the long-term effects of phenanthrene (PHE) enrichment on system performance and microbial community. The results showed that PHE was significantly enriched in the reactor. The final PHE concentration in effluent and sludge reached 1.764±0.05 mg/L and 12.52±0.42 mg/gTS, respectively, while the average daily methane production was decreased by 5.0%-9.8% under long-term PHE exposure. The 3D-EEM of effluent indicated that PHE stimulated microbial metabolism, with higher intensities of soluble microbial byproduct-like materials (SMP) and proteins. Moreover, the removal efficiencies of soluble chemical oxygen demand (SCOD) and NH4+-N gradually diminished with the enrichment of PHE. PHE shaped the microbial community, and the predominant fermentative bacteria (Mesotoga) were severely inhibited. Contrarily, the bacteria involved in PHE degradation (Syntrophorhabdus, Acinetobacter, Desulfovibrio, Desulfomicrobium) were enriched at the end of Phase Ⅴ. In addition, the relative abundance (RA) of hydrogenotrophic methanogens (Methanofastidiosum, Methanolinea, Methanobacterium, Methanomassiliicoccus) increased by 0.96-fold with the long-term enrichment of PHE, while the RA of the acetoclastic Methanosaeta obviously decreased.

Introduction

In recent years, soil contamination caused by the production and transportation of petroleum has attracted increasing attention. Polycyclic aromatic hydrocarbons (PAHs) are a common class of pollutants at contaminated sites, with harmful effects on living and non-living taxa due to their recalcitrant and lipophilic nature [1]. Therefore, researchers have attached great importance to studies of PAH removal [2,3]. Biological treatment, with the advantages of environmental friendliness and low operational and investment costs, has been applied to remove PAHs. Many studies have focused on the aerobic biodegradation of PAHs, and high removal efficiencies have been achieved [4]. However, in deep soil, oxygen transfer is limited, which is not conducive to the survival of aerobic microorganisms. Thus, anaerobic microorganisms play an important role in the attenuation of PAHs. Nevertheless, the anaerobic biodegradation of PAHs still faces many challenges, such as long biodegradation periods and high microbial sensitivity [2]. So far, few studies have focused on the response of system performance and microbial community in an anaerobic environment under the long-term enrichment of PAHs. In the present study, phenanthrene (PHE) was selected as the model pollutant to investigate the effects of long-term PAH exposure on the anaerobic system. Meanwhile, carbohydrate (starch) was added to provide a sufficient carbon source for the metabolism of anaerobic microbes. The variations of biomethane production, SCOD, NH4+-N and dissolved organic matter (DOM) in effluent were analyzed. Also, the succession of the microbial community under the long-term enrichment of PHE was evaluated.

Experimental Design

The PAH of PHE was purchased from the reagent company. The inoculum sludge was taken from an industrial plant operated under mesophilic conditions in Shandong province. The experiment was conducted in an expanded granular sludge blanket reactor (EGSB) with 6.0 L working volume filled with 2.96 kg sludge.
The EGSB was operated at a hydraulic retention time (HRT) of 48 h under 35±2℃, and the up-flow rate was kept constant at 0.69 L/h via effluent recirculation. The characteristics of the influents in the different phases are shown in table 1. The indexes SCOD, NH4+-N, pH, TS and VS were determined according to standard methods [5]. The volumes of biogas and biomethane were measured with a glass injector, and the CH4 content of the biogas was determined by absorbing the CO2 in saturated sodium hydroxide solution. The concentration of PHE in effluent and sludge was analysed by high-performance liquid chromatography (Shimadzu LC-2030) [1]. EEM Analysis Excitation-emission matrix (EEM) fluorescence spectra of the effluent were recorded on a fluorescence spectrophotometer (Hitachi, Japan, F-4600). Emission wavelengths from 200 to 550 nm and excitation wavelengths from 200 to 500 nm were scanned in 5 nm increments. Milli-Q water was used as the reference to eliminate the inner-filter effect [6]. DNA Extraction and Sequencing The sludge samples were washed three times with phosphate-buffered saline (PBS) and centrifuged at 10,000 × g for 2 min. Microbial DNA was extracted directly from 2.0 g of sludge per sample with a MetaVx™ kit (GENEWIZ, Inc., South Plainfield, NJ, USA) according to the manufacturer's instructions. The full-length 16S rRNA gene was then amplified with the primers 27F (5'-AGAGTTTGATCCTGGCTCAG-3') and 1492R (5'-GGTTACCTTGTTACGACTT-3') and sequenced on a PacBio Sequel system (Pacific Biosciences, USA) at Biomarker Technologies Co., Ltd. (Beijing, China). PHE Impacts on EGSB Treatment Performance The EGSB was operated for 198 days to evaluate the variation of treatment performance with PHE enrichment. As shown in figures 1a and 1b, the concentration of PHE in the reactor gradually increased towards the end of each phase after Phase Ⅱ. The highest final PHE concentrations in effluent and sludge reached 1.764±0.05 mg/L and 12.52±0.42 mg/g TS in Phase Ⅴ (influent PHE = 100 mg/L), respectively, indicating that most of the PHE was adsorbed in the sludge owing to its hydrophobicity. The enrichment of PHE adversely affected system performance. In Phase Ⅲ (influent PHE = 1 mg/L), the average daily biomethane yield (DMY) decreased from 1461.24±151.40 mL/d in Phase Ⅱ (influent PHE = 0) to 1317.74±171.77 mL/d (figure 1c), suggesting that feeding PHE to the anaerobic system inhibited biomethanation. A previous study reported that PHE is toxic to methanogens, which grow slowly and are sensitive to environmental changes [1]. Interestingly, biomethane production did not continue to decrease after Phase Ⅲ: the average DMYs of Phase Ⅳ (influent PHE = 10 mg/L) and Phase Ⅴ, 1351.21±131.33 mL/d and 1388.44±106.79 mL/d, were slightly higher than that of Phase Ⅲ. This might be ascribed to the change of the microbial community in the reactor under long-term PHE exposure, as discussed in section 3.3. However, both values were still lower than that of Phase Ⅱ, indicating that biomethane production was suppressed by the PHE enrichment; the percentage decreases are worked out in the sketch below.
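As a quick arithmetic check, the following minimal Python sketch reproduces the reported percentage decreases from the mean DMY values quoted above; standard deviations are ignored, and the phase labels are for readability only.

```python
# Minimal sketch: percent decrease in average daily methane yield (DMY)
# relative to the PHE-free baseline (Phase II), using the mean values
# reported in the text; standard deviations are omitted for simplicity.

BASELINE_DMY = 1461.24  # mL/d, Phase II (influent PHE = 0)

phase_dmy = {
    "Phase III (1 mg/L PHE)":   1317.74,
    "Phase IV (10 mg/L PHE)":   1351.21,
    "Phase V (100 mg/L PHE)":   1388.44,
}

for phase, dmy in phase_dmy.items():
    decrease_pct = (BASELINE_DMY - dmy) / BASELINE_DMY * 100
    print(f"{phase}: DMY = {dmy:.2f} mL/d, decrease = {decrease_pct:.1f}%")
# Prints decreases of ~9.8%, ~7.5% and ~5.0%, matching the 5.0-9.8% range.
```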
Furthermore, the long-term enrichment of PHE also had negative effects on the removal of SCOD and NH4+-N (figures 1d and 1f). After Phase Ⅰ, the average influent COD concentration was 6.0 g/L for the whole experimental period. The highest SCOD removal efficiency, 65.5%±2.8%, was achieved in Phase Ⅱ (start-up period, no PHE added). However, when 1 mg/L of the target pollutant PHE was added in Phase Ⅲ, the microbes in the reactor proved sensitive to PHE and the SCOD removal efficiency decreased to 62.5%±2.8%. With increasing influent PHE concentration, the SCOD removal efficiency decreased further, reaching its lowest value in Phase Ⅴ (55.0%±3.1%). This is probably because PHE is toxic to anaerobes and may inhibit their activity. Similar to SCOD removal, the NH4+-N removal efficiency continued to decline with the enrichment of PHE. Compared with Phase Ⅱ (37.3%±8.2%), the NH4+-N removal efficiency was diminished by 62.7%, indicating that PHE can adversely affect the bio-metabolic turnover of NH4+-N. DOM Characteristics of Effluent in Different Phases Three-dimensional excitation-emission matrix (3D-EEM) fluorescence spectra of the effluent in the different phases are shown in figure 2. The information from the EEM provides a valuable reference for the metabolism of microorganisms during the anaerobic digestion process [7], as it comprehensively reflects the characteristics of the dissolved organic matter (DOM) in the effluent samples, such as the components and sources of the organics. The locations of the DOM fluorescence peaks were identified from their excitation/emission (Ex/Em) coordinates (figure 2a) and can be summarized as five peaks: Peak A, Ex/Em = 275-285/325-350 nm; Peak B, Ex/Em = 225-240/320-340 nm; Peak C, Ex/Em = 275-280/445-450 nm; Peak D, Ex/Em = 320-340/410-430 nm; Peak E, Ex/Em = 380-400/450-470 nm. Figure 2 shows the characteristics of the DOM in the effluents of the different phases. All five peaks were found with high intensity in Phase Ⅰ, and peaks C, D and E belong to humic substances [6], indicating that severe humification had occurred and the reactor system was unstable. Fortunately, figure 2b shows that only peaks A and B, representing soluble microbial byproduct-like (SMP) materials and tryptophan-like protein, respectively [7], were detected in the fluorescence spectrum of Phase Ⅱ, indicating that the microbial community structure in the reactor had reached a stable state. After Phase Ⅱ, the fluorescence intensity (FI) of the SMP and tryptophan-like protein peaks gradually increased with the PHE concentration, and the FI of the effluent components in Phase Ⅴ was clearly higher than in Phase Ⅱ. A possible reason is that PHE stimulated bio-metabolism, leading to more byproducts from microbial activities, including PHE biodegradation byproducts.
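The peak assignment described above is essentially a window lookup on the Ex/Em coordinates. Below is a minimal Python sketch using only the five wavelength windows defined in the text; the substance labels in parentheses are interpretive annotations taken from the discussion, not part of the peak definitions.

```python
# Minimal sketch: assigning an EEM fluorescence maximum to one of the five
# regions defined in the text from its excitation/emission (Ex/Em) pair.

PEAK_REGIONS = {
    "Peak A (soluble microbial byproduct-like)": ((275, 285), (325, 350)),
    "Peak B (tryptophan-like protein)":          ((225, 240), (320, 340)),
    "Peak C (humic-like)":                       ((275, 280), (445, 450)),
    "Peak D (humic-like)":                       ((320, 340), (410, 430)),
    "Peak E (humic-like)":                       ((380, 400), (450, 470)),
}

def classify_peak(ex_nm: float, em_nm: float) -> str:
    """Return the first region whose Ex and Em windows contain the maximum."""
    for label, ((ex_lo, ex_hi), (em_lo, em_hi)) in PEAK_REGIONS.items():
        if ex_lo <= ex_nm <= ex_hi and em_lo <= em_nm <= em_hi:
            return label
    return "unclassified"

print(classify_peak(280, 330))  # -> Peak A (soluble microbial byproduct-like)
print(classify_peak(230, 330))  # -> Peak B (tryptophan-like protein)
```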
Succession of Microbial Community Sludge samples were collected at the end of Phase Ⅱ and Phase Ⅴ to characterize the structure of the microbial community, as shown in figure 3. Eight bacterial phyla were detected: Thermotogae, Proteobacteria, Firmicutes, Bacteroidetes, Chloroflexi, Synergistetes, Verrucomicrobia and Actinobacteria. As the most predominant phylum, Thermotogae accounted for 87.0% in Phase Ⅱ. A previous study reported that members of Thermotogae can ferment a variety of simple sugars (e.g., glucose) and complex polysaccharides (e.g., xylan and starch) [8]. However, its relative abundance (RA) fell markedly to 47.9% in Phase Ⅴ with the enrichment of PHE, indicating that PHE adversely affected fermentation of the substrate (starch). The RA of Proteobacteria in Phase Ⅴ (39.3%) was considerably higher than in Phase Ⅱ (9.3%), and the phylum Firmicutes was enriched 4.3-fold in Phase Ⅴ compared with Phase Ⅱ (1.3%). Lee et al. reported that Proteobacteria and Firmicutes were the dominant phyla in oil-contaminated sediment and potentially participate in the degradation of PAHs [9]. At the genus level (figure 3a), eight dominant genera were found: Mesotoga, Syntrophorhabdus, Acinetobacter, Clostridium, Bacteroides, Chryseomicrobium, Desulfovibrio and Desulfomicrobium. Among them, the hydrolytic Mesotoga was the most predominant genus, with an RA of 91.2% in Phase Ⅱ that declined to 58.4% in Phase Ⅴ. Clostridium and Bacteroides are responsible for carbohydrate hydrolysis in anaerobic systems [10]; their RAs increased from 1.3% and 0.5% in Phase Ⅱ to 7.3% and 2.9% in Phase Ⅴ, respectively. Moreover, the abundances of Syntrophorhabdus and Acinetobacter, which are typical acetogens, increased with the enrichment of PHE in Phase Ⅴ, reaching 11.8% and 10.8%, respectively. Compared with Phase Ⅱ, the sulfate-reducing bacteria Desulfovibrio and Desulfomicrobium, which are reported to take part in the anaerobic degradation of PHE [11], exhibited higher abundances. These results indicate that the enrichment of PHE reshaped the bacterial community. The variations of the archaeal community are shown in figure 3c. Acetoclastic Methanosaeta clearly occupied the highest proportion, 71.1%, in Phase Ⅱ; however, its abundance decreased by 29.8% in Phase Ⅴ, indicating that the increase of PHE suppressed the growth of Methanosaeta. Conversely, the RA of hydrogenotrophic methanogens, including Methanofastidiosum, Methanolinea, Methanobacterium and Methanomassiliicoccus, rose from 28.3% in Phase Ⅱ to 55.6% in Phase Ⅴ. These results suggest that the long-term enrichment of PHE altered the methanogenic activities of the archaea. Conclusions The variations of anaerobic digestion performance and microbial community under the long-term enrichment of PHE were investigated in an EGSB. Biomethane production was slightly suppressed after the addition of PHE: compared with Phase Ⅱ (influent PHE = 0), the average daily biomethane yield was diminished by 5.0-9.8% in the subsequent phases. Meanwhile, PHE had negative effects on the removal of SCOD and NH4+-N, with the largest inhibition (16.0% of SCOD removal and 62.7% of NH4+-N removal) in Phase Ⅴ (influent PHE = 100 mg/L). The sequencing results showed that the abundance of the predominant fermentative bacterium (Mesotoga) decreased significantly under PHE exposure. Conversely, the typical acetogens (Syntrophorhabdus, Acinetobacter) and sulfate-reducing bacteria (Desulfovibrio, Desulfomicrobium) that potentially participate in the biodegradation of PHE were enriched with increasing PHE concentration. Moreover, PHE also affected the structure of the archaeal community: the growth of hydrogenotrophic methanogens (Methanofastidiosum, Methanolinea, Methanobacterium and Methanomassiliicoccus) was promoted, while acetoclastic Methanosaeta was inhibited.
2021-12-12T17:44:05.101Z
2021-12-03T00:00:00.000
{ "year": 2021, "sha1": "0aedc043eda596b94808f0d1dfd60cac5c591fd8", "oa_license": "CCBYNC", "oa_url": "https://ebooks.iospress.nl/pdf/doi/10.3233/ATDE210322", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "c65e14bbec679f2875a277cfb194fe52972a6f54", "s2fieldsofstudy": [ "Chemistry", "Engineering" ], "extfieldsofstudy": [] }
24922005
pes2o/s2orc
v3-fos-license
Exploiting the potential of intranet for managing drug spectrum, a web-based publication, in a Tertiary Care Hospital in Mumbai Objective: The study surveyed the availability of the intranet on campus and the knowledge related to Drug Spectrum, an intranet publication. Materials and Methods: Institutional ethics committee permission was obtained, and verbal consent was taken from the faculty and resident doctors of departments where all the facilities were available. A universal sampling method was used for recruitment. Pre-validated questionnaires containing 15 items were given to approximately 100 faculty and 500 resident doctors in 2012-2013, and content analysis was performed. The questionnaire sought participants' feedback on the use of the intranet, evaluated the intranet as a source of knowledge, and addressed the relevance of the drug spectrum in the context of their subject. Responses were collected after giving the participants sufficient time. Data were entered into an Excel 2003 spreadsheet and analyzed using descriptive statistics. Results: A total of 134 respondents, including faculty and residents from various departments, participated in the study. In all, 117 (89.66%) respondents stated that their departments had access to the internet and 103 (76.29%) reported access to the intranet. Only 67 (49.62%) respondents had accessed the intranet, whereas 67 (49.62%) had not, citing lack of time to visit the intranet site. Further, 89 (65.92%) respondents were not aware of the drug spectrum, although 101 (74.81%) felt that the drug spectrum is a useful activity on the intranet. Only 45 (33.33%) knew about the intranet periodical Drug Spectrum; most respondents explained the meaning of the term 'drug spectrum' according to their own understanding but did not know about the online intranet journal. Conclusion: The study found that the intranet is available on campus but is under-utilized, and that awareness and knowledge regarding the drug spectrum are lacking, although the participants offered many suggestions. The intranet thus has immense utility; incorporating the respondents' suggestions to make the drug spectrum more readable could benefit the medical fraternity as a whole. INTRODUCTION Drug information is applicable to almost all branches of medicine. The fundamental basis of modern drug therapy is knowing pharmacological concepts and applying them in routine clinical practice. One of the principal duties of the pharmacologist is to provide drug-related information to all the specialities, including information on new drugs, drugs withdrawn from the market, newer approvals in the country, controversies regarding particular drugs, therapeutic guidelines for diseases, and serious adverse drug reactions. The pharmacology department bridges the gap between practicing physicians and pharmaceutical companies. [1] Disseminating recent developments in the field of pharmacology and therapeutics to the clinicians of an academic institution is a challenge. As knowledge of pharmacology can be fluid and ephemeral, organizations can only communicate and collaborate effectively through the intranet, and exploit their competitive advantage of sharing new developments in the field of medicine, if specific techniques and processes are adequately put in place. Today, the intranet also serves as an organizational knowledge base.
It has advantages over prior digital knowledge bases in that it facilitates the capture and handling of unstructured and implicit knowledge, in contrast to database management systems, which require very structured schemas to be effective. Intranets that are networked across organizational boundaries are seen as user-friendly and cost-effective ways of facilitating knowledge sharing, [2] an important factor since it gives the organization more freedom in sharing information. [3] In this context, the Department of Pharmacology and Therapeutics has for the last 5 years published an e-journal (Drug Spectrum) every 3 months, providing in-depth drug information on various aspects, from discovery to recent arrivals. Being a department-driven activity, it failed to achieve its purpose. Hence, it was of interest to find out the reasons for not accessing the Drug Spectrum site and to assess the feasibility of intranet access in the institution by the stakeholders. MATERIALS AND METHODS The study was commenced after acquiring permission from the Institutional Review Board (EC/0A-51/2013). In the departments where all these facilities were available, the concerned faculty and resident doctors were approached and verbal consent was taken. Privacy and data confidentiality were maintained, and all entries were coded; the names of the faculty and resident doctors were not revealed while conducting the study and are not revealed in this publication. The study was conducted by the Department of Pharmacology and Therapeutics at Seth Gordhandas Sunderdas Medical College and King Edward Memorial Hospital, Mumbai. There are approximately 40 departments (pre-clinical, para-clinical, speciality and super-speciality) in the institution. No formal sample size calculation was done; a universal sampling method was used for recruitment. Questionnaires were given to approximately 100 faculty (current total 133) and 500 resident doctors (current total 760) in 2012-2013. A pre-validated questionnaire composed of 15 items was used, and content analysis was done [Appendix 1]. The questionnaire sought participants' feedback on the use of the intranet, evaluated the intranet as a source of knowledge, and addressed the relevance of the drug spectrum in the context of their subject matter. The questionnaires were given to the participants, and their responses were collected after giving them sufficient time. Data were transposed from the self-completed paper questionnaires into an Excel 2003 spreadsheet (Windows 8, MS Office 2013, Sony Vaio) and analyzed using descriptive statistics.
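As an illustration of the descriptive analysis, the following minimal Python sketch shows how a per-item response count is converted into a percentage; Python stands in for Excel here, and the count of 103 out of an assumed 135 valid responses is an illustrative placeholder rather than a figure taken from the study tables.

```python
# Minimal sketch of the descriptive statistics: turning a per-item response
# count into a percentage of valid responses. Both numbers below are
# hypothetical placeholders for illustration only.

def item_percentage(count: int, valid_responses: int) -> float:
    """Percentage of valid responses endorsing a given questionnaire item."""
    return count / valid_responses * 100

# Hypothetical example: 103 of 135 valid responses report intranet access.
print(f"{item_percentage(103, 135):.2f}%")  # -> 76.30%
```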
RESULTS Overall, there were 134 respondents, including faculty and residents from various departments [Table 1]. Of these, 115 (85%) had the habit of reading online journals in the subject of their expertise and 96 (71%) read print journals. A total of 124 (91.85%) participants responded that their departments had computer facilities, while 9 (6.66%) responded that they did not. The responses regarding the availability of computers in individual departments are depicted in Table 2, and 121 (89.62%) faculty and resident doctors owned a laptop. 117 (89.66%) respondents stated that their departments had access to the internet, and 103 (76.29%) responded that their departments had intranet access. Only 67 (49.62%) respondents had accessed the intranet, whereas 67 (49.62%) had not, citing lack of time to visit the intranet site. 116 (85.92%) respondents had visited sites relevant to their subject of expertise. 89 (65.92%) respondents were not aware of the drug spectrum, whereas 101 (74.81%) felt that the drug spectrum is a useful activity on the intranet that would benefit the medical faculty of the various departments in the institution. Only 45 (33.33%) knew about the intranet periodical Drug Spectrum; most respondents could explain the meaning of the term 'drug spectrum' according to their own understanding but never actually knew about the online intranet journal. A few inappropriate perceptions of the drug spectrum given by different respondents are mentioned below: • The range or gamut of actions that a drug can perform • The susceptibility of various microbes and pathogens to a particular drug. Very few respondents were actually aware of the drug spectrum activity and responded with: • Intranet periodical from the Pharmacology Department • Drug information bulletin over the intranet. When asked about content preferences for the drug spectrum, 65 (48%) respondents desired new drug information, 60 (44.44%) information on new drugs approved by the Drug Controller General of India (DCGI), 54 (40%) information on drugs withdrawn from the market, and 52 (38.51%) coverage of controversies regarding certain drugs. Nine respondents felt that easy access is the drug spectrum's biggest advantage, and seven felt that the information given in the drug spectrum would indirectly benefit the patients. Twelve respondents considered the drug spectrum an easily accessible source of drug information, and two felt it would benefit them in their clinical practice. As certain departments do not have intranet access, six respondents felt that, although the drug spectrum is useful, its benefits were out of their reach. Four respondents felt that the contents of the drug spectrum were inadequate, and four felt that they could be misused. Two felt that the drug spectrum contained too little information on the pediatric population and considered this a limitation, and two felt that its contents could be controversial. Two respondents felt that the drug spectrum should be accessible on their mobiles, and two felt that print copies should be made available. When asked for suggestions to improve the drug spectrum, the respondents offered a range of comments. DISCUSSION Our study was conducted with the aim that faculty and residents should keep abreast of current trends in medical knowledge in relation to their respective subjects, and hence it mainly addressed (1) the availability of the intranet on campus and (2) knowledge related to the drug spectrum. In today's world, computers form a part of almost everybody's lives. With constant advances, technology is becoming cheaper and more accessible. Internet connections of various types are available; people have their own connections and can access the desired information anytime they want. Hence, advanced technology has put access to information just a click away.
Many institutions also provide Wi-Fi on their premises, a beneficial and cost-effective way of providing internet connections to their employees, and people can access information anywhere through the internet facilities on their mobile phones. The intranet is a modern means of sharing and disseminating information within a particular institution, and hospitals are rapidly adopting it. According to a survey by PricewaterhouseCoopers and Zinn Enterprises, the proportion of large hospitals with an intranet rose from less than half of the respondents in 1999 to nearly three-quarters in 2000. Use of an intranet is not only cost-effective but also valuable in disseminating information to the various employees of an institution. [4,5] The intranet site also gives departments an opportunity to share their successes and tout important accomplishments. [4,5] The Department of Pharmacology and Therapeutics had started Drug Spectrum, a quarterly web-based publication, for the medical faculty and residents of the various speciality departments. Our study showed that the majority of respondents are in the habit of reading online journals related to their subject. In the field of medicine, keeping pace with the ever-growing science is important for maintaining high-quality patient care, and both faculty and resident doctors have to keep abreast of current medical trends in a tertiary care hospital such as ours. But clinicians, especially resident doctors, are busy with their daily clinical routine of patient care and administrative duties and hence might not find time to sit in one place and read print journals. Technology, too, has advanced, and the majority of respondents, having their own laptops and mobiles, prefer to read online journals. Many respondents still choose to read print journals in their subject of expertise; these are probably the senior doctors who developed the habit of reading print journals at a time when computers and the internet were either absent or little used. Certain respondents choose not to read online journals, probably because they prefer print copies or rely on information obtained from friends, seniors and textbooks, while many who choose not to read print journals are presumably those who prefer reading online. However, for many articles readers can only access the abstract, not the full text, as the websites are paid; here the drug spectrum can fill the gap by delivering information to the doctors' doorstep. The majority of faculty and resident doctors are aware that their respective departments have computers. The very few who responded negatively are probably unaware of the computers' presence in their departments; this could be because most departments are geographically widespread, and doctors in the speciality and super-speciality clinical departments and in certain para-clinical departments, such as pathology and microbiology, are engaged in patient care and various other academic and administrative work. There is a wide range in the number of computers owned by a particular department [Table 2].
The range obtained could be because respondents may have counted the laptops of faculty and resident doctors as well as the desktops present in a given department. The majority of departments in the institution have internet access, and since most respondents also own a laptop, it is easy for them to access sites related to their subjects and read online journals. Fewer departments had access to the intranet than to the internet. The intranet should be accessible to all departments so that faculty and residents can benefit from the different activities happening in the institution, which can in turn increase the readership of online journals and the interaction across departments. Lack of time and lack of intranet connectivity in some of the clinical departments kept the majority of our study population from accessing and gathering information from the intranet. Respondents of our study had certain content preferences for the drug spectrum, and emphasizing these preferences would probably increase its readership. The most commonly requested information types were new drugs, newer approvals (DCGI), drugs withdrawn from the market, and controversies regarding certain drugs. Clinicians have to keep abreast of recent developments in their field of expertise, and given their busy schedules, the intranet can be a user-friendly, fast, reliable and economical tool for disseminating drug information; the drug spectrum is just a click away. Drug therapies change with time; for example, more than 20 years ago nitrates were not part of the treatment of angina pectoris, whereas they currently form its mainstay. Drug Spectrum is therefore an essential tool for passing information to the clinicians and academicians of an institute, who treat innumerable patients every single day, and for helping them save precious time. It is our department that finds, selects, edits and uploads such important drug information on the intranet every quarter, so that clinicians can save the time of locating particular information in the vast pool on the internet; the time saved can be optimally utilized for patient care. The respondents felt that easy access is the drug spectrum's biggest advantage. Some of them felt that well-searched, crisp articles would help them keep abreast of recent knowledge and treat patients better, thus benefitting the patients; a few felt that the drug spectrum would help them in their clinical practice, while a few found its contents inadequate. To increase the readership of the drug spectrum, the editors of the journal have to make sure that the contents are in accordance with the features most recommended by the respondents in question 13 [Appendix 1]. Drug Spectrum is freely available for the medical and paramedical staff to read. The articles in the journal are contributed by doctors, so misuse could conceivably occur among junior medical and paramedical staff who might misinterpret the contents if appropriate guidance is not available. Most of the faculty and residents from the Department of Pediatrics felt that the drug spectrum contained too little information related to the pediatric population.
It is of utmost importance for clinicians to keep abreast of recent information in the subject of pediatrics; almost all the drug-related information in the drug spectrum pertained to the adult population, and hence the pediatricians felt the contents to be inadequate. A few respondents felt the contents of the drug spectrum to be controversial; where there is disagreement, as may often happen when the mind critically evaluates a scientific article, readers can always recheck the facts of the material they consider controversial. Some respondents desired print copies of the journal; our study, however, emphasizes regular use of the intranet for reading the journal. A few respondents felt that the drug spectrum should be accessible on their mobiles, which would only be possible with certain technological modifications. To begin with, the faculty and residents should be able to access the drug spectrum regularly on the intranet; other modes of accessibility can be considered later. Furthermore, once the intranet is established, it can serve as a reliable tool for disseminating authentic scientific, medical and institutional information. LIMITATIONS As this is a single-centre study, the results may not be extrapolated to other institutions across India or the wider world. Excerpt from the study questionnaire [Appendix 1]: Do you know what "drug spectrum" is? 13. In case such an activity exists, what features would you recommend? (a) New drug information; (b) Black box warnings for any drugs; (c) Drugs withdrawn from the market; (d) List of available drugs in the Indian market; (e) Newer approvals (DCGI); (f) Controversy regarding some drug; (g) Therapeutic guidelines for any disease; (h) Nobel laureate information; (i) News; (j) All of the above; (k) Any other. 14. Is it useful for you to have such an activity on the intranet? Open-ended items: advantages of the drug spectrum as an informative tool on the intranet; drawbacks of the drug spectrum as an informative tool on the intranet; suggestions to improve the drug spectrum; general comments. DCGI = Drug Controller General of India.
2018-04-03T02:43:53.501Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "118ac2abb166a22de2b21327983ad549ded04819", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/2229-3485.167093", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "58f30a3b18532ab09721afb99a1af01d70cb9925", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
69889710
pes2o/s2orc
v3-fos-license
A hybrid algorithm for routing optimization of AGVs with multi-task assignment To solve the multi-task assignment and routing optimization problem of AGVs, this paper proposes a hybrid algorithm. Based on the analysis of a practical case, it divides the working process of AGVs into two stages: loading and unloading. In the first stage, the shortest paths between loading and unloading sites are obtained via Dijkstra's method. In the next stage, a pre-processing method is applied to simplify the path network, and the unloading travel distance is optimized with the Hungarian algorithm. Through this two-stage processing method, the optimal routing design can be obtained. Finally, the result of a practical case shows that the method can be used to solve the routing optimization of AGVs with multi-task assignment problems. Introduction The Automated Guided Vehicle (AGV) is a kind of automatic transportation equipment and a branch of mobile robotics. With its excellent mobility, flexibility and intelligence, it can easily dock with a manufacturing system to fulfil material transportation needs. AGVs have therefore been applied in heavy industry, manufacturing, terminals, etc., and have very bright development prospects. Many researchers have focused on their development, routing design, assignment and scheduling. Guan et al. [1] proposed a variable neighborhood niche genetic algorithm (VNS/NGA) after studying and analysing existing path design methods. Xia et al. [2] built a model based on a visibility-link map in a static environment and used a GA to carry out and improve AGV path planning, which can quickly search for the optimal AGV path. Xiao et al. [3] adopted a multi-attribute task scheduling principle, considered the traffic subsystem and the processing subsystem separately, and proposed a deadlock-avoidance strategy. To minimize the total no-load distance while accounting for the characteristics of flexible manufacturing systems (FMS), Fazlollahtabar et al. [4] considered time-window constraints and AGV loading and provided a genetic algorithm to solve AGV path and scheduling problems. Song et al. [5] put forward an optimization plan for the distribution system using industrial engineering methods and simulated the logistics distribution system of an engine assembly line. Based on the characteristics of simulation analysis and the logistics distribution system, Wang et al. [6] applied a heuristic algorithm to obtain a solution suitable for the actual needs of the assembly line, and at the same time proposed a dynamic vehicle scheduling method by adjusting the control strategy and dynamically simulating the decision parameters. Although AGV systems are flexible and well organized, important problems such as difficult routing optimization and high no-load ratios still hinder the application of AGVs. In modern logistics systems, it is very common for many transportation requirements to occur simultaneously, so each AGV must cooperate with the others to complete the tasks. For example, in a common factory situation, several machine tools need different materials simultaneously; given the capacity limits of an AGV and the incompatibility of materials, one AGV cannot complete the task alone, and several AGVs are required to work together.
Thus, when multiple tasks occur, how to schedule the AGVs reasonably so as to shorten their transportation time and improve their working efficiency is an important problem, and it is the aim of this work. This paper proposes a hybrid algorithm to solve the multi-task assignment and routing optimization problems of a group of AGVs, with the aim of improving their utilization. Assumptions A real AGV scheduling system is very complex, so we make the following assumptions: (1) after unloading, materials enter the input buffer of a workstation and wait for processing by the machine, and enter the output buffer of the workstation after each processing procedure; (2) materials in the input buffer wait for processing under a FIFO rule, while materials in the output buffer can leave the workstation in any order; (3) an AGV can only load one unit of material at a time and moves uniformly along a planned path; (4) loading and unloading times are neglected, and if multiple AGVs arrive at the same point they must queue; (5) AGVs automatically avoid obstacles during operation; (6) an idle AGV waits at an unloading point without affecting other AGVs. Analysis In the automated guided vehicle system, it is assumed that an AGV parks at the unloading point after finishing its task. The positions of the idle AGVs change dynamically over time, and the numbers of tasks and idle AGVs are often unequal. Therefore, we divide the AGV scheduling process into two stages. In the first stage, tasks are assigned to the idle AGVs closest to them according to the unloading point location D_s and the loading point location P_i of the transport tasks, and the no-load running paths of the AGVs are designed to make the no-load distance shortest. In the second stage, according to the task information from the loading point P_i to the unloading point D_j, the loaded running path of each AGV is designed so that the total loaded distance is shortest. For the AGV path network design problem, it is generally assumed that the system path network is given but the edge directions are unknown, i.e., the network is undirected. In this paper, the AGV paths are designed based on the network shown in figure 1. The load flow between workstations is shown in table 1; each non-zero entry represents a task, and there are 6 tasks to be assigned at this time. For example, 835 means transporting 835 units of material from loading point P2 to unloading point D1 (Table 1. Load flow between workstations). Since the loading and unloading points are given by the tasks, the shortest path of each task can be obtained by Dijkstra's method, as sketched below. In the no-load stage, however, the parking locations and number of idle AGVs change constantly; in this paper we improve the efficiency of the handling system by optimizing the paths of the no-load stage.
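For stage one, a minimal Python sketch of Dijkstra's algorithm on an undirected weighted network is given below; the three-node toy graph reuses the P1-M1-D1 example discussed in the pre-processing section and is not the full network of figure 1.

```python
# Minimal sketch: Dijkstra's algorithm for the shortest distance between a
# loading node and an unloading node on an undirected, weighted path network.
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest distances from source to every node; graph maps
    node -> {neighbor: edge_length}."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Toy network: P = loading node, M = intermediate node, D = unloading node.
network = {
    "P1": {"M1": 1},
    "M1": {"P1": 1, "D1": 2},
    "D1": {"M1": 2},
}
print(dijkstra(network, "P1")["D1"])  # -> 3.0, matching the P1-M1-D1 example
```

A graph library would serve equally well in practice; the hand-rolled version is used here only to make the relaxation step explicit.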
Model For the path network G = (V, E), where V and E are the node and edge sets respectively, the direction of each edge is initially undetermined. Suppose the manufacturing system has n processing sites, forming a set W. Each processing site has a loading point P_i ∈ P and an unloading point D_j ∈ D, where P and D are the loading-point and unloading-point sets respectively. The load flow matrix between the workstations is f. The parameters and variables are as follows: L_ij, the shortest path length from the loading node i of a task to its unloading node j; t_ij, the total running time of an AGV; t_Uij, the no-load running time; t_BL, the remaining blocking time of the task's loading workbench; t_FR, the remaining idle time of the task's unloading workbench; c_ij, the carrying cost per unit distance of a loaded AGV per unit flow; w_ij, the carrying cost per unit distance of a no-load AGV; c_1 and c_2, the penalty costs due to machine-tool blockage and machine-tool idleness; C, the total cost of the processing subsystem and the transport subsystem; N, the total number of AGVs in the workshop; m, the number of arriving tasks; N_j, the number of idle vehicles at unloading point j; x_si, the number of idle vehicles travelling from unloading point D_j to task loading point P_i; x_sk, the indicator that task s is performed by AGV k. The objective is to minimize the total cost C subject to the following constraints: equation (2) expresses the capacity of the AGVs; equations (3), (4) and (5) restrict each AGV to performing only one task and each task to being performed by only one AGV; equation (6) states that the number of tasks dispatched from unloading point D_j is no more than the number of idle AGVs parked at D_j; equation (7) requires that the no-load running time of an AGV not exceed the remaining blocking time of the loading workbench; and equation (8) requires that the total running time of an AGV not exceed the remaining idle time of the unloading workbench. Routing network pre-processing Taking the practical case in figure 1 as an example, the pre-processing operation is carried out as follows. For the path P1-M1-D1, the intermediate node M1 is connected only to one loading node and one unloading node, at distances 1 and 2 respectively; removing node M1 and setting the distance between P1 and D1 to 3 does not affect the network. Similarly, for the two adjacent unloading nodes D1 and D3 there is only one path that does not pass through other loading or unloading nodes, namely D1-M2-M5-D3, with two intermediate nodes M2 and M5; as the path is unique, removing the two intermediate nodes does not affect the path network either. In this network the paths between adjacent loading and unloading nodes are not always unique; for example, there are two paths from P2 to D2, P2-M5-D2 and P2-M3-M6-D2, of equal length, so either path can be chosen and its intermediate nodes removed. When two adjacent nodes are joined by two or more paths of different lengths, only the shortest path is kept, and its length is taken as the distance between the two nodes. The result of pre-processing the network of figure 1 is shown in figure 2. Un-load/load shortest path calculation From the pre-processed path network of figure 2, the initial path matrix W between any two loading and unloading nodes is obtained. The matrix W is loaded into MATLAB, and the AGV path optimization program is run. By applying Dijkstra's algorithm iteratively, the path set Path1 of the no-load stage, the path set Path2 of the load stage, and the shortest distance matrices L1 and L2 between each loading and unloading point are obtained. Multi-task assignment solution Based on the task set Y_s and the idle AGV set Y_v, which are determined at the end of the task scheduling process, the task assignment can be calculated with the Hungarian algorithm, as sketched below.
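A minimal Python sketch of the stage-two assignment follows, with the Hungarian algorithm delegated to scipy's linear_sum_assignment; the 4×4 cost matrix is an illustrative placeholder for the no-load shortest-distance matrix L1, not the matrix computed in the paper.

```python
# Minimal sketch: assigning idle AGVs to transport tasks so that the total
# no-load travel distance is minimized (the Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: no-load travel distance for the idle AGV parked at unloading
# point D_(i+1) to reach the loading point P_(j+1) of task j. Placeholder
# values standing in for the matrix L1.
cost = np.array([
    [4, 1, 3, 2],
    [2, 0, 5, 3],
    [3, 2, 2, 1],
    [1, 3, 4, 2],
])

agv_idx, task_idx = linear_sum_assignment(cost)
for i, j in zip(agv_idx, task_idx):
    print(f"AGV at D{i + 1} -> task at P{j + 1} (distance {cost[i, j]})")
print("total no-load distance:", cost[agv_idx, task_idx].sum())
```

When the numbers of idle AGVs and tasks differ, the cost matrix is padded or restricted accordingly, mirroring the coefficient-matrix adjustment described in the text.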
It is assumed that there are four idle AGVs at four different unloading sites of four workstations, and that the loading sites of the four workstations have exactly four handling tasks. In this case we simply take the no-load shortest path matrix L1 obtained above as the coefficient matrix of the assignment problem and use the Hungarian algorithm to obtain the task assignment, yielding the result matrix S. For other situations, that is, when the number of idle AGVs is greater than the number of tasks, or multiple different transport tasks share the same loading point, or multiple idle AGVs share the same unloading point, the coefficient matrix is simply adjusted to the actual situation. In the practical case shown in figure 1 there are six tasks: P1→D3, P2→D1, P2→D4, P3→D2, P3→D4 and P4→D1. In practice, the parking location of each idle AGV is determined by its transport task in the previous stage, i.e., the current location of an idle AGV is not fixed in advance. It can be assumed that at the current time there are six idle AGVs in the system, two at each of the unloading points D1 and D4 and one at each of D2 and D3. The coefficient matrix L'' for multi-task assignment at this moment can then be constructed, and the assignment matrix S' calculated. Conclusion This paper provides a method for multiple transport task assignment and routing optimization of AGVs under multi-task conditions. To solve the scheduling problem, we divide the scheduling process into two stages: no-load and load travel. In the load-travel stage, the loading and unloading points are given by the tasks, and the shortest path of each task is obtained by Dijkstra's method. In the no-load stage, we improve the transport system's efficiency by optimizing the routing of the AGVs. Considering the constraints of the processing system, a model is built to minimize the total cost of the transportation and processing systems. Finally, we provide a hybrid algorithm that combines Dijkstra's algorithm and the Hungarian algorithm, first obtaining the no-load and load shortest path matrices and then assigning AGVs to the transport tasks according to the urgent task set. The results of the practical case show that the algorithm is effective and efficient.
2019-02-19T14:07:47.290Z
2018-11-06T00:00:00.000
{ "year": 2018, "sha1": "ada1c702d352003230d826c920d37643257b21d0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/189/6/062050", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "21ea5501ba3c0f7081abeecc57d42167c7eedefd", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
229443262
pes2o/s2orc
v3-fos-license
Depression Accelerates Tumor Cell Proliferation Via Regulating the Serotonin/miR-144 Axis in NSCLC Mice Non-small cell lung cancer (NSCLC) is a malignant tumor with a low survival rate and poor prognosis. Depression affects various diseases; however, its effect on the progression of NSCLC remains unclear. In the current study, chronic mild stress (CMS) mice were used as a depression animal model. In vivo, depression promoted tumor progression, increasing tumor indexes and reducing the survival rate. Serotonin secretion was remarkably elevated in both serum and tumor tissue and was positively related to tumor progression. In vitro, serotonin promoted the proliferation of A549 cells. Additionally, we observed that miR-144 expression was significantly downregulated in the serotonin-stimulated group, and further loss-of-function and gain-of-function assays verified that miR-144 is the downstream factor of serotonin under CMS conditions. Taken together, our research indicates that CMS-induced serotonin secretion accelerates NSCLC proliferation by inhibiting miR-144 expression, suggesting a potential therapeutic direction for NSCLC patients. Introduction Non-small cell lung cancer (NSCLC) threatens patients with high morbidity and mortality and a low 5-year survival [1]. It has been reported that around 85% of patients with lung cancer are diagnosed with NSCLC [2], underscoring the urgent need for promising therapeutic treatments. Various risk factors accelerate the progression of NSCLC and promote tumor proliferation, including inflammation [3], oxidation [4] and psychiatric factors such as depression [5,6]. Depression is generally induced by continuous chronic stress. Long-term exposure to chronic stress causes various physiological and pathological changes, from the central nervous system to the peripheral organs [7,8]. Ample research has documented a positive relationship between depression and the risk of ischemic or coronary heart disease, with causally increased morbidity and mortality [9]. Depression is also considered a chronic, inflammation-related disorder that abnormally alters the immune system, oxidative reactions and nitrosative stress [10]. In addition, the link between depression and chronic obstructive pulmonary disease has been well documented [11], emphasizing the critical role of depression in chronic disease. Depression has gradually come to be considered a pro-carcinogenic factor, and carcinoma is among the most malignant chronic diseases today. Increasing numbers of studies have provided an overall view of depression-related carcinoma progression [12-15]; however, little research on NSCLC with depression exists, and more effort is required in this direction. The mechanisms underlying depression-induced disease progression remain at an early stage of investigation. Serotonin is recognized as an important emotion-related factor that participates in the regulation of various diseases. Studies have described serotonin as a biomarker and potential target against heart failure [16,17], and its regulatory roles beyond neurotransmission have also been discovered: serotonin attenuates LPS-induced systemic inflammation [18] as well as intestinal inflammation [19]. In carcinoma, serotonin causes a dose-dependent increase in the proliferation of bladder [20] and prostate cancer cells [21]. However, the effect of serotonin on NSCLC remains unclear.
Based on these theories, our study was designed to investigate the relationship between depression, serotonin and NSCLC, providing a novel therapeutic direction and strategy against NSCLC. Animals Animal experimental protocols were approved by the institutional ethics committee and conformed to the NIH guidelines for the care and use of laboratory animals (NIH publication No. 85-23, revised 1996). NSCLC tumor-bearing mice were established as in a previous study [22]. In brief, six-week-old BALB/c nude mice were purchased from the Animal Center of the 2nd Affiliated Hospital of Harbin Medical University (Harbin, China). All mice were housed in a dedicated room (12 h dark/light cycle, controlled temperature of 22 ± 1°C, constant humidity of 55 ± 5%) for 1 week of acclimatization. According to the experimental design, nude mice bearing A549 tumor xenografts were randomly divided into the following groups: control, CMS, CMS+apocynin, CMS+SSRI, serotonin, serotonin+miR-144, and serotonin+negative control (NC). Tumor volumes were measured every week for 3 months. The CMS animals were obtained as a gift from the substance-dependence laboratory of Qiqihar Medical University. As shown in a previous study [23], the CMS protocol includes limited-room restraint, forced warm-water bathing, water/food deprivation, housing in wet sawdust, a reversed day/night cycle, etc. For the transfection protocol, A549 cells were cultured in serum-free medium for 12 h. miRNAs and Lipofectamine 2000 (Invitrogen, Carlsbad, USA) were each mixed for 5 minutes prior to transfection; the two mixtures were then combined, incubated at room temperature for 15 minutes, and added to the A549 cells. The transfection medium was replaced with regular growth medium after 6 h of transfection, as described previously. Enzyme-linked immunosorbent assay Serum contents of serotonin (BOSTER, Wuhan, China) were determined using an ELISA kit following the manufacturer's instructions. Western blot Protein samples from NSCLC tumor tissues were extracted and dissolved in RIPA buffer (Solarbio, Beijing, China) with protease inhibitors (Sigma, St. Louis, MO, USA), and the protein concentration was quantified by the BCA method (Beyotime, Shanghai, China). SDS-PAGE (10% polyacrylamide gels) was used to separate proteins of different molecular weights, which were then transferred to nitrocellulose membranes. After blocking with 5% non-fat milk, the membranes were incubated with primary antibodies against serotonin (ab66047, Abcam, USA) and GAPDH (TA-08, Zhongshan Golden Bridge Biotechnology, Beijing, China), the latter serving as internal control. After overnight incubation, a fluorescence-labeled secondary antibody was applied in the dark. An Odyssey Infrared Imaging System (LI-COR, Lincoln, NB, USA) was used to calculate the expression level of serotonin relative to the GAPDH internal control. Real-time PCR Total RNA was harvested from A549 cells using TRIzol reagent. Proliferation assay Proliferation of A549 cells was detected by CCK-8 assay. Before detection, 100 μL of medium containing A549 cells was seeded per well of a 96-well plate, CCK-8 reagent was added to each well (10 μL/well), and the cells were incubated for 4 h under cell-culture conditions. The optical density (OD) at 450 nm was then determined with an enzyme-linked immunosorbent assay plate reader (Bioreader). Statistical analysis All values are presented as mean ± S.E.M. Statistical comparisons were performed by Student's t-test between two groups or by one-way ANOVA for multiple comparisons; p < 0.05 was considered to indicate a significant difference. Data were analyzed using GraphPad Prism 7.0 software, and correlations between miR-144 and serotonin were assessed using Pearson, Spearman, and Kendall's rank correlation coefficient analyses [22].
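A minimal Python sketch of the statistical comparisons described above, using scipy in place of GraphPad Prism; all arrays are illustrative placeholder measurements, not data from the study.

```python
# Minimal sketch: the two-group t-test, multi-group one-way ANOVA, and
# the three correlation measures named in the Statistical analysis section.
import numpy as np
from scipy import stats

# Placeholder measurements for two groups (e.g., relative tumor index).
control = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
cms     = np.array([1.6, 1.8, 1.5, 1.7, 1.9])

# Two-group comparison: Student's t-test.
t_stat, p_two_group = stats.ttest_ind(control, cms)

# Multiple-group comparison: one-way ANOVA (third group as placeholder).
cms_ssri = np.array([1.2, 1.3, 1.1, 1.4, 1.2])
f_stat, p_anova = stats.f_oneway(control, cms, cms_ssri)

# Correlation between serum serotonin and a tumor index (placeholders).
serotonin   = np.array([0.8, 1.0, 1.4, 1.7, 2.1])
tumor_index = np.array([0.9, 1.1, 1.3, 1.8, 2.0])
r_pearson, _   = stats.pearsonr(serotonin, tumor_index)
r_spearman, _  = stats.spearmanr(serotonin, tumor_index)
tau_kendall, _ = stats.kendalltau(serotonin, tumor_index)

print(f"t-test p = {p_two_group:.4f}; ANOVA p = {p_anova:.4f}")
print(f"Pearson r = {r_pearson:.2f}, Spearman rho = {r_spearman:.2f}, "
      f"Kendall tau = {tau_kendall:.2f}")
```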
Depression is positively related to a malignant outcome in NSCLC mice Depression plays a vital role in the progression of various diseases, promoting deterioration and ultimately leading to a poor outcome. To detect the influence of depression on NSCLC, we established NSCLC mice with CMS and investigated the criteria of tumor progression, including tumor size, volume, weight and survival rate. Apocynin, commonly recognized as an anti-depression drug, was administered to CMS mice as the CMS+apocynin comparison group. Serotonin is involved in depression-related NSCLC tumor progression Serotonin is a key regulatory factor of emotion, especially depression. To gain insight into the mechanism, we examined the levels of serotonin in serum and tumor tissue homogenate. The results show that serotonin was significantly elevated in the CMS group and repressed after apocynin treatment (figure 2A-2C). To probe the relationship between serotonin and NSCLC tumor progression, we treated CMS mice with a selective serotonin reuptake inhibitor (SSRI). Serotonin inhibition by the SSRI ameliorated the NSCLC condition, including tumor size (figure 2D), tumor volume (figure 2E), tumor weight (figure 2F) and survival rate (figure 2G). Relationship analysis shows that serotonin is positively related to the progression of NSCLC (figure 2H-2J), suggesting its regulatory role in NSCLC. Figure 2 legend: (B-C) serotonin levels were detected by real-time PCR and western blot in mouse tissue homogenate, respectively; n = 5, **p < 0.01, ***p < 0.001 compared with control group, ###p < 0.001 compared with CMS group. (D-G) Tumor size, volume, weight and survival rate were recorded and calculated in the control, CMS and CMS+SSRI groups; n = 5 per batch, **p < 0.01, ***p < 0.001 compared with control group, #p < 0.05, ##p < 0.01 compared with CMS group. (H-J) Relationship between serum serotonin levels and tumor indexes (tumor size, volume and weight); the X axis represents the relative tumor index and the Y axis the relative serum serotonin level; n = 10 per batch. Serotonin promotes the progression of NSCLC by downregulating miR-144 Previous studies have shown that miR-144 not only participates in the proliferation of NSCLC cells [29,30] but also regulates depression-related physiological processes [31-33]. On this basis, we further examined the underlying mechanism with respect to miR-144, using A549 cells as the in vitro material. The proliferation of A549 cells was measured as shown in figure 3A. Discussion Our research observed that (1) depression promotes the progression of NSCLC; (2) serotonin, at least in part, mediates depression-induced NSCLC aggravation; (3) miR-144 is the downstream effector of serotonin; and (4) the serotonin/miR-144 axis is the key regulatory pathway mediating depression-accelerated NSCLC cell proliferation. We analyzed multiple pathogenic factors of NSCLC and uncovered the influence of a psychiatric factor, as well as its underlying mechanism, in this carcinoma. This study thus provides not only a therapeutic strategy and potential targets for NSCLC, but also initiates a cross-disciplinary direction for NSCLC treatment. Lung cancer remains the leading cause of cancer mortality worldwide, and NSCLC accounts for the most common proportion of cases.
Although multiple treatments are in development, including thoracic surgery, chemotherapy, radiotherapy and targeted drug treatments, the 5-year survival rate and the prognosis remain unsatisfactory. Tumor cell proliferation contributes the main malignancy in NSCLC and requires an effective means of blockade. Signalling pathways involved in the proliferation of NSCLC tumor cells include the Wnt pathway [34], the NF-κB pathway [35] and the E2F2 pathway [36], among others. Studies have also highlighted the importance of miRNAs in this process and discovered the regulatory effects of the Notch-1/miR-137 and PDCD4/miR-421 axes on NSCLC proliferation [37,38]. Additionally, miR-124 has been reported to exert a beneficial effect on NSCLC [39,40] and is recognized as a tumor suppressor and prognostic marker. Such impairments adversely promote the formation of depression; in turn, depression accelerates the progression of chronic illnesses. The mortality rate in cardiac infarction patients with depression is remarkably elevated compared with patients in a normal mental condition (26% vs 7%) [41]. Research has also revealed depression to be an independent risk factor for heart failure, threatening the life-span of patients with cardiovascular diseases [42]. Long-term pressure and anxiety arising from illness, family and medical costs frequently contribute to depression in NSCLC patients, and NSCLC has been identified as having the highest rate of co-morbid depression among all cancer types. Studies of NSCLC and depression have shown that mutant EGFR in NSCLC patients is negatively associated with depression [5], and that depression is related to worse survival in patients with newly diagnosed NSCLC [43]. Although several studies exist, the underlying reason for this association, together with its mechanism, is not entirely clear, motivating further investigation. Serotonin is a biogenic monoamine characterized as a neuromodulatory factor regulating neoplastic capabilities. It also acts as a local mediator in the gut and a vasoactive agent in the blood, exerting its biological effects by interacting with receptors and multiple pathways [44]. The biological underpinning of serotonin in depression is becoming increasingly understood: serotonin participates in the development of neuronal networks, and its dysfunction thereby contributes to brain disorders. The relationship between serotonin and the pathophysiology of depression has been well reviewed [45]; however, the biological relevance, triggers and molecular mechanisms are only beginning to be understood. A previous study revealed growth inhibition of prostatic carcinoma by serotonin antagonists [46], and serotonin activates the MAP kinase and PI3K/Akt signaling pathways in the progression of prostate cancer [47]. In addition, serotonin has been observed to be involved in small cell lung carcinoma, colonic adenocarcinoma, breast carcinoma and bladder carcinoma, among others [44], whereas the relationship between serotonin and NSCLC remains incompletely understood. Our study has thus begun to uncover this relationship, providing a novel therapeutic direction against NSCLC. The importance of miRNAs has been recognized in various physiopathological processes, and research on miRNAs in carcinoma has gradually been emphasized. The development of gastric cancer has been reported to be regulated by miR-183 via the LncRNA MALAT1/miR-183/SIRT1 axis and PI3K/AKT/mTOR signalling [48].
MiR-135a-5p promotes lung cancer progression by targeting LOXL4 [49]. The miR-155/miR-143 axis participates in TGF-β1-promoted colorectal cancer immune escape [50]. The involvement of the miR-150/β-catenin axis in colorectal cancer progression has also been identified recently [51]. Among these miRNAs, miR-144 has been considered a key regulator in cervical cancer [52], gastric cancer [53], colorectal cancer [54], and NSCLC. Studies have addressed the inhibitory effects of miR-144 on the radiosensitivity of NSCLC via regulation of ATF2 [30] and on cancer proliferation by targeting CDKL1 [55]. According to previous studies, miR-144 is also tightly related to depression: the expression level of miR-144-5p is significantly downregulated in depressive patients [31], suggesting its potential as a peripheral biomarker for depression-related pathologic processes. Our current study verified that miR-144 mediates depression-promoted NSCLC progression, indicating the clinical potential of miR-144 in psychiatric-combined carcinoma treatment.

Conclusion

In this study, we found that depression is positively related to the malignancy of NSCLC via elevated serotonin expression, and that the serotonin/miR-144 axis mediates depression-promoted NSCLC progression, with miR-144 acting to inhibit cancer cell proliferation. Future investigations are needed to verify the downstream targets of miR-144 and to define the specific serotonin receptor involved in the process. The source of serotonin and the mechanisms of its secretion also remain to be determined, calling for joint contributions across disciplines.
Silencing of miR-182 is associated with modulation of tumorigenesis through apoptosis induction in an experimental model of colorectal cancer

Background: miR-182-5p (miR-182) is an oncogenic microRNA (miRNA) found in different tumor types and one of the most up-regulated miRNAs in colorectal cancer (CRC). Although this miRNA is expressed in the early steps of tumor development, its role in driving tumorigenesis is unclear. Methods: The effects of miR-182 silencing on the transcriptomic profile were investigated using two CRC cell lines characterized by different in vivo biological behavior: the MICOL-14h-tert cell line (dormant upon transfer into immunodeficient hosts) and its tumorigenic variant, MICOL-14tum. Apoptosis was studied by annexin/PI staining and cleaved caspase-3/PARP analysis. The effect of miR-182 silencing on tumorigenic potential was addressed in a xenogeneic model of MICOL-14tum transplant. Results: Endogenous miR-182 expression was higher in MICOL-14tum than in MICOL-14h-tert cells. Interestingly, miR-182 silencing had a strong impact on the gene expression profile, and positive regulation of the apoptotic process was one of the most affected pathways. Accordingly, annexin/PI staining and caspase-3/PARP activation demonstrated that anti-miR-182 treatment significantly increased apoptosis, with a prominent effect in MICOL-14tum cells. Moreover, a significant modulation of the cell cycle profile was exerted by anti-miR-182 treatment only in MICOL-14tum cells, where a significant increase in the fraction of cells in the G0/G1 phases was observed. Accordingly, a significant growth reduction and a less aggressive histological aspect were observed in tumor masses generated by in vivo transfer of anti-miR-182-treated MICOL-14tum cells into immunodeficient hosts. Conclusions: Altogether, these data indicate that increased miR-182 expression may promote cell proliferation, suppress the apoptotic pathway, and ultimately confer aggressive traits on CRC cells. Electronic supplementary material: The online version of this article (10.1186/s12885-019-5982-9) contains supplementary material, which is available to authorized users.

In reference to CRC development, we identified miR-182-5p (miR-182) as one of the most up-regulated miRNAs in primary tumors compared to normal colon mucosa, suggesting its potential impact on target genes de-regulated in CRC [19]. A significant miR-182 increase is observed in the early phases of tumor development and is maintained in the metastatic process [20,21]. Plasma miR-182 concentrations were higher in CRC patients at stage IV than in controls, and decreased significantly 1 month after radical hepatic metastasectomy, indicating that evaluation of circulating miR-182 may complement the array of non-invasive blood-based monitoring and screening biomarkers [20]. Several studies have described miR-182 as an oncogenic miRNA implicated in the development of various malignant histotypes (reviewed in [22]). In CRC, the available evidence collectively indicates that miR-182 is one of the major players in the acquisition of malignant properties and is associated with pro-proliferative signaling pathways and tumor invasion [23-25]. Nevertheless, the mechanisms underlying the ability of miR-182 to promote the tumorigenic process have not yet been clarified. To fill this gap, we investigated the impact of miR-182 silencing in two human CRC cell lines endowed with different tumorigenic potential.
Analysis of transcriptomic and in vitro readouts of miR-182 silencing indicated that this miRNA counteracts apoptosis and affects cell proliferation. In addition, the in vivo results showed that miR-182 sustains tumor growth by altering tumor cell cycle dynamics and morphology.

Cell lines and patients

HT-29, Caco2, and LoVo cells were obtained from the American Type Culture Collection (ATCC HTB-38, ATCC HTB-37, ATCC CCL-229). The CG-705, MICOL-S, and MICOL-14 h-tert cell lines have been described previously [26] and were kindly provided by Dr. P. Dalerba (Columbia University, NY). Briefly, the CG-705 cell line was derived from a primary tumor of the right colon; the MICOL-S cell line was derived from the hepatic metastasis of a primary right colon cancer; and the MICOL-14 h-tert cell line was derived from a lymph-node metastasis of a patient with rectal cancer. The MICOL-S and MICOL-14 h-tert cell lines have similar in vitro morphology and express the same differentiation markers, but they were derived from individuals with different primary cancer locations, as reported in Table 1 of the paper cited above [26]. Both cell lines were unstable in vitro (i.e., they underwent growth arrest after a few in vitro passages) and were immortalized by h-TERT cDNA gene transfer. The MICOL-14 h-tert cell line behaves as non-tumorigenic in immunodeficient mice [27]. However, we demonstrated that subcutaneous (s.c.) injection of the MICOL-14 h-tert cell line into non-obese diabetic severe combined immunodeficient (NOD/SCID) mice, in combination with angiogenic factors, translated into the acquisition of an in vivo tumorigenic phenotype [27,28]. This property was consistently maintained thereafter, and in vivo tumorigenesis experiments confirmed that MICOL-14 h-tert cells behaved as dormant, whereas NOD/SCID mice injected with the tumorigenic variant MICOL-14 tum developed aggressive tumors within 6 weeks (not shown). Authentication of the specific genetic fingerprint by short tandem repeat (STR) DNA profile analysis showed that the two cell lines presented exactly the same loci number profile, confirming their genetic identity (data not shown); moreover, these cell lines were tested and scored negative for mycoplasma contamination when the experiments were performed. All cell lines were grown in RPMI-1640 medium (Invitrogen, Milan, Italy) supplemented with 10% fetal bovine serum (FBS; Gibco, Invitrogen), L-glutamine, Pen/Strep, and HEPES, and used within 6 months of thawing and resuscitation. The cells were harvested with trypsin-EDTA in their exponentially growing phase and maintained in a humidified incubator at 37°C with 5% CO2 in air. For this study, 5 patients with sporadic stage IV CRC were also selected [19], and their tumor tissue and normal mucosa samples were analyzed by qRT-PCR. The Ethics Committee of the University Hospital of Padova approved the study, and all patients provided written informed consent.

RNA extraction, reverse transcription and quantitative RT-PCR analysis

RNA was extracted from cells 24, 48, and 72 h after transfection using Trizol reagent (Thermo Fisher Scientific, MA), according to the manufacturer's instructions. RNA concentration and purity were measured with Nanodrop (Bio-Tek Instruments, Winooski, VT) and Agilent (Agilent Technologies, Santa Clara, CA) instruments. Reverse transcription and qRT-PCR experiments were conducted as previously described [19] using Taqman Gene Expression Assays (Applied Biosystems by Thermo Fisher Scientific). Expression data were normalized using RNU44 as a reference for miRNAs and HPRT1 for transcripts.
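The normalization just described implies a relative-quantification step. The sketch below illustrates that arithmetic assuming the common 2^-ΔΔCt method, which the text does not spell out; all Ct values are hypothetical.

```python
# Minimal sketch of relative qRT-PCR quantification, assuming the
# standard 2^-ΔΔCt approach with a reference gene (e.g., RNU44 for
# miRNAs or HPRT1 for transcripts). Ct values are hypothetical.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target vs. control, normalized to a reference gene."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# e.g., miR-182 in anti-miR-182-treated cells vs. anti-miR-NC controls
fold = relative_expression(ct_target_sample=27.5, ct_ref_sample=22.0,
                           ct_target_control=24.0, ct_ref_control=21.8)
print(f"relative miR-182 expression: {fold:.2f}")  # < 1 indicates silencing
```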
miRNA silencing by transient in vitro transfection

Cells were seeded in 6- or 24-well plates in complete RPMI medium for 24 h. The medium was then replaced with Opti-MEM I Reduced Serum Medium (Thermo Fisher Scientific), and a specific hsa-miR-182 mirVana miRNA inhibitor (Ambion by Thermo Fisher Scientific) was added to a total of 150 pmol/well; to allow cell transfection, Lipofectamine RNAiMAX transfection reagent (Invitrogen) was mixed with the miRNA inhibitor according to the protocol instructions. The mixture was incubated in the dark for 5 min at room temperature and then added to each well. In parallel, an equal number of cells were treated with an anti-miR-NC (mirVana miRNA inhibitor Negative Control #1; Ambion) as a control for normalizing out anti-miR-182-independent transfection effects. Cells plated in the medium used for the transfection, but without treatment, provided an additional control. Moreover, to monitor inhibitor uptake efficiency by flow cytometry, the same number of cells were transfected with a carboxyfluorescein-labeled RNA oligonucleotide (FAM-labeled Anti-miR Negative Control; Ambion). After overnight incubation, the Opti-MEM medium supplemented with the miRNA inhibitor or control was replaced with complete RPMI, and miRNA silencing was evaluated by qRT-PCR at different time points. At each time point, cells were also harvested for the experiments investigating miRNA function. In all silencing experiments, transfection efficiency consistently exceeded 80%, and miRNA expression levels were decreased by more than 70% in transfected cells compared to controls.

Apoptosis and cell cycle assay

To detect cell death, the Annexin-V-FLUOS staining kit (Roche, Mannheim, Germany) was used according to the manufacturer's instructions. For cell cycle analysis, cells were fixed with cold ethanol, stained with anti-human Ki67 (BD Biosciences, Franklin Lakes, NJ, USA), and then incubated for 1 h in a DAPI/RNAse solution. Cytofluorimetric analysis was performed on a FACS Calibur flow cytometer (Becton-Dickinson Immunocytometry Systems, NJ; excitation/emission wavelengths of 488/525 and 488/675 nm for Annexin-V and PI, respectively).

In vivo tumorigenesis assay

Non-obese diabetic/severe combined immune deficiency (NOD/SCID) mice were bred in our SPF animal facility. All procedures involving animals and their care conformed to institutional guidelines that comply with national and international laws and policies (EEC Council Directive 86/609, OJ L 358, 12 December 1987). Before in vivo transfer, the tumorigenic MICOL-14 tum cells were treated with the miR-182 inhibitor or with anti-miR-NC as a control. For tumor establishment, 7- to 9-week-old mice were injected s.c. into both dorsolateral flanks with exponentially growing untreated or miR-182-silenced MICOL-14 tum cells (0.5 × 10⁶ cells in a 100 μl volume containing Matrigel). After 1 week, mirVana miR-182 inhibitor in vivo ready (Life Technologies by Thermo Fisher Scientific) or the negative control was combined with Invivofectamine 2.0 Reagent (Life Technologies) and used for intratumoral injection to maintain miRNA silencing in vivo. The resulting tumor masses were inspected and measured as previously described [28]. In all experiments, the mice survived until the experimental endpoint, when they were sacrificed by cervical dislocation.
Tumors were harvested by dissection and either snap-frozen or fixed in formalin and embedded in paraffin for further analysis. Isoflurane anaesthesia was used prior to injecting mice with tumor cells and before sacrifice.

CRC grading and mitotic index evaluation

Tumor sections were evaluated by Hematoxylin and Eosin (H&E) staining for CRC grading and mitotic index evaluation. The 2010 WHO scoring for CRC grading, based upon the percentage of gland formation (> 75%, 35-75%, and < 35%, respectively), is as follows: G1, well differentiated cancer; G2, moderately differentiated cancer; and G3, poorly differentiated cancer. The main growth patterns were, from less to more aggressive: glandular, trabecular, and solid. The mitotic index, mirroring the ratio between the number of cells in a population undergoing and not undergoing mitosis, was calculated by counting the number of mitoses in 10 fields at 40X magnification.

Gene expression analysis

Expression data were generated using the Affymetrix GeneChip PrimeView Human Gene Expression Array (Affymetrix by Thermo Fisher Scientific) on total RNA isolated from MICOL-14 h-tert and MICOL-14 tum cells transfected with either anti-miR-182 or anti-miR-NC. Raw data quality control was performed using the R package 'affyQCreport' [30]. Expression matrix reconstruction was obtained with the 'affy' package [31], using RMA for data summarization and normalization. Transcript-level annotation of probesets, based on Ensembl (release 88), was obtained with the R package 'primeviewcdf'. Differential expression tests were conducted using the Limma package [32], setting the significance threshold to 0.05 for the p-value, adjusted using the FDR method for multiple-testing correction. Pathway enrichment analysis of differentially expressed genes was conducted using DAVID (Database for Annotation, Visualization and Integrated Discovery, release 6.8) [33]. Significant GO terms, PIR keywords, and KEGG and Reactome pathways were selected considering adjusted p-values (Benjamini-Hochberg) of at most 0.05. Experimentally validated and predicted miR-182 target transcripts were downloaded from MirTarBase (release 6.0) [34] and TargetScanHuman (release 7.1) [35], respectively.

Statistical analysis

Results are expressed as mean values ± SD. A two-tailed Student's t-test was performed on parametric groups. Values were considered significant at *p ≤ 0.05 and **p ≤ 0.01. All analyses were performed with SigmaPlot (Systat Software Inc., San Jose, CA).
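The differential expression testing described above was performed with the R limma package, and the group comparisons with SigmaPlot. The following Python sketch mirrors only the generic logic — a per-gene two-tailed t-test followed by Benjamini-Hochberg FDR correction — on a simulated expression matrix; it is not the study's actual pipeline.

```python
# Sketch of per-gene differential expression testing with
# Benjamini-Hochberg correction, analogous in spirit to the limma/FDR
# analysis described above. The expression matrix is simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes, n_rep = 500, 4
anti_mir = rng.normal(0.0, 1.0, size=(n_genes, n_rep))  # anti-miR-182 replicates
control = rng.normal(0.0, 1.0, size=(n_genes, n_rep))   # anti-miR-NC replicates
anti_mir[:50] += 1.5  # spike in 50 truly up-regulated genes

t, p = stats.ttest_ind(anti_mir, control, axis=1)       # two-tailed t-test per gene
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} genes significant at FDR-adjusted p <= 0.05")
```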
Results

miR-182 is up-regulated in CRC cell lines and can be efficiently silenced in tumorigenic and non-tumorigenic cell lines

miR-182 expression levels were evaluated by qRT-PCR in normal colon mucosa samples, as a reference, and in a panel of seven CRC cell lines. Significant miR-182 upregulation was observed in all the analyzed cancer cell lines (Fig. 1A), strengthening the evidence that increased miR-182 expression is a shared feature of CRC [19]. The highest miR-182 expression levels were measured in MICOL-14 tum cells, followed by the parental MICOL-14 h-tert cells. Based on these results, we focused the subsequent miR-182 silencing experiments on MICOL-14 tum and MICOL-14 h-tert cells, as a model of two cell lines that share the same STR DNA profile but differ in key phenotypic properties, such as the ability to generate tumors in immunodeficient recipients. Treatment with anti-miR-182 effectively inhibited miR-182 expression in both cell lines. In particular, 24 h after treatment, miR-182 expression was significantly repressed, by factors of 0.55 (p = 0.0034) and 0.17 (p = 0.0008) in MICOL-14 h-tert and MICOL-14 tum cells, respectively. Silencing was maintained at all the time points considered and lasted for over 72 h in both cell lines (Fig. 1b).

miR-182 silencing strongly increases apoptosis and affects the cell cycle

We next wondered whether miR-182 silencing could affect key properties of the MICOL-14 h-tert and MICOL-14 tum cell lines, such as apoptosis and cell cycle dynamics. Judging from annexin/PI staining, miR-182 inhibition was associated with a significant increase in apoptosis in both cell lines compared to untreated cells (NT) and control anti-miR-NC-treated cells (Fig. 2a). At 24 h post-treatment, the increase in apoptosis was comparable in MICOL-14 h-tert and MICOL-14 tum cells, whereas at later time points (48 and 72 h) apoptosis levels were significantly increased in the tumorigenic cell line compared to the dormant counterpart. Western blot analysis of cleaved PARP and caspase-3 proteins, performed 48 h post-treatment, confirmed these results. Indeed, as shown in Fig. 2b, a decrease in total PARP and a corresponding increase in cleaved PARP were observed in both MICOL-14 h-tert and MICOL-14 tum cells compared to cells treated with control anti-miR-NC. However, the ratio between total and cleaved PARP was lower in MICOL-14 tum cells, indicating that the complex machinery regulating apoptotic phenomena was preferentially affected by miR-182 silencing in the tumorigenic cell line. The involvement of miR-182 in cell cycle progression was supported by proliferation rate analysis. While MICOL-14 h-tert cells disclosed only minimal changes in cell cycle profile after anti-miR-182 treatment (Fig. 2c), a significant increase in the fraction of cells in the G0/G1 phases was observed in MICOL-14 tum cells, associated with a corresponding decrease in the S and G2 phases (Fig. 2c). These data indicated that miR-182 inhibition in MICOL-14 tum cells may modulate the cell proliferation rate and strongly induce apoptosis.

miR-182 silencing significantly affects the gene expression profile of MICOL-14 h-tert and MICOL-14 tum cells

To explore the complex biological processes involved in the functional changes described above, transcript and gene expression profiling was performed on MICOL-14 h-tert and MICOL-14 tum cells 24 h after treatment with anti-miR-182 or anti-miR-NC. Four replicates per cell type and condition were tested. Expression profiles of 49,293 probesets, corresponding to 41,532 transcripts and 19,942 individual genes, were acquired in the 16 samples considered. Unsupervised principal component analysis (PCA) of the transcript expression profiles showed that samples separated first by cell line, indicating that the two cell lines display highly different expression profiles, and then by treatment, underlining the effect of miR-182 inhibition on the expression profiles of both lines (Fig. 3a). Accordingly, the expression data informed on differential expression between the dormant and the tumorigenic cell lines and, more importantly, on the expression changes determined by miR-182 silencing in each cell line. Comparing anti-miR-182 vs anti-miR-NC, significant differential expression was detected in both cell lines (Fig. 3b), with a more marked impact of miR-182 silencing in MICOL-14 tum (3472 differentially expressed transcripts from 1382 genes, 40% up-regulated) than in MICOL-14 h-tert cells (669 transcripts from 243 genes, 73% up-regulated).
Genes differentially expressed after miR-182 silencing are expected to include both direct miRNA targets, likely enriched among the genes up-regulated after miRNA silencing, and indirectly regulated genes, reflecting the impact of miR-182 on transcriptional and post-transcriptional regulators in complex regulatory circuits. According to our data, 759 genes had transcripts (1825 in total) significantly up-regulated after miR-182 silencing (Additional file 1: Table S1). Of the 158 genes with transcripts differentially expressed after miR-182 inhibition in both cell lines, the vast majority (153) showed expression changes in the same direction in the two cell lines, prevalently (103) up-regulation. Functional Gene Ontology (GO) terms and significantly enriched pathways were detected considering the genes differentially expressed after miR-182 inhibition in each cell line (Additional files 2 and 3: Tables S2-S3) and in both cell lines (Table 1). Consistent with the in vitro data on the impact of miR-182 silencing on the apoptotic process, "positive regulation of apoptotic process" was the most enriched biological process among the genes differentially expressed in both cell lines after miR-182 inhibition. Moreover, an enrichment of the p53 signaling and FoxO signaling pathways, both multifunctional processes that cross-talk with apoptosis regulation through common genes and proteins [36], was also observed. The significant upregulation after miR-182 silencing of the miR-182 predicted target transcripts of the HIST1H2BH, NABP1, RND3, and TRIO genes (all encoding proteins with a potential role in the DNA-damage response and invasion) was confirmed by transcript-specific qRT-PCR assays (Fig. 4a-b and Additional file 4: Table S4). In particular, the NABP1 gene, which belongs to the GO "DNA repair" category linked to the apoptotic process, was significantly enriched in the anti-miR-182-treated tumorigenic cell line. Interestingly, a significant decrease in NABP1 expression was observed in a pool of primary CRC samples, in which increased miR-182 levels had previously been assessed [21], compared to matched normal colon mucosa (Fig. 4c).

Anti-miR-182 treatment impairs the in vivo tumorigenic potential of MICOL-14 tum cells

Since the silencing achieved in vitro was transient (Fig. 1b, and data not shown), 1 week after cell transfer an intra-tumor injection of anti-miR-182 was performed to buttress in vivo miR-182 silencing (Fig. 5a). The mice inoculated with control MICOL-14 tum cells developed significantly larger tumors than mice injected with anti-miR-182-treated cells (Fig. 5b). Interestingly, miR-182 inhibition was associated with a significant reduction in tumor size 3 weeks after injection (p = 1.56 × 10⁻⁵), and at 5 weeks the volume of the tumor masses was still significantly different (Fig. 5b; p = 0.037). Notably, miR-182 inhibition was associated with evident histological and morphological changes in the tumor tissue harvested from the immunodeficient recipients (Fig. 5c). In fact, the tumor masses generated by MICOL-14 tum control cells consistently showed moderately to poorly differentiated adenocarcinoma with a bulky appearance, a trabecular-solid pattern, minimal fibrosis, and pushing borders. In contrast, the tumor masses that developed after inoculation of anti-miR-182-treated MICOL-14 tum cells showed mainly moderately differentiated adenocarcinoma with mild internal fibrosis (Fig. 5c).
Moreover, the average mitotic index of the tumor masses was significantly higher in control mice than in animals injected with anti-miR-182-treated cells (Fig. 5d), indicating that miR-182 inhibition also impairs cell proliferation in vivo.

Discussion

miR-182 deregulation has been reported in several human cancer types, including CRC. We previously observed that miR-182 overexpression is already present in the transition from normal colonic mucosa to tubular adenoma and is stably maintained in primary CRC tumors and liver metastases. This seems to indicate that miR-182 upregulation occurs early in premalignant development and is associated with the maintenance of the malignant phenotype [19]. Furthermore, we also demonstrated that high expression levels of miR-182 do not characterize mucosa samples from patients with inflammatory bowel disease, thus suggesting that its deregulation is not a mere consequence of the chronic inflammatory process [21]. Interestingly, in a large functional miRNA screening, Cekaite et al. found that the miR-182 gene, a component of the miRNA cluster miR-183-96-182 located in the 7q32 genomic region, is amplified in 26% of primary CRC and 30% of liver metastases [25]. (Table 1 gene list: ITGB3BP, TUBB2A, EID2B, CLK1, HIST2H4A, TCEAL1, CAMKK2, NFATC2IP, FUBP1, SFSWAP, CCNE1, ZNF181, BLZF1, CLK4, ANKRD11, NSMCE2, AKIRIN1.) In the same large-scale analysis, a link between reduced apoptosis and deregulation of a combined set of miRNAs, namely miR-9, miR-31, and miR-182, was also reported in two independent CRC cell lines, suggesting that miR-182 is involved in CRC development and progression by promoting cell survival. Thus, the impact of miR-182 on apoptosis, proliferation, and invasion, as well as on chemo-resistance, has recently been addressed in search of a link between its high expression and the acquisition of functional properties favorable to tumor development [37-39]. In the present study, the impact of miR-182 silencing on the biological properties of the MICOL-14 h-tert and MICOL-14 tum cell lines was investigated in vitro and in vivo, demonstrating that miR-182 down-regulation strongly increases apoptosis and affects cell cycle dynamics in both cell lines, with a more pronounced and longer-lasting effect in the tumorigenic cell line than in its dormant counterpart. Evidence that anti-miR-182 treatment impairs the tumorigenic potential of the MICOL-14 tum cell line after xenogeneic transplant into immunodeficient mice was also provided. However, miR-182 silencing was associated with a delay in the generation of tumors by the MICOL-14 tum cell line and did not abrogate its tumorigenic potential. Reactivation of miR-182 a few weeks after silencing in some transduced cells, and their eventual outgrowth, or the presence within the transferred population of a few cells with ineffective silencing, could explain this finding. miRNAs are highly pleiotropic, and a single miRNA can influence many genes; deregulation of a single miRNA can therefore deeply affect cellular phenotypes. Indeed, tumor masses generated by miR-182-silenced MICOL-14 tum cells showed histological features compatible with less aggressive carcinomas compared to untreated tumors. This could suggest that miR-182 plays a role in apoptosis as well as in other processes, including cell survival and differentiation. On the other hand, gene expression profiling showed that miR-182 silencing affects the expression of a large number of genes in both MICOL-14 h-tert and MICOL-14 tum cells, with a stronger impact in the tumorigenic cell line.
The two cell lines were endowed with different gene expression profiles and behaved differently in response to anti-miR-182 treatment. Nevertheless, 158 genes were differentially expressed in both cell lines and pointed to three significantly enriched pathways correlated with cellular survival: "positive regulation of apoptotic process", "p53 signaling", and "FoxO signaling". These pathways share two interesting components of the Gadd gene family, GADD45A/B. Gadd protein expression can be induced, in a p53-dependent or -independent way, by DNA damage and other stress signals associated with growth arrest and apoptosis [40]. These proteins have been implicated in a variety of responses to cell injury, including the control of cell cycle checkpoints, apoptosis, and DNA repair. We confirmed by qRT-PCR the significant upregulation after miR-182 silencing of two genes, HIST1H2BH and NABP1. HIST1H2BH is a member of a large histone gene family (histones H2A, H2B, H3, and H4). Two H2A/H2B heterodimers and one H3/H4 tetramer, associated with DNA, form the compact chromatin structure of the nucleosome; interestingly, H2A/H2B plays an important role in processes that allow transcription, DNA replication, and DNA repair [41]. NABP1, also known as SSBP2, encodes a component of the single-strand DNA binding complex, whose role in the maintenance of genomic stability has only recently emerged [42]. NABP1 influences diverse endpoints in the cellular DNA damage response, including cell cycle checkpoint activation. We demonstrated in a pool of primary CRC samples a significant decrease in NABP1 mRNA levels in tumor tissue compared to normal mucosa, strengthening the gene expression observations. Our findings are in line with the data of Krishnan et al. in breast cancer [37] and specifically support the idea that, in CRC as well, miR-182-mediated deregulation of the DNA damage response pathway could translate into impaired DNA repair, with downstream effects on genetic stability and cellular transformation.

Conclusions

Altogether, our data highlight the relevance of miR-182 dysregulation in CRC tumorigenesis and provide evidence that this miRNA controls apoptosis and proliferation, clearly pointing to specific components of the apoptosis and DNA repair processes that are highly represented in the network of validated or predicted miR-182 target genes.

Additional files

Additional file 1: Table S1. Additional file 4: Table S4. MiR-182 predicted target transcripts whose differential expression in MICOL-14 h-tert and/or MICOL-14 tum cells after treatment was confirmed by qRT-PCR. The table shows the transcripts and the corresponding genes, probesets, and Taqman Assay IDs used for experimental qRT-PCR validation. For each probeset and cell line, the expression variation observed in the PrimeView microarray data analysis is shown as the logFC of the anti-miR-182 vs anti-miR-NC comparison; values corresponding to a statistically significant differential expression are in bold. (DOCX 19 kb)

Abbreviations: CRC: colorectal cancer; miRNA: microRNA; NOD/SCID: non-obese diabetic/severe combined immune deficiency; s.c.: subcutaneous
Data-driven self-optimization of processes in the presence of the model-plant mismatch

In this paper, a self-optimization algorithm is developed to find both the optimal operating point and the path from the current condition to that point. Being a model-based strategy, a generalized locally weighted probabilistic principal component regression (PPCR) model, which is robust to outliers and can handle missing data, is developed to model the plant. To account for the model-plant mismatch, a penalty term in the form of a robust Gaussian process regression is incorporated into the optimization process. A non-linearity index is utilized to control the accuracy of the local model. Finally, exploration in the optimization through acquisition functions is studied. The performance of the proposed algorithm is demonstrated on a simulation case study of a deethanizer column.

This work was supported by the Natural Sciences and Engineering Research Council of Canada. Corresponding author: Biao Huang (biao.huang@ualberta.ca).

INTRODUCTION

Increasing productivity, safety, and efficiency have always been the main goals of industrial plants. The objective of plant optimization is to reduce resource wastage and remove bottlenecks while accomplishing the objective of the plant and meeting all plant constraints, including operational, economic, and safety constraints. Due to the reduction in the availability of raw materials (Manhart et al., 2019), the increase in market demand for products driven by the growth of the world's population (Mehta et al., 2020), and environmental concerns such as global warming caused by greenhouse gas (GHG) emissions (Zhang et al., 2021), plant optimization has gained more popularity. One approach to plant optimization is model-based. A model is generally obtained through two different approaches: (i) first principles and (ii) data-driven (Wiebe et al., 2018; Chen et al., 2013). In the first-principles model-based approach, the plant is modeled by deriving the governing equations from fundamental laws, which requires an in-depth understanding of the plant (Pani and Mohanta, 2011). In the data-driven model-based approach, on the other hand, a model is built from historical data. The closer the developed model is to the plant, the more accurate the results obtained by solving the optimization problem. However, due to differences between the model and the plant (model-plant mismatch) and disturbances that may occur during data collection, the optimal point obtained by solving the optimization problem will differ from the true optimal point (de Avila Ferreira et al., 2018). To account for the model-plant mismatch in process optimization, the modifier adaptation scheme has been applied (e.g., Oliveira-Silva et al., 2021). In this scheme, the error between the developed model and the plant is incorporated into the objective function during the optimization, using information and measurements collected from the plant. Marchetti et al. (2009) provided a theorem demonstrating the equivalence of the KKT conditions between the plant and the model once the modifier adapters are included, and suggested using gradients of the objective function and constraints calculated from plant measurements as the functional form of the modifier adapter.
Although the calculation of gradients from noisy plant measurements can be challenging, it has been demonstrated to be a reasonably reliable and effective approach. To overcome the challenges of gradient calculation, several methods have been proposed, such as nested modifier adapters (Navia et al., 2015), recursive modifier adapters (Marchetti et al., 2010), and derivative-free modifier adapters (Gao et al., 2016). Recently, de Avila Ferreira et al. (2018) proposed using Gaussian process regression (GPR) as a modifier adapter: historical data and real measurements obtained from the plant are used to train the GP, yielding a nonlinear model that accounts for the model-plant mismatch. del Rio Chanona et al. (2019) proposed a trust-region framework with Gaussian process modifier adapters to control the optimization region and avoid possible constraint violations. However, convergence to a local optimum remains a problem in all the aforementioned methods. One approach to overcome this challenge is to consider uncertainty when solving the optimization problem (del Rio Chanona et al., 2021). With the development of reinforcement learning, the self-reflective objective is gaining popularity (Kiran et al., 2021); in this approach, the accuracy and reliability of the optimization are improved by considering uncertainty. Although many studies have focused on increasing the accuracy of modifier adaptation, the potential of reinforcement learning has not been studied extensively in modifier adaptation and optimization problems in general. One concept that can help increase the accuracy of the optimization is the acquisition function, used in Bayesian optimization to balance exploration (trying something new) and exploitation (continuing what has worked) (del Rio Chanona et al., 2021). In all the aforementioned studies, the modifier adaptation scheme is used together with first-principles models, which essentially require an in-depth understanding of the process and hence are not always feasible. In addition to finding the optimal point, an efficient way to steer the process to the optimal point is of paramount importance. Trust-region-based real-time optimization (RTO) is one solution for finding an efficient path to the optimal point (Liu and Chen, 2004); however, data-driven RTO has not been well studied (Powell et al., 2020). In view of the aforementioned points, a novel self-optimization algorithm is developed in this work that can find both the plant's optimal point and an efficient way to shift the current operating condition to the optimal one. The proposed algorithm considers a generalized weighted PPCR model owing to its ability to deal with missing data and outliers in both input and output variables (Memarian et al., 2021; Yuan et al., 2017). Since weighted PPCR is a linear model while the plant is in general nonlinear, a non-linearity index is used to help the local data-driven model determine its accuracy. The non-linearity index measures the mismatch between the locally weighted PPCR model and a nonlinear GPR model; this index is then used to determine the trust range of the generalized locally weighted PPCR model, thereby providing a measure of model accuracy. In addition, GPR is used as a modifier adapter to account for the model-plant mismatch.
Finally, an acquisition function is adopted to study exploration during the optimization process. The remainder of this paper is organized as follows. Section 2 presents the data-driven self-optimization in the presence of model-plant mismatch and the study of acquisition functions for exploration. The efficiency of the algorithm is illustrated in Section 3 through a simulation of a deethanizer column, demonstrating its applicability and feasibility, and conclusions are drawn in Section 4.

DATA-DRIVEN SELF-OPTIMIZATION OF PROCESSES IN THE PRESENCE OF THE MODEL-PLANT MISMATCH

In this section, the data-driven self-optimization of processes in the presence of model-plant mismatch is presented. The proposed approach utilizes a generalized locally weighted PPCR model that can handle missing data in both input and output variables along with outliers. Furthermore, owing to its weighted local-model property, it can efficiently handle the nonlinear and/or multi-modal nature of plants (Memarian et al., 2021; Yuan et al., 2017). A robust Gaussian process regression model is used to determine the model-plant mismatch between the weighted PPCR model and the plant. To balance exploitation and exploration in the optimization, the lower confidence bound described in del Rio Chanona et al. (2021) is used as the acquisition function in both the objective and constraint functions. The details are provided in the rest of this section.

Generalized weighted PPCR model formulation

An important step in solving an optimization problem is building a suitable model that can describe the plant with sufficient accuracy; data-driven modeling is one approach to achieving this objective. In the proposed self-optimization algorithm, a generalized weighted PPCR model is used as the data-driven plant model (Memarian et al., 2021; Yuan et al., 2017); it is one of the simplest models for dealing with uncertainties in plant datasets. The generative equation of the generalized locally weighted PPCR model is presented in Eq. (1), where $x_i \in \mathbb{R}^{m\times 1}$ and $y_j \in \mathbb{R}^{r\times 1}$ denote the input and output data, respectively, $P \in \mathbb{R}^{m\times q}$ and $C \in \mathbb{R}^{r\times q}$ are the weighting matrices, and $t_i \in \mathbb{R}^{q\times 1}$ is the vector of latent variables defined in Eq. (2). The variables $e_i \in \mathbb{R}^{m\times 1}$ and $f_j \in \mathbb{R}^{r\times 1}$ denote the measurement noise in the input and output, respectively, which is assumed to follow a mixture of two Gaussian distributions, given in Eqs. (3) and (4), to account for both outliers and regular noise. The mean values of the input and output variables are denoted $\mu_x$ and $\mu_y$, respectively; $n$ is the total number of observations, of which $n_1$ are labeled. Due to the nonlinear and/or multi-modal nature of plants, developing a single PPCR model to capture the entire plant is not suitable. Thus, to improve the modeling accuracy, exponential weights are calculated based on Euclidean distance to select the most relevant data points for building the model. The weights are calculated according to Eq. (5), where $\varphi$ is a tuning parameter that defines how the weights are spread across the neighborhood of the testing data to develop the locally weighted PPCR model, and $d_i$ is the Euclidean distance. Further details of the locally weighted PPCR model can be found in Yuan et al. (2017).
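Since Eq. (5) is not reproduced in the text above, the following sketch illustrates the locality-weighting idea under an assumed functional form, exp(-d_i^2/φ): weights decay with Euclidean distance from the query point, and shrinking φ concentrates the effective data set around it.

```python
# Sketch of the locality weighting used to build the locally weighted
# PPCR model. The exact form of Eq. (5) is not given in the text, so
# exp(-d_i^2 / phi) is an assumption; the qualitative behavior (fewer
# effective points for smaller phi) is what matters here.
import numpy as np

def locality_weights(X_hist, x_query, phi):
    """Exponential weights for historical samples around a query point."""
    d = np.linalg.norm(X_hist - x_query, axis=1)  # Euclidean distances d_i
    w = np.exp(-d**2 / phi)                       # assumed form of Eq. (5)
    return w / w.sum()                            # normalized for convenience

X_hist = np.random.default_rng(1).normal(size=(200, 2))
for phi in (2.0, 0.5, 0.1):
    w = locality_weights(X_hist, x_query=np.zeros(2), phi=phi)
    print(f"phi = {phi}: effective sample size = {1.0 / np.sum(w**2):.1f}")
```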
The model is developed under the framework of the expectation-maximization (EM) algorithm. In the E-step, the expectation of the log-likelihood function — the Q-function presented in Eq. (6) — is derived, where $\theta = \{P, C, \mu_x, \mu_y, \sigma_x, \sigma_y, \delta_x, \delta_y, \rho_x, \rho_y\}$ is the set of model parameters to be estimated, defined in Memarian et al. (2021), and $O$ and $M$ are the sets of observed and missing data, respectively. The Q-function of Eq. (6) is calculated by incorporating Eqs. (2)-(5) and following Bayes' rule. The parameters are then estimated by solving the partial derivatives of the Q-function in the M-step of the algorithm. The EM algorithm is an iterative procedure in which the E-step and M-step are repeated until convergence. In the rest of this paper, the generalized locally weighted PPCR model is denoted $G_{PPCR}$.

Data-driven self-optimization algorithm formulation

Since plant conditions change over time, the historical data used to build the model may not accurately describe the current condition of the plant. Therefore, a model-plant mismatch exists between the weighted PPCR model built from the historical data and the current condition of the plant. To account for this mismatch, de Avila Ferreira et al. (2018) proposed using Gaussian process regression (GPR). In our problem, the objective of this GPR model is to capture the difference between the values of the objective function calculated from the plant data (real-time measurements) and the estimates from the locally weighted PPCR model. A similar approach is also used for the constrained variables, and the resulting set of equations is

$$\delta G_i = G^{P}_i - G^{PPCR}_i \sim \mathcal{GP}\bigl(\mu_{\delta G_i}, \sigma^2_{\delta G_i}\bigr), \quad i = 0, \ldots, n_g \qquad (7)$$

where $n_g$ is the total number of constraints. In Eq. (7), the difference between the variable measurements and their predictions from the locally weighted PPCR model is calculated, and the mean error is determined through the Gaussian process regression model (a minimal sketch of such a GP modifier is given below). Hence, the optimization problem of Eq. (8) is solved, in which $\mu^k_{\delta G_i}$ is the estimated mean of the GP regression that accounts for the model-plant mismatch at iteration $k$. The mean values used in Eq. (8) are those estimated from Eq. (7), which models the model-plant mismatch and corrects the bias in both the objective function and the constraints of the optimization. The manipulated variable $u$ is defined according to the process. As discussed in Section 2.1, the effective number of data points relevant to the current operating condition is determined by tuning the parameter φ; by decreasing φ, fewer data points effectively contribute to the model construction. If the current operating point lies in a highly nonlinear region, building the locally weighted PPCR (linear) model for a given φ might not be valid. Thus, by decreasing the parameter φ, fewer data points closer to the current point receive significant weights, and a generalized locally weighted PPCR model over a smaller region is built. Conversely, when the weighted PPCR model approximates the nonlinear plant well, the region can be enlarged so that more data points receive higher weights. Hence, a non-linearity index is proposed to define the range of data to be effectively used, and based on this index the parameter φ can be tuned.
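The following minimal sketch illustrates the GP modifier of Eq. (7) with toy stand-ins for the plant and the local model: a Gaussian process is fitted to the model-plant residuals, and its predictive mean then corrects the model prediction as in Eq. (8). The plant and model functions here are illustrative, not the PPCR formulation itself.

```python
# Sketch of a GP modifier in the spirit of Eq. (7): fit a GP to the
# residuals between plant measurements and local-model predictions,
# then correct the model with the GP mean. Toy functions throughout.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def plant(u):                     # "true" plant response (unknown in practice)
    return np.sin(3 * u) + 0.1 * u**2

def model(u):                     # simple local model standing in for G_PPCR
    return 0.1 * u**2

U = np.linspace(-2, 2, 15).reshape(-1, 1)        # operating points visited so far
residuals = plant(U).ravel() - model(U).ravel()  # model-plant mismatch samples

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(U, residuals)

u_new = np.array([[0.7]])
mu, sigma = gp.predict(u_new, return_std=True)
corrected = model(u_new).ravel() + mu            # model + mean mismatch, as in Eq. (8)
print(f"corrected: {corrected[0]:.3f}, plant: {plant(u_new).ravel()[0]:.3f}, sd: {sigma[0]:.3f}")
```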
The non-linearity index calculates the performance ratio between the nonlinear model (the GP regression model built from the historical data) and the linear locally weighted PPCR model, as shown in Eq. (9). After calculating the non-linearity index from Eq. (9), and in analogy with trust-region optimization (del Rio Chanona et al., 2019), three thresholds $0 < \eta_1 \le \eta_2 < \eta_3 \le 1$ are used to tune φ. The shrinking and expansion factors for φ are $0 < t_1 < 1 < t_2$, where $t_1$ and $t_2$ are the shrinking and expansion values, respectively; these parameters should be tuned before starting the algorithm. The effective region of the locally weighted PPCR model is updated based on the following steps:

(1) If $G^{P}_i(u^{k+1*}) > 0$ for some $i = 1, \ldots, n_g$, or $\rho^{k+1} < \eta_2$, then $\varphi := t_1 \times \varphi$.
(2) Else, if $\rho^{k+1} > \eta_3$, then $\varphi := \min\{t_2 \times \varphi, \varphi_{\max}\}$.
(3) Else, $\varphi := \varphi$.

Here $\varphi_{\max}$ is the maximum allowable value of φ. Based on the value of ρ, a decision is made on whether to repeat the optimization or to use the obtained optimal point in the next iteration. The decision criterion is as follows:

(1) If $G^{P}_i(u^{k+1*}) > 0$ for some $i = 1, \ldots, n_g$, or $\rho^{k+1} < \eta_1$, then $u^{k+1} := u^k$.
(2) Else, $u^{k+1} := u^{k+1*}$.

Based on the aforementioned procedure, the number of effective data points used in the optimization is adjusted according to the performance of the previous iteration. The steps of the proposed algorithm are provided in Algorithm 1.

Algorithm 1: Data-driven self-optimization algorithm
Input: historical data (inputs and outputs); an initial (query) point $x_q$; a maximum value $\varphi_{\max}$ and an initial value for φ; non-linearity threshold parameters $0 < \eta_1 \le \eta_2 < \eta_3 \le 1$; expansion and shrinking parameters $t_1$ and $t_2$; the objective function and the $n_g$ constraint functions of the optimization problem.
Repeat, for $k = 0, 1, \ldots$:
1. Build the generalized weighted PPCR model for the given $x_q$ and the historical data.
2. Train the GP regression modifiers based on the weighted PPCR estimates and the real-time measurements of the plant.
3. Solve the modified optimization problem in Eq. (8) and obtain $u^{k+1}$.
4. Calculate the non-linearity index $\rho^{k+1}$.
5. Update the value of φ based on the value of $\rho^{k+1}$.
6. Based on the criterion developed above, decide whether to accept the new operating point or to repeat the optimization in step 3.
7. Set $x_q \leftarrow u^{k+1}$ or $x_q \leftarrow u^k$ according to the previous step's result.

However, one drawback of Algorithm 1 is that the solution obtained from the optimization can get stuck in a local optimum. To circumvent this problem and make the optimization explore, an acquisition function from reinforcement learning and Bayesian optimization is used. del Rio Chanona et al. (2021) proposed using acquisition functions in the objective function; in our proposed method, however, acquisition functions are used in both the objective and the constraint functions. Therefore, the LCB acquisition function is used (Srinivas et al., 2012), and the modified optimization problem is given in Eq. (10), where the variances estimated from the GPR in Eq. (7) push the optimization search toward new regions, allowing it to escape local optima by relaxing the constraints. The negative sign before β is consistent with the optimization formulation, as the goal is to minimize the objective function. Introducing the LCB acquisition function in the constraints helps to relax these functions while solving the optimization problem; if instead the constraints need to be tightened, the UCB acquisition function can be used. With the introduction of acquisition functions, the optimization problem of Eq. (10) is solved in step 3 of Algorithm 1, and the rest of the steps remain the same.
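A minimal numerical sketch of the LCB acquisition used in Eq. (10): subtracting β times the GP's predictive standard deviation from the mean makes uncertain candidates more attractive to a minimizer. The value of β below is an illustrative choice, not one prescribed by the paper.

```python
# Lower confidence bound for a minimization problem: mu - beta * sigma.
# Uncertain regions score lower, encouraging exploration.
import numpy as np

def lcb(mu, sigma, beta=2.0):
    """LCB acquisition; beta trades off exploitation vs. exploration."""
    return mu - beta * sigma

mu = np.array([1.00, 0.95, 1.10])      # GP mean of the (corrected) objective
sigma = np.array([0.02, 0.05, 0.40])   # GP predictive std at three candidates
print(lcb(mu, sigma))                  # [0.96 0.85 0.30]: the uncertain point wins
```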
CASE STUDY

In this section, the performance of the proposed algorithm is demonstrated on a simulation of a deethanizer column in Aspen HYSYS V.9 (Belhocine et al., 2020).

Simulation example: deethanizer column

The deethanizer column is a continuously operating distillation column used for extracting ethane as distillate from a mixed feed containing light hydrocarbons. The deethanizer column is one of the most important units in refineries and is usually located ahead of other units in the plant; a typical deethanizer column is shown in Fig. 1. The objective of the deethanizer column in refinery plants is to separate the C3+ components from the upstream feed.

Fig. 1. Schematic of the deethanizer plant (Belhocine et al., 2020).

The main objective of the optimization is to minimize the operational cost of the unit, which depends on the energy consumption of the reboiler, the condenser, and the pumps. To minimize the energy consumption, the temperature and the flow rate of the input stream need to be regulated. Hence, the objective function is defined as in Eq. (11), where $F_{feed}$ and $F_{bottom}$ are the flow rates of the feed and the bottom product, respectively, $T_{feed}$ is the feed temperature, and $X_{ethane,bottom}$ is the molar fraction of ethane in the bottom product. $Q_{reb}$, $Q_{cond}$, and $Q_{pump}$ are the terms corresponding to the energy consumption of the reboiler, condenser, and pump, respectively, and $f(\cdot) = 0$ is the PPCR model that relates the input and output variables to each other. The first two constraints of Eq. (11), on $T_{feed}$ and $F_{feed}$, are operational constraints, while those on $F_{bottom}$ and $X_{ethane,bottom}$ are planning constraints. In this setting, 15% of the input data and 35% of the output data are missing, and 10% of the data are replaced with outliers to represent possible errors in data collection through sensors. Solving the optimization of Eq. (11) with the optimization module of Aspen HYSYS, the minimum energy consumption is found to be $1.082 \times 10^8$ W, with decision variables $T_{feed} = 16.3$ and $F_{feed} = 10485$. The operating region and the actual solution of the optimization in Eq. (11) are presented in Fig. 2. To demonstrate the efficacy of the proposed method in steering the plant to its optimal point, two different initializations (current operating points, COPs) are considered. The locations of these COPs are shown in Fig. 3(a), and the path and the final solution obtained by the proposed method for each COP are shown in Fig. 3(b).

Fig. 3. Initial points and the solutions obtained by the proposed data-driven self-optimization algorithm.

Based on the results demonstrated in Fig. 3, the proposed algorithm is able to find the optimal path and solution and steer the plant to the desired point.
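To make the structure of such a run concrete, the sketch below solves a toy version of the optimization in Eq. (11) with scipy, using a hypothetical quadratic surrogate in place of the corrected PPCR/GP model and simple box constraints in place of the full operational and planning constraints; the numbers mimic the reported optimum only for illustration.

```python
# Toy stand-in for the modified optimization of Eq. (8)/(11): minimize
# a hypothetical corrected energy surrogate over (T_feed, F_feed) with
# box constraints. This is NOT the HYSYS deethanizer model.
import numpy as np
from scipy.optimize import minimize

def energy(u):
    t, f = u
    # hypothetical surrogate whose minimum sits at the reported optimum
    return 1.082e8 + 2.0e5 * (t - 16.3) ** 2 + 10.0 * (f - 10485.0) ** 2

res = minimize(energy, x0=np.array([25.0, 9000.0]),        # a "current operating point"
               bounds=[(10.0, 40.0), (8000.0, 12000.0)],   # stand-in operational constraints
               method="L-BFGS-B")
print(f"T_feed = {res.x[0]:.1f}, F_feed = {res.x[1]:.0f}, J = {res.fun:.3e} W")
```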
It is well known that the signal-to-noise ratio (SNR) can affect the performance of modeling and optimization. To study the effect of measurement noise on the proposed algorithm, eight different noise levels were considered, using the same two initial operating points; the resulting optimal points are shown in Fig. 4. As can be seen from Figure 4(a), solutions corresponding to noisier data (lower SNR) become trapped in local optima. To obtain a better solution through the discovery of new paths over a wider optimization region, the exploration described in Eq. (10) is applied, with the results shown in Figure 4(b). From the results of Figure 4, it can be seen that including exploration in the optimization, as explained in Eq. (10), helps convergence to the optimal point and avoids entrapment in local optima: in Figure 4(b), more points across the different noise levels lie close to the plant minimum than in Figure 4(a).

CONCLUSION

In this work, a data-driven self-optimization of processes in the presence of model-plant mismatch is proposed to find the plant optimum along with the path to reach it. The objective of the proposed algorithm is to automate the procedure of finding the optimal operating points of a process. It models the plant with a generalized locally weighted PPCR model, and a Gaussian process regression model is utilized to identify the model-plant mismatch. A non-linearity index is proposed to adjust the weighted PPCR model and ensure a sufficient level of accuracy. Finally, to balance exploitation and exploration, an acquisition function is used in the optimization. The performance of the proposed algorithm is demonstrated on a simulated deethanizer column; based on the results obtained from the case study, it can be concluded that the proposed algorithm is able to move the plant towards the plant optimum.
Editors' Summary for the Special Issue: Proceedings of the 24th International Workshop on Matrices and Statistics

Abstract: Let T = {z1, z2, . . . , zn} be a finite multiset of real numbers, where z1 ≤ z2 ≤ · · · ≤ zn. The purpose of this article is to study the different properties of the MIN and MAX matrices of the set T, with min(zi, zj) and max(zi, zj) as their ij entries, respectively. We do this by interpreting these matrices as so-called meet and join matrices and by applying some known results for meet and join matrices. Once the theorems are found with the aid of advanced methods, we also consider whether it would be possible to prove these same results by using elementary matrix methods only. In many cases the answer is positive.

Introduction

Antieigenvalue analysis [9] is an operator trigonometry concerned with those vectors, called antieigenvectors, which are most turned by a matrix or a linear operator A. This is in contrast to conventional eigenvalue analysis, which is concerned with those vectors, called eigenvectors, which are not turned at all by A. Antieigenvalue theory may usefully be thought of as a variational theory, extending the variational Rayleigh-Ritz theory that characterizes eigenvectors to an enlarged theory that also characterizes antieigenvectors. Two key entities of the antieigenvalue theory are the first antieigenvalue $\mu_1$ of (1.1) and the corresponding sine quantity $\nu_1$ of (1.2), where $0 < \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ are the eigenvalues of A, and where $x_1$ is any norm-one eigenvector from the $\lambda_1$-eigenspace and $x_n$ is any norm-one eigenvector from the $\lambda_n$-eigenspace. The antieigenvectors $x_\pm$ in (1.4) have also been normalized to be of norm one. For an $n \times n$ symmetric positive definite A, the expressions in (1.1) and (1.2) have the useful explicit valuations

$$\mu_1 = \frac{2\sqrt{\lambda_1\lambda_n}}{\lambda_1+\lambda_n}, \qquad \nu_1 = \frac{\lambda_n-\lambda_1}{\lambda_n+\lambda_1}. \qquad (1.5)$$

For further elaboration of the general antieigenvalue theory and its applications to numerical analysis, wavelets, statistics, quantum mechanics, finance, and optimization, I refer to [9]. In Sections 2, 3, and 4, respectively, I will present three new domains of application for antieigenvalue analysis: continuum mechanics, economics, and number theory. New results and insights for each will be obtained. Conclusions and comments are given in Section 5. This paper is an elaboration of my lecture and extended abstract [12] at the 24th IWMS in Haikou, Hainan, China.
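The short sketch below computes the explicit quantities of (1.5) for an arbitrary symmetric positive definite matrix, interpreting µ1 as the cosine and ν1 as the sine of the maximum turning angle φ(A); the example matrix is just an illustrative choice.

```python
# Basic antieigenvalue quantities of (1.5) for an SPD matrix:
# mu_1 = cos(phi(A)) and nu_1 = sin(phi(A)) from the extreme eigenvalues.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                 # symmetric positive definite example
lam = np.linalg.eigvalsh(A)                # eigenvalues in ascending order
l1, ln = lam[0], lam[-1]

mu1 = 2.0 * np.sqrt(l1 * ln) / (l1 + ln)   # first antieigenvalue, cos(phi(A))
nu1 = (ln - l1) / (ln + l1)                # sin(phi(A))
phi = np.degrees(np.arccos(mu1))           # maximum turning angle of A
print(f"mu1 = {mu1:.4f}, nu1 = {nu1:.4f}, phi = {phi:.2f} degrees")
assert np.isclose(mu1**2 + nu1**2, 1.0)    # mu_1^2 + nu_1^2 = 1 identically
```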
Continuum Mechanics

My discussion here leans heavily on the recent paper [7], which establishes a continuum-mechanical model for the stability of granular material heaps and explores the notion of the (maximum) angle of repose for granular materials. On the other hand, my theory of antieigenvalues [9] has as one of its essential ingredients the notion of the (maximum) operator turning angle. Here is how to connect the two theories. Following [7], the equilibrium equations for a granular pile of local slope θ are given in (2.1). The stress tensor Σ can be written in the singular value decomposition of (2.2), where $\sigma_1 \ge \sigma_2 > 0$ are the principal stresses and ψ gives the principal directions. By considering a plane within the material and the normal and tangential stresses upon it, in terms of the coefficient of friction of the material and a corresponding angle δ of internal friction, it is deduced in [7] that the largest sustainable angle of repose θ is given by (2.3). Some assumptions have of course been made in this modeling of the discrete by the continuous; among them are a linear dependence on the vertical z direction and a stress-free condition at the pile's surface at z = 0.

Let us now recast this continuum-mechanical model for stable granular material piles into my antieigenvalue theory. The key is to remember that the unique ε for which the minimum in (1.2) is attained is known [9] to be $\varepsilon_m = 2/(\sigma_1+\sigma_2)$ for a 2×2 matrix with singular values σ1 and σ2. Straightforward calculations from (2.2) and (2.3) then confirm that my maximum turning angle of the stress tensor matrix Σ gives exactly the largest sustainable angle of repose of the granular material heap. Many physical examples, and the history of the appropriateness of such models, may be found in [7]. Continuing, use of the law of sliding friction and a consideration of normal and tangential stresses, plus some nice geometry (see the figures in [7]), leads to the conclusion that for the granular pile to stand up it must obey the constraint (2.5), stated there in two equivalent forms. Here δ is the material angle of internal friction defined by µ = tan δ, where µ is the coefficient of friction of the material. Note that this µ follows the notation of [7] and is not my first antieigenvalue µ1; however, as a side result of the following analysis, it happens that the two are related, in terms of the stress-operator turning angle, by µ = µ1 / sin φ(Σ). Integrating the equilibrium equations (2.1), assuming a stress-free boundary condition at the base of the pile and a linearized vertical dependency given by σxx = λρgz cos θ, where λ is a linearization proportionality constant, one arrives [7] at the interesting expression (2.6). To find the steepest angle θ that this analysis permits, the right-hand side of (2.6) is minimized with respect to λ, giving the extremum condition (2.7). In particular, this means that the critical angle of repose θ for the granular pile is the angle δ of material internal friction. This equality in the second equivalent form of (2.5), namely (2.8), is a special instance of a rather general working proposition which I obtained in [8]; see also [9, pp. 55-56]. That general rule — or theorem, if you like — is that if you encounter some entity in any theory which happens to be related to the standard matrix condition number κ = σ1/σ2 according to (2.9), then there obtains a three-way connection between your theory, the Kantorovich-Wielandt theory treated in [8], and my operator trigonometry [9]. We may apply that working rule here. To do so, note that since the left side of (2.8) is exactly the condition number κ of the stress tensor Σ, we may solve for sin δ to obtain (2.10). Thus there obtains the three-way relationship

$$\sin\delta = \cos\beta(\Sigma^{1/2}) = \sin\varphi(\Sigma). \qquad (2.11)$$

In (2.11) I have used the notation β(A) for the Kantorovich-Wielandt condition-number angle, generally defined by cot(β/2) = κ. Note that, in particular, just from (2.8) and this working rule of [8] we could have obtained the equality, derived above, of the granular angle of friction δ and our maximum turning angle φ(Σ). Next, let us look at the law of sliding friction utilized in [7]. The requirement there is (2.12), where µ = tan δ is the material internal friction constant. Using some results of [7], and with some obvious algebra, we may write (2.12) as (2.13). Here α is the normal angle to an immersed plane; see [7, Fig. 2.2].
Using the above results on equality of angles, (2.13) becomes (2.14). Assuming the angles are all properly acute, so that we can drop the absolute value bars, and canceling out sin phi(Sigma) and using the conventional trigonometric identity for the sine of a difference of angles, one arrives at the stability criterion (2.15). Finally, let us note that from (1.4) and (2.2) the antieigenvectors of the symmetric stress matrix Sigma are

x+- = +-(sigma_1/(sigma_1 + sigma_2))^(1/2) v_2 + (sigma_2/(sigma_1 + sigma_2))^(1/2) v_1,

where v_1 and v_2 are norm-one eigenvectors corresponding to sigma_1 and sigma_2, respectively. Thus the two maximally turnable vectors nicely capture the principal stresses along with their principal directions. Let us summarize our results as follows. In particular, the maximum value in (2.15) is obtained when the angle 2(alpha - psi) - phi(Sigma) = pi/2. That situation is represented by the vertical dotted line in Fig. 2.2 in [7]. As the angles phi(Sigma) and psi are predetermined by the material stress tensor, this criticality in (2.15) occurs only when the immersed plane of normal angle alpha has such angle alpha = pi/4 + psi + phi(Sigma)/2. We could summarize this analysis by saying that this limiting normal angle depends on the eigenvectors (through psi) and now on the antieigenvectors (through phi(Sigma)).

Economics

I will go into less detail in this section because more detail may be obtained from the forthcoming paper [11]. The main message is that the renowned Sharpe ratio of the Capital Asset Pricing Model (CAPM) may be seen in terms of my antieigenvalue theory. This observation was announced in the book [9, p. 182]. The further details are worked out in [11]. Briefly, the fundamental link between the investment theory and my antieigenvalue theory lies in the fact [9, p. 188] that the first antieigenvalue mu_1 in (1.5) may be seen as a ratio of means:

mu_1 = geometric mean / arithmetic mean.  (3.1)

I refer you to the large standard book [14] for the uses of Sharpe's ratio in investment theory. Also, I will specifically refer to the excellent book [1] for aspects of the CAPM as it is used in portfolio design theory and in high-frequency trading. The Capital Asset Pricing Model assumes the Efficient Market Hypothesis and then tells you to measure the return-to-risk of your portfolio against the market. From the assumption that the full market has optimized the return-to-risk, your Sharpe ratio

S = E[r]/sigma[r],

where E[r] is the average return over a number of periods and sigma[r] is the corresponding standard deviation, will not be greater than that of the whole (e.g., think indexing) market's Sharpe ratio. See especially [1, Fig. 5.2, p. 55] to picture Sharpe ratios as mean-variance slopes. Here I am just dropping the risk-free return rate R_f from the numerators, as it is effectively zero these days anyway. Suppose now we look at the last two years of annualized returns r_1 and r_2. We may form the usual (arithmetic) Sharpe ratio S_AM = (r_1 + r_2)/(2*sigma) and also a (geometric) Sharpe ratio S_GM = sqrt(r_1*r_2)/sigma, and upon dividing the latter by the former we arrive at

S_GM/S_AM = 2*sqrt(r_1*r_2)/(r_1 + r_2),

which is my first antieigenvalue mu_1 as seen from (1.5). Comparison to actual market data [4] favors [11] the use of different variances in S_AM and S_GM to further sharpen my new return-to-risk trigonometric investment theory. In particular, a new general concept of growth-to-risk angles is proposed [11]. Going beyond [11], as noted there my use of geometric mean denominators relates to currently important economic issues concerning realized volatilities [2]. Perhaps such could be pursued in a later investigation. Obviously one could set up a more extensive operator-trigonometric theory based upon a whole matrix of annualized returns r_1, r_2, ..., r_n, from which one could compare financial rewards linked directly to the higher antieigenvalues and corresponding higher antieigenvectors and internal critical angles of the general antieigenvalue theory [9].
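The identity S_GM/S_AM = mu_1 can be checked in a few lines; the returns and volatility below are hypothetical values chosen only for illustration:

import numpy as np

r1, r2 = 0.08, 0.18   # hypothetical annualized returns for two periods
sigma = 0.12          # common standard deviation assumed for both ratios

S_am = (r1 + r2) / (2.0 * sigma)          # arithmetic-mean Sharpe ratio
S_gm = np.sqrt(r1 * r2) / sigma           # geometric-mean Sharpe ratio

ratio = S_gm / S_am                       # = 2*sqrt(r1*r2)/(r1+r2)
mu1 = 2.0 * np.sqrt(r1 * r2) / (r1 + r2)  # first antieigenvalue form (1.5)
print(ratio, mu1)                         # identical by construction

Note that the common sigma cancels in the ratio, which is why the comparison reduces purely to the geometric-to-arithmetic mean ratio (3.1).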
Separately from [11], in three recent papers [5,6,10] we have developed a Time Operator theory for financial markets. Time Operators originated in quantum mechanics and have been adapted to statistical mechanics and to stationary stochastic processes. Essential to our application of Time Operators to financial markets are variance estimates. Our theory is worked out for non-stationary Bernoulli processes in [5]. Among five well-known volatility estimators discussed in [16], we found the most suitable for our analysis of the Greek financial market during elections to be the Rogers-Satchell estimator. See [5] and [16] for further details. In [6] we then extend our model to Markov processes, with specific examples taken from US GNP data and Dow-Jones closing prices. As all symmetric and Hermitian matrices have a natural spectral theory and therefore now a natural antispectral theory of critical turning angles, such could surely be worked out for Time Operator theory as it is now applied to finance and economics. This has already been done for wavelets [9, Chapter 5]. The Time Operator Age of a process is a new statistical index which assesses the average level of innovations during the observation period. As such, it is a new measure of the complexity of the market.

Number Theory

Now some new connections of the antieigenvalue theory to number theory, which I only recently discovered. I was quite surprised, as the origins of the two theories are completely different. Given two arbitrary relatively prime positive integers m and n, with m > n, one of them being even, the other odd, then the numbers

a = 2mn, b = m^2 - n^2, c = m^2 + n^2  (4.1)

form a primitive Pythagorean triple:

a^2 + b^2 = c^2.  (4.2)

This sufficient condition is also necessary. For more details see [13]. This construction and characterization of Pythagorean triples is often called Euclid's Formula. To connect that theory to my antieigenvalue theory and its matrix trigonometry, I may now form a matrix A_(m,n) with eigenvalues m^2 and n^2 (and its similarity class of matrices with the same eigenvalues). We may propose to call these matrices A_(m,n) Pythagorean Triple Matrices. Their maximum turning angles may be called special Pythagorean turning angles phi_(m,n)(A), with, from (1.5), cos phi_(m,n)(A) = 2mn/(m^2 + n^2) = a/c and sin phi_(m,n)(A) = (m^2 - n^2)/(m^2 + n^2) = b/c. Their corresponding normalized Pythagorean antieigenvectors are, from (1.4),

x+- = (+-m x_1 + n x_n)/sqrt(m^2 + n^2).

We know of course there are an infinite number of these Pythagorean angles, which are now embedded within my antieigenvalue operator trigonometry. One could build a whole matrix theory of them. This new connection of my antieigenvalue analysis to the Pythagorean triple number theory may be seen to have other interesting manifestations. Here is another one, couched in the terminology of algebraic geometry. Let x = (m/n, 0) be a point on the x-axis. Its stereographic projection onto the unit circle becomes, now seen operator-theoretically, the point (cos phi(A_(m,n)), sin phi(A_(m,n))). The stereographic point of view comes from a treatment of spinors and twistors [15]. Here is a precise example. Let m = 2 and n = 1. Then taking a = m^2 - n^2 = 3 as the shorter side, b = 2mn = 4 as the longer side, and thus hypotenuse c = m^2 + n^2 = 5, the Pythagorean triple matrix has from (1.4) the antieigenvector (we ignore the other one, x-)

x+ = (2 x_1 + x_n)/sqrt(5), in complex form (m + in)^2 = 3 + 4i.  (4.11)

If we normalize (4.11) by dividing by |m + in|^2, we obtain the complex exponential expression e^(i phi(A_(m,n))) = sin phi(A_(m,n)) + i cos phi(A_(m,n)), (4.12), but with the roles of the cosine and sine switched from the usual complex analysis.
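A short sketch tabulating Euclid triples and the associated Pythagorean turning angles; the (m, n) pairs below are illustrative choices satisfying the coprimality and parity conditions:

import numpy as np

def pythagorean_triple(m, n):
    """Euclid's formula (4.1): m > n >= 1, coprime, opposite parity."""
    return 2 * m * n, m * m - n * n, m * m + n * n

for m, n in [(2, 1), (3, 2), (4, 1), (4, 3)]:
    a, b, c = pythagorean_triple(m, n)
    assert a * a + b * b == c * c            # the triple identity (4.2)
    # For a matrix with eigenvalues n^2 and m^2, (1.5) gives:
    cos_phi = 2 * m * n / (m * m + n * n)        # = a / c
    sin_phi = (m * m - n * n) / (m * m + n * n)  # = b / c
    phi = np.degrees(np.arcsin(sin_phi))
    print(f"(m,n)=({m},{n}) triple=({a},{b},{c}) turning angle={phi:.2f} deg")

Each turning angle is rational in its sine and cosine, exactly because the triple entries divide out the hypotenuse c.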
And of course (4.12) is not an analytic function, because those are angles from the operator trigonometry. We conclude this section with a fourth new connection. The Archimedes method for calculating pi is to inscribe and circumscribe the circle with regular n-gons and go to the limit; e.g., see [3, p. 338]. To be specific, consider a circle of radius 1/2 and let a_n be the length of the circumscribing 6*2^n-gon and b_n be the length of the corresponding inscribing 6*2^n-gon. Then one has the well-known iterative relations (4.13) of [3]. We immediately may now form the ratios

b_(n+1)/a_(n+1) = 2*sqrt(a_n b_n)/(a_n + b_n) = cos phi(A_n)  (4.14)

as the first antieigenvalues of the matrix sequence A_n, where A_n may be taken to be any symmetric positive definite matrix with eigenvalues a_n and b_n. We may normalize and recognize that the basic AGM process, as it's called in [3], now corresponds to an operator trigonometric process M(1, cos phi(A_n)) in which the operators A_n slowly "untwist" themselves as their first antieigenvalues mu_1^n = cos phi(A_n) converge to 1 and their operator maximum turning angles converge to zero (a small numerical illustration is given at the end of this paper).

Conclusions and Comments

Beyond the continuum model for granular materials shown in Section 2 here to be fundamentally operator-trigonometric, I would expect that the antieigenvalue analysis may be profitably applied to a wide range of stress-strain tensors as they occur within continuum mechanics. If one takes a strength-of-materials course (I did, long ago) or a more general constitutive equations course, one soon perceives that a large number of the partial differential equations describing such elastic or fluid phenomena are derived by starting with a small rectangular material element and "pushing" it to a parallelepiped. The fundamental connection of my antieigenvalue analysis to the Sharpe ratio of investment theory exposed in Section 3 came to me about twenty years ago, but I did not work out the particulars until I decided to present those results at the 22nd IWMS meeting in Toronto in 2013. Fortunately the blind refereeing process for [11] actually brought a very positive review from one of the nation's experts on high-frequency trading. I am hopeful that my advocacy of new reward-to-risk growth angles might in the future lead to fruitful actual implementations within financial markets. The link to Pythagorean triples and number theory given here for the first time was quite unexpected. Let me remind and emphasize that when I first originated the antieigenvalue theory almost fifty years ago, I was coming from semigroup perturbation theory, which had led me to a question of when an operator product BA would remain (real) positive, given positive A and B.
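Returning to the Archimedes construction of Section 4, the sketch below runs the classical harmonic-geometric recurrences for the polygon perimeters (assumed here; the precise form of the relations (4.13) in [3] may differ in detail) and tracks cos phi(A_n) = 2*sqrt(a_n b_n)/(a_n + b_n) along the iteration:

import numpy as np

a = 2.0 * np.sqrt(3.0)   # circumscribed hexagon perimeter, circle of radius 1/2
b = 3.0                  # inscribed hexagon perimeter

for n in range(10):
    mu1 = 2.0 * np.sqrt(a * b) / (a + b)   # cos(phi(A_n)), eigenvalues a_n, b_n
    print(f"n={n}: a={a:.10f}  b={b:.10f}  cos(phi)={mu1:.12f}")
    a = 2.0 * a * b / (a + b)              # harmonic mean: new circumscribed perimeter
    b = np.sqrt(a * b)                     # geometric mean: new inscribed perimeter

# Both perimeters converge to pi while cos(phi(A_n)) -> 1: the A_n "untwist"
# as their maximum turning angles shrink to zero.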
2017-07-15T07:33:40.103Z
2016-01-12T00:00:00.000
{ "year": 2016, "sha1": "2d1cc722a4df0deaec6666209b06335f1898cdba", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1515/spma-2016-0031", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0248b2a90e683616468c338f7c52106a0ea9f1ed", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
5003481
pes2o/s2orc
v3-fos-license
Association of Antituberculosis Treatment and Lower Risk of Hyperlipidemia in Taiwanese Patients: A Population-Based Case–Control Study

The association between anti-tuberculosis (TB) treatments and the risk of developing hyperlipidemia remains unclear. Data were obtained from the Longitudinal Health Insurance Database 2000 (LHID2000). The case group included patients newly diagnosed with hyperlipidemia (n=16,054) between 2006 and 2011, selected from the LHID2000. A four-fold number of hyperlipidemia-free participants (n=64,216) were matched with the case patients by age, sex, and index year to create the control group. Univariable and multivariable unconditional logistic regression analyses were conducted to estimate the odds ratios (ORs) and 95% confidence intervals (CIs) for the association between hyperlipidemia and anti-TB medication use. Patients who used isoniazid (INH) had a significantly decreased risk of hyperlipidemia (OR=0.71, 95% CI=0.57-0.88). After adjustment for age, sex, urbanization level, and income, as well as for ethambutol, pyrazinamide, streptomycin, and anti-human immunodeficiency virus (HIV) drug medications, a dose-dependent risk of hyperlipidemia was observed in the INH, rifampin (RIF), and combined INH and RIF groups, with the ORs progressively decreasing as the cumulative dose increased. In the Taiwanese patients who used anti-TB medications, INH and RIF use was associated with a decreased risk of hyperlipidemia.

Introduction

INH and RIF are first-line anti-TB drugs (5). They are also used in combination with other medications to treat coinfections (6). However, a variety of adverse reactions to these drugs have been reported. The most well-known toxic effect is hepatotoxicity (7). Using these drugs in combination may increase the risk of hepatocellular carcinoma (HCC) in patients with liver cirrhosis (8). Combinatorial therapy with INH and RIF induces reactive oxygen species (ROS) overproduction and hepatocyte damage (9). Chronic anti-TB treatment may thus intensify the imbalance of redox status and promote oxidative stress because of lipid deposition (10,11). However, a comprehensive literature review suggests that information pertaining to blood lipids and these anti-TB medications remains limited. We thus conducted a large, nationwide case-control study using data from the Longitudinal Health Insurance Database (LHID) maintained by the National Health Research Institutes (NHRI) of Taiwan to assess the risk of hyperlipidemia associated with using INH and RIF among Taiwanese patients.

Materials and Methods

Data source. Datasets were obtained from the reimbursement database of the Taiwan National Health Insurance (NHI) program, a single-payer universal insurance system (8). This insurance system covers more than 99% of the 23.74 million residents of Taiwan (http://www.nhi.gov.tw/english/index.aspx). Claims data from the LHID, which was established by the NHRI and consists of the claims data of 1,000,000 patients randomly sampled from the population of all NHI beneficiaries, were used. The distributions of sex, age, and health care costs are not significantly different between the cohorts in the LHID and all insurance enrollees, as reported by the NHRI. Data files were anonymized and linked with encrypted identification numbers to protect the privacy of the patients. Diagnostic codes are in the format of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM).
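To illustrate the case-definition step, the following Python/pandas sketch selects newly diagnosed hyperlipidemia patients by ICD-9-CM code. The file name and column names are hypothetical stand-ins; the study's actual LHID extraction tooling is not described:

import pandas as pd

# Hypothetical claims table with columns: patient_id, icd9, visit_date.
claims = pd.read_csv("lhid2000_claims.csv", parse_dates=["visit_date"])

HYPERLIPIDEMIA = ("272.0", "272.1", "272.2", "272.3", "272.4")
hl = claims[claims["icd9"].astype(str).str.startswith(HYPERLIPIDEMIA)]

# First (index) diagnosis per patient, restricted to 2006-2011 as in the study.
index_dates = hl.groupby("patient_id")["visit_date"].min().rename("index_date")
cases = index_dates[(index_dates >= "2006-01-01") & (index_dates <= "2011-12-31")]
print(f"{len(cases)} newly diagnosed hyperlipidemia cases")

The earliest qualifying claim per patient serves as the index date, mirroring the definition given in the Methods.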
Ethics statement. The NHIRD encrypts patients' personal information to protect privacy (indexed fields include patient name, personal ID, and residential address) and provides researchers with anonymous identification numbers associated with relevant claims information, including patients' sex, dates of birth, medical services utilized, and prescriptions. Patient consent is not required for accessing the NHIRD, as described in detail previously (8). This study was approved by the Institutional Review Board of China Medical University Hospital (CMUH104-REC2-115). Our IRB specifically waived the consent requirement.

Sampled patients. For this retrospective case-control study, patients newly diagnosed with hyperlipidemia (ICD-9-CM code 272) between January 1, 2006 and December 31, 2011 were identified in the LHID as the case group, together with a comparison group of participants without hyperlipidemia. Hyperlipidemia involves abnormally elevated levels of one or more lipids and/or lipoproteins in the blood (e.g., triglyceride (TG) or cholesterol) and pathological lipid qualities such as elevated levels of low-density lipoprotein (LDL), which is the most common form of dyslipidemia. In this study, we included patients with a diagnosis of hyperlipidemia (ICD-9-CM codes 272.0, 272.1, 272.2, 272.3, and 272.4). The date of hyperlipidemia diagnosis was defined as the index date. We selected the non-hyperlipidemia participants from the LHID by 1:4 matching with the hyperlipidemia patients on a propensity score (12). The propensity score was calculated by a logistic regression estimating the probability of disease status given the baseline variables, including gender, age, urbanization level, income, diabetes, hypertension, obesity, coronary artery disease, cirrhosis, and HIV infection. Figure 1 shows a flow chart of the selection procedure of the study participants. The level of urbanization was originally divided into seven levels based on the NHRI report (Level 1 being the highest level of urbanization and Level 7 the lowest); because only a few people lived in Levels 5-7, the least urbanized populations were grouped together. Cities, districts, and townships within which subjects were registered for insurance purposes were thereby grouped into four levels of urbanization based on population density (people/km2). Level 1 indicates the most urbanized area, and Level 4 indicates the least urbanized.

Anti-tuberculosis medication exposure and comorbidities. TB diagnoses were peer reviewed, and patients who had used INH, RIF, ethambutol, pyrazinamide, or streptomycin before the index date were classified as exposed to anti-TB medication. Medications were separated into six classes: INH, RIF, both INH and RIF, ethambutol, pyrazinamide, and streptomycin. According to the total treatment duration (in days) and the quantity of anti-TB medication, the cumulative dose of each type of anti-TB medication was calculated for each user. Several well-known risk factors for hyperlipidemia were also selected as comorbidities, namely diabetes (ICD-9-CM 250), hypertension (ICD-9-CM 401-405), obesity (ICD-9-CM 278), coronary artery disease (ICD-9-CM 410-414), cirrhosis (ICD-9-CM 571), and HIV infection (ICD-9-CM 795.71, V08, 042, 079.53). Owing to restrictions in the database on the set of anti-HIV drugs used in Taiwan, the use of only a few anti-HIV drugs (lamivudine, tenofovir disoproxil fumarate, didanosine, ritonavir, and saquinavir) was compared between the patients with hyperlipidemia and the controls.
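As an illustration of the propensity-score step, here is a minimal Python sketch. The covariate names and the greedy nearest-neighbour matching are illustrative assumptions, not necessarily the exact algorithm of Ref. (12):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# df: one row per subject; 'case' = 1 for hyperlipidemia, 0 otherwise.
# Covariate names are hypothetical stand-ins for the study's baseline variables.
covars = ["age", "sex", "urban", "income", "diabetes", "hypertension",
          "obesity", "cad", "cirrhosis", "hiv"]
df = pd.read_csv("subjects.csv")  # hypothetical file of baseline data

ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["case"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

# Greedy 1:4 nearest-neighbour matching on the propensity score.
controls = df[df["case"] == 0].reset_index(drop=True)
used = np.zeros(len(controls), dtype=bool)
matched = []
for _, row in df[df["case"] == 1].iterrows():
    dist = np.abs(controls["ps"].to_numpy() - row["ps"])
    dist[used] = np.inf                  # each control used at most once
    picks = np.argsort(dist)[:4]         # four nearest unused controls
    used[picks] = True
    matched.extend(picks.tolist())

print(f"matched {len(matched)} controls to {int(df['case'].sum())} cases")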
In addition, related studies that have applied the same diagnosis method and ICD-9 coding criteria have been published (13,14).

Statistical analysis. The proportional distributions of sex, age (20-49, 50-64, 65-74, >=75 years), urbanization level (Level 1 the highest, Level 4 the lowest), and income (<15,840, 15,840-25,200, and >=25,200 NTD), as well as the anti-TB and anti-HIV drug medication use and comorbidities of the treated group, were compared with those of the control group. The standardized difference was used to quantify differences in means or prevalences between the hyperlipidemia and non-hyperlipidemia groups for continuous or categorical variables, respectively (15). A standardized mean difference of 0.01 or less indicates a negligible difference between the means of the hyperlipidemia and non-hyperlipidemia groups. Univariable and multivariable unconditional logistic regression analyses were conducted to estimate the odds ratios (ORs) and 95% confidence intervals (CIs) for the association between hyperlipidemia and anti-TB medication use. Adjusted ORs were also determined after we controlled for age, sex, anti-HIV drug use, and comorbidities, namely diabetes, hypertension, obesity, coronary artery disease, cirrhosis, and HIV infection. In additional analyses, the effects of the anti-TB medication cumulative duration and dosage were also estimated using logistic regression. All analyses were conducted using SAS statistical software (Version 9.4 for Windows; SAS Institute, Inc., Cary, NC, USA), and all statistical tests were performed at a two-tailed significance level of 0.05.

Informed consent. The NHIRD encrypts patient personal information to protect privacy and provides researchers with anonymous identification numbers associated with relevant claims information, including sex, date of birth, medical services received, and prescriptions. Therefore, patient consent is not required to access the NHIRD. This study was approved to fulfill the condition for exemption by the Institutional Review Board (IRB) of China Medical University (CMUH104-REC2-115). The IRB also specifically waived the consent requirement.

Results

Characteristics of the included patients. The case group comprised 16,054 patients with newly diagnosed hyperlipidemia, and the control group comprised 64,216 people without hyperlipidemia (Table I). Male patients accounted for 49.7% of the case group versus 51.1% of the control group, and most patients were younger than 64 years of age (77% vs. 79.3%). Compared with the control group, patients with hyperlipidemia tended to live in urban areas (58.5% vs. 60.5% at urbanization levels 1 and 2) and to have incomes of between 15,840-25,200 NTD (47.6% vs. 45.8%). The mean ages of the hyperlipidemia patients and controls were 53.5 (+/-14.3) and 53.0 (+/-13.9) years, respectively. INH use was significantly lower in the hyperlipidemia group than in the control group (0.59% vs. 0.76%). Compared with the control group, the hyperlipidemia group was more likely to have diabetes, hypertension, coronary artery disease, cirrhosis, and HIV infection.

Association of hyperlipidemia with anti-tuberculosis use and covariates. Table II shows the crude and adjusted ORs of hyperlipidemia by anti-TB medication and comorbidities. Univariable unconditional logistic analysis revealed that, compared with non-use of INH, the OR of hyperlipidemia was 0.77 for INH use (95% CI=0.61-0.96).
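The standardized difference used above has a standard closed form for continuous variables; the sketch below applies it to simulated ages drawn from the reported group means and SDs (the raw data are not public, so the simulation is illustrative only):

import numpy as np

def standardized_difference(x_case, x_ctrl):
    """Standardized mean difference between two groups (continuous variable)."""
    m1, m0 = np.mean(x_case), np.mean(x_ctrl)
    s1, s0 = np.var(x_case, ddof=1), np.var(x_ctrl, ddof=1)
    return (m1 - m0) / np.sqrt((s1 + s0) / 2.0)

# Example with the reported mean ages: 53.5 (SD 14.3) vs. 53.0 (SD 13.9).
rng = np.random.default_rng(1)
age_case = rng.normal(53.5, 14.3, 16054)
age_ctrl = rng.normal(53.0, 13.9, 64216)
print(round(standardized_difference(age_case, age_ctrl), 3))  # about 0.035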
After adjustment for (1) age, sex, urbanization level, and income, (2) anti-HIV drug medications, and (3) the comorbidities of diabetes, hypertension, coronary artery disease, and cirrhosis, INH use (OR=0.71, 95% CI=0.57-0.88) remained significantly associated with a decreased risk of hyperlipidemia. The adjusted OR of hyperlipidemia increased as income decreased and was 1.23 for those with the lowest income compared with those with the highest income (95% CI=1.17-1.29). Patients living in less urbanized areas had a higher adjusted OR of hyperlipidemia compared with those living in the most urbanized areas. Anti-HIV drug use (OR=0.39, 95% CI=0.21-0.72) was significantly associated with a decreased risk of hyperlipidemia. The comorbidities of coronary artery disease and cirrhosis were associated with a significantly increased risk of hyperlipidemia.

Association of hyperlipidemia with cumulative duration and dosage of anti-tuberculosis medication use. We estimated the risk of hyperlipidemia according to the cumulative duration and dosage of anti-TB medication use (Table III).

Discussion

We conducted a population-based case-control study using data from the Taiwan NHI database to investigate the use of the anti-TB medications INH and RIF and their correlation with hyperlipidemia, with adjustments for hyperlipidemia-related comorbidities. More than 99% of Taiwan residents' complete medical records have been included in the NHI database since 1996 (16). We collected a large sample and detected statistical differences between anti-TB medication users and non-users by using this comprehensive health surveillance research system. A significant 29% decrease in hyperlipidemia incidence was observed in the INH users compared with the non-users. In addition, these significant decreases were more apparent in those who had used INH or RIF for a cumulative duration of >6 months (adjusted ORs of 0.58 and 0.68 compared with the non-users, respectively). A dose-dependent decrease in the risk of hyperlipidemia was observed in the INH and RIF users, with the ORs progressively decreasing as the cumulative dose increased. These results suggest that INH and RIF users have a reduced risk of hyperlipidemia. In Taiwan, approximately 45% of prescriptions do not follow standard treatment protocols, and second-line drugs are prescribed (5). Because of medical treatment variations among individuals, clinical physicians may integrate their clinical experience, expertise, professional knowledge, and patient conditions to provide suitable treatment. INH, RIF, or a combination of INH and RIF (Rifinah) are the most commonly used anti-TB chemotherapeutic treatments (5). INH and RIF are the most effective drugs for preventing drug resistance, followed by ethambutol and pyrazinamide. Long-term use or higher doses of INH and RIF have been considered for treating multidrug-resistant (MDR) TB and extensively drug-resistant (XDR) TB. According to the WHO, 3.7% of new TB patients and 20% of previously treated TB patients are estimated to have MDR-TB (1).
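The crude OR quoted above can be reproduced approximately from the reported exposure percentages. In the sketch below the cell counts are rounded reconstructions from those percentages, not the study's exact counts:

import numpy as np

def odds_ratio_ci(exposed_cases, unexposed_cases, exposed_ctrls, unexposed_ctrls):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table."""
    a, b, c, d = exposed_cases, unexposed_cases, exposed_ctrls, unexposed_ctrls
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, lo, hi

# INH exposure: 0.59% of 16,054 cases vs. 0.76% of 64,216 controls.
a = round(0.0059 * 16054)   # exposed cases (~95)
c = round(0.0076 * 64216)   # exposed controls (~488)
print(odds_ratio_ci(a, 16054 - a, c, 64216 - c))  # OR close to the reported 0.77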
However, several reports have found that these drugs may be associated with side effects of liver function abnormality or even hepatocellular carcinoma (HCC) in patients with liver cirrhosis (7,8). In most cases, acute liver failure associated with INH and RIF treatment occurs within 10 days of the beginning of therapy (7). The hepatotoxicity of INH is caused by its metabolism to acetylhydrazine and hydrazine by cytochrome P450 (CYP450) (17). The toxicity is enhanced by RIF, a strong CYP450 inducer, particularly of CYP3A4, when RIF is coadministered. Hepatic cell death may be caused by the depletion of glutathione through the high reactivity of hydrazine and can lead to chronic liver disease or even HCC. However, few cases of liver failure induced by treatment with INH and RIF have been reported.

The liver is critical in lipid metabolism. It absorbs serum free fatty acids and manufactures, stores, and transports lipid metabolites (18). Lipids, mainly TGs, occur in hepatocytes and represent a hallmark feature of the pathogenesis of non-alcoholic fatty liver disease (18). Clinically, hyperlipidemia comprises a combination of lipid abnormalities, including high serum LDL-cholesterol and TG levels and low serum high-density lipoprotein-cholesterol levels. Hyperlipidemia is crucial in the progression of cardiovascular diseases (CVDs), including coronary artery disease, atherosclerosis, deep vein thrombosis, and cerebrovascular disease (19). Currently, the association among lipids, anti-TB medications, and CVD risk remains unclear. Several reports have indicated that chronic liver disease and liver cancer may affect blood fat levels (20,21). One study found that, in 40 liver cancer cases, serum TG levels were decreased by 28.8%. This phenomenon may be explained by the relationship between cytokines and lipids (22). Tumor cells produce cytokines, such as tumor necrosis factor alpha, interleukins 1 and 6, and interferon alpha (23), which control lipid storage (24). In addition, hepatic-associated diseases lead to blockage of cholesterol esterification and evacuation, thus changing plasma cholesterol levels (25).

Animal studies have shown that rats exposed to INH and RIF combination treatment at 50 mg/kg body weight/day of each drug, a higher dosage than the therapeutic dose for human use, had significantly increased liver and serum cholesterol levels compared with an untreated group. In addition, liver TGs, but not serum TGs, increased significantly in the treated group (26). Moreover, serum phospholipids were increased only in the treated group, and the liver phospholipid content significantly decreased. This may originate from decreases in liver lipoprotein synthesis, resulting in impaired lipid mobilization and thus the accumulation of TG in the liver. Some reports have shown higher liver and serum cholesterol levels in subjects exposed to long-term INH and RIF combination treatment; these increased levels may be attributable to the inhibition of bile secretion, leading to cholesterol accumulation in the liver (27). Additionally, RIF has been reported to cause hyperlipidemia in TB patients (28). These studies have thus revealed that animals treated with a combination of INH and RIF have lipid profile alterations, and that liver lipid and serum lipid levels are not fully interchangeable.
Activation of the human pregnane X receptor (PXR; NR1I2) induces human CYP3A4 in drug metabolism and inhibits cholesterol 7alpha-hydroxylase (CYP7A1) in bile acid synthesis (29). RIF, a strong PXR inducer, inhibits bile acid synthesis and has been used for the treatment of cholestatic disease (30). Activation of PXR by RIF accelerates PXR interaction with hepatocyte nuclear factor 4alpha (HNF4alpha, NR2A1) and blocks the strong interaction of a positive controller, peroxisome proliferator-activated receptor coactivator 1alpha, with HNF4alpha (29). Consequently, RIF inhibits CYP7A1 gene transcription and thereby bile acid synthesis, and thus prevents cholestasis.

INH is the preferred drug for treating latent TB infection (LTBI). Historically, LTBI treatment has been considered a prophylaxis; in other words, INH alone is administered for 9 months to reduce the risk of active TB. A previous study showed that INH treatment was significantly associated with decreases in total cholesterol and ApoB levels (31). This indicates that LTBI treatment may affect lipid metabolism. Apart from INH hepatotoxicity, INH might also be prescribed less frequently to people with known hyperlipidemia. Thus, a stronger association was observed with INH than with RIF.

This is the first study of an Asian population to use a population-based nationwide database to evaluate the effect of INH and RIF use on hyperlipidemia development. However, our study had several limitations. First, information regarding lifestyle, such as behavior, smoking status, alcohol consumption, environmental exposure, body weight or body mass index, and family history of hyperlipidemia, is unavailable in the LHID. Some of these factors may be contributory and confounding because they may increase the risk of hyperlipidemia. Second, we were unable to contact the patients directly to obtain additional information because the data were anonymous. The claims data in the LHID are used primarily for administrative billing purposes. Third, according to the WHO, the incidence of MDR-TB and XDR-TB is increasing globally (1) because of limitations in clinical laboratory diagnostic methodology and the absence of exact laboratory data to confirm TB pathogen types. Physicians might thus increase the treatment dose or duration of first-line anti-TB medications to treat the disease (5). Resistance conditions differ among patients, and physicians might have individual preferences regarding the prescription of medications. In this study, we evaluated >6 months of exposure to anti-TB medications to minimize bias associated with these factors. Fourth, the LHID lacks complete information on the Child-Pugh classification of liver disease, which might correlate highly with the risk of hyperlipidemia. Fifth, from our database, we could not differentiate the active or latent phase of TB infection. Monotherapy can be used only for latent infection; once active disease is present, a multi-drug regimen, generally of three or four drugs, must be used simultaneously (5). In addition, asymptomatic hyperlipidemia is usually underdiagnosed. Finally, no data on dietary effects were available in this study; dietary intake patterns may contribute to the control of lipids in the body.
As the treatment of hyperlipidemia should follow the guidelines of the National Cholesterol Education Program, efforts should also be directed at several risk factors that are strongly related to CVD (e.g., smoking, hypertension, diabetes mellitus, lifestyle, dietary intervention, physical activity, and medications). INH and RIF do not appear to be clinical medications that are strongly correlated with an increased risk of hyperlipidemia. In conclusion, our study results may provide epidemiologic evidence to support a possible association between long-term and high-dose use of INH, RIF, and combined INH and RIF and a decreased risk of hyperlipidemia. Other risk factors and ethnic factors should be considered when evaluating the decreased risk of hyperlipidemia under these anti-TB treatments and their effects on CVD. In addition, large population-based studies are required to confirm our observations and to clarify the possible mechanisms by which these anti-TB medications reduce blood lipid levels.

Conflicts of Interest

The Authors have declared that no competing interests exist.
2018-04-03T05:54:01.882Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "600030ffde1e33b0ed5d6d1828a771c754d71802", "oa_license": null, "oa_url": "http://iv.iiarjournals.org/content/32/1/47.full.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "835ac70a83ffb77b784098c9771523bbd16ccb27", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
198017086
pes2o/s2orc
v3-fos-license
Food Ordering Management using Recommendations

The proposed food ordering management system enables the customer to order food by selecting food items from an e-menu after registering on the web application or intranet of the institute. The system is useful for a canteen that faces a lot of rush during break time, especially where the work in the canteen is manual, such as taking food orders at the counter and subsequently calculating the cost. There is also dissatisfaction among customers due to delays in orders and orders not being attended to for long. These issues are addressed and solved in the proposed system. In this project, we have proposed a system that can simplify most of the manual work in the canteen, from taking orders to calculating bills. Customers can order their food from anywhere in the institution using the website, making it a hassle-free task. The placed order will be displayed on the display screen, and the staff will keep the order ready for the customer. Additionally, by making use of the Apriori algorithm, recommendations will be provided to the customer. The proposed system will also help the administrator of the system to have a clear idea of when and which food items are preferred more on a day-to-day basis.

Keywords: Apriori algorithm, Dataset, Food ordering system, Internet, Recommendations, Smart phone.

INTRODUCTION

The basic problem with the food services available at canteens of various institutes and organizations is that they are not realizing the efficiencies that would result from better application of technology in their daily operations. In canteens, ordering food and calculating bills remain a problem. Problems also arise when the stock to be purchased has to be estimated, based on how much food was ordered and which items will be ordered the most. There are many reasons for delays in services, such as taking orders and serving, which lead to dissatisfaction among customers. The project focuses on developing a user-friendly food ordering management system for the customers as well as the administrator. The proposed system will provide facilities to the administrator, such as updating the menu based on the recommendations given by the Apriori algorithm, and customer-oriented functionality, which includes placing orders by referring to the recommendations. Ordering food will be a lot easier.

Past work

In the food recommendation system using a clustered database [2], the data is clustered after getting the input. A cluster is a set of similar items. Using a clustered database, the speed of the system is increased and a lot of time is saved by reducing the number of comparisons. In this system, K-means is used for clustering the items; it is efficient when the amount of data is large. Ingredients were represented as vectors. The automated food ordering system of [3] keeps track of user orders smartly and allows the user to place an order or customize food with one click. It is an Android application; the front end was developed using Java on Android, and MySQL was used at the back end. The Zigbee-based e-menu ordering system [4] is useful for all kinds of restaurants and is affordable. The system has a smarter user interface for placing orders and billing. The system includes a graphical representation of the menu, making it user friendly and understandable even by illiterate users. It is a low-cost alternative to bigger touch panels.
The automated system proposed in [5] deals with the automation of restaurants using wireless touch-panel-based menu systems. Orders are taken from customers using the digitized menu. The full menu of eatable items is displayed on the touch panel for selection. Customer orders placed through the touch panels are received in the kitchen without any involvement of waiters. Zigbee was used for the wireless link between the touch panels at the restaurant tables and the kitchen. A PIC microcontroller was used for coding the menu on the touch panel, and the hardware implementation was done on a PCB layout. Their proposed system would also take care of all paperwork, i.e., data handling. The automated system proposed in [6] aimed at minimizing the number of employees at the counter and eliminating manual processing.

System flow analysis

The proposed system will be used by three types of users: the customers, the kitchen or canteen staff, and the administrator. Thus, the processes of the entire system can be divided into three modules (as shown in Fig. 3.1), namely the admin module, the kitchen or canteen staff module, and the customer module. The food ordering management system enables the customer to view the e-menu along with the recommendations, after which customers can place their order. Once the order is confirmed, a bill will be generated along with a token. The order data, along with the generated token, will be buffered and displayed on the screen near the canteen staff. The canteen staff can view the order, prepare it, and serve it. The order details will be sent to the admin module for further processing. As shown in Fig. 3.2, all the order details act as input to the Apriori algorithm, and the output of the algorithm is the set of recommendations (as shown in Fig. 3.3) that are used for several purposes, such as determining the most frequently ordered food items, updating inventory, and updating the menu.

IV. IMPLEMENTATION

We have developed a web-based application for our system. The implementation of the system is done using PHP, HTML, CSS, jQuery, Ajax, Bootstrap, and JavaScript, and the datasets are stored in a MySQL database. The hardware required for our application includes an Android smart phone and a desktop or laptop with a browser and an internet connection. In our application, Apriori plays an important role. We have considered six months of order details of a canteen as input to the Apriori algorithm, and we obtain recommendations as shown in Fig. 4.1; a sketch of this rule-mining step is given after the conclusion below. The recommendations are the most frequently ordered food items, which the admin can use to update the menu and increase profit.

CONCLUSION

Even though existing systems use certain technologies in their food ordering process, the customer queue is not managed properly. The system proposed in this project eliminates most of the manual work and has no issues regarding customer queues, as the food is ordered online through the web application. The proposed system eliminates calculation errors in bills and also provides many facilities to the admin module, including all the required analysis of orders, profit values, and stock. The proposed system uses the Apriori algorithm for providing recommendations to the customers. This also makes the system more efficient, as the admin has a clear idea about which food item was ordered the most. This will help provide a better menu for the customer, which will result in an increase in profit. A future enhancement of the proposed system could be adding an online payment system.
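Since the paper's PHP implementation is not shown, here is a minimal, self-contained Python sketch of the Apriori frequent-itemset step described in the Implementation section; the order transactions and support threshold are hypothetical stand-ins for the canteen's real order table:

from itertools import combinations

def apriori(transactions, min_support=0.3):
    """Return frequent itemsets (as frozensets) with their support."""
    n = len(transactions)
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, k_sets = {}, items
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        level = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-sets differing by one element.
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == len(a) + 1}
    return frequent

# Hypothetical canteen orders; real input would come from the orders table.
orders = [{"tea", "samosa"}, {"tea", "samosa", "juice"}, {"tea", "sandwich"},
          {"samosa", "juice"}, {"tea", "samosa"}]
for itemset, support in sorted(apriori(orders).items(), key=lambda kv: -kv[1]):
    print(set(itemset), round(support, 2))

Association rules (e.g., tea implies samosa) can then be read off by comparing the support of an itemset with that of its subsets, which is how the recommendations displayed to the customer would be derived.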
2019-07-22T22:31:19.386Z
2019-06-13T00:00:00.000
{ "year": 2019, "sha1": "0bc90b09e0751f87dd3d301929cbe4b02897384e", "oa_license": "CCBY", "oa_url": "https://ijaers.com/uploads/issue_files/17IJAERS-JUN-2019-1-FoodOrdering.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8502c8c7d27e157d5a25220d081c3fd2d9c0166a", "s2fieldsofstudy": [ "Business", "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Business" ] }
119308795
pes2o/s2orc
v3-fos-license
Transport Anomalies Associated with the Pseudogap From a Preformed Pair Perspective

Transport studies seem to be one of the strongest lines of support for a preformed pair approach to the pseudogap. In this paper we provide a fresh, physically transparent look at two important quantities: the diamagnetic susceptibility and the conductivity. We use a three-dimensional preformed pair framework which has had some success in the cold Fermi gases, and in the process we reconcile recently observed inconsistencies. Specifically, while the preformed pairs in our theory give a large contribution to the diamagnetic susceptibility, the imaginary part of the conductivity is suppressed to zero much closer to T_c, as is observed experimentally.

One of the biggest challenges in understanding the high temperature superconductors revolves around the origin of the ubiquitous pseudogap. Because this normal state gap has d-wave-like features compatible with the superconducting order parameter, this suggests that the pseudogap is related to some form of "precursor pairing" which would generalize the behavior in conventional BCS superconductors (where pairing and condensation take place at precisely the same temperature). On the other hand, there are many reports [1,2] suggesting that the pseudogap onset temperature is associated with a broken symmetry and, thus, another order parameter. It is widely believed that because the pseudogap has clear signatures in generalized transport, these measurements may help with the centrally important question of distinguishing the two scenarios. In this paper we provide a fresh, transparent look at transport in the presence of a pseudogap, where the latter is associated with preformed pairs deriving from a stronger than BCS attractive interaction. We are thereby able to reconcile inconsistencies with cuprate experiments. Importantly, there is no more theoretical flexibility here than in standard BCS theory, so that predictions are concrete and testable.

Our goal is to address the observed conflict between transport experiments [3,4] and a variety of precursor superconductivity scenarios before reaching the definitive conclusion that the pseudogap derives from a non-superconducting order parameter. We argue here that it is necessary to investigate one more precursor superconductivity approach. Most importantly, this particular scenario, based on a stronger than BCS attraction, has been realized experimentally in atomic Fermi gases [5], which also appear to exhibit a pseudogap [6-8]. We argue it should also be applicable to those superconductors (such as the cuprates) with anomalously high pairing onset temperature T* and small pair size. Similar ideas were introduced by Geshkenbein, Ioffe and Larkin [9].

In contrast to previous work, here we discuss transport both above and below the transition T_c, and we pay central attention to the important conductivity sum rule constraint. In view of the strong evidence for three-dimensional (3d) critical behavior [10-12], we do not restrict consideration to strictly 2d systems. The inconsistencies which we aim to reconcile pertain to the behavior of the complex conductivity sigma = sigma_1 + i*sigma_2 and the diamagnetic susceptibility chi. [The widely discussed Nernst effect was examined in earlier work [13].] Below the transition, sigma_2 directly relates to the superfluid density.

* These two authors contributed equally to this work.
If, above T_c, sigma_2 were interpreted to reflect a remnant of the superfluid density (as expected in a simple fluctuation theory [14] or a more mesoscopic phase fluctuation theory [15,16]), this would suggest a close relationship between sigma_2 and the normal state diamagnetic susceptibility chi, which is not observed [3]. Problematic for a slightly different precursor scenario (the normal state vortex picture [17]) is the unexpectedly small (by two orders of magnitude [4]) value of the ratio of the real part of the conductivity sigma_1 to chi in the normal state.

Our physical picture of the way in which transport is affected by preformed pairs is relatively simple to understand. In the presence of stronger than BCS attraction there are both fermionic and metastable Cooper pair degrees of freedom. The latter can be viewed as non-condensed pairs, or pair-correlated fermions. It can be seen from simple Boltzmann arguments [14] that bosons provide very large transport responses, provided they are in proximity to condensation. The Bose-Einstein distribution function, which is then peaked at small wavevectors, is in stark contrast to its fermionic, Pauli-principle-restricted counterpart; it leads to a much stronger bosonic response to external field perturbations. Importantly, if one associates the pseudogap with long-lived and metastable pairs in three-dimensional systems, these enhancements in transport can be shown to persist [13] to temperatures nearer to T* >> T_c, as one sees in a variety of different transport experiments. This should be distinguished from conventional fluctuation effects [16], which contribute in the critical regime very close to T_c.

In the usual BCS-like, purely fermionic Hamiltonian, only fermions possess a hopping kinetic energy and thereby directly contribute to transport. The contribution to transport from pair-correlated fermions enters indirectly, by liberating these fermions through a break-up of the pairs. Technically, we can associate this coupling to fermionic transport with the well-known Aslamazov-Larkin diagram, importantly modified to include the self-consistently determined fermionic pairing gap. A stronger than BCS attractive interaction can be accommodated by a simple extension of Gor'kov theory. This leads to non-condensed pair effects [8] above and below T_c. Important here is the general form of the superconducting electromagnetic response, which consists of three distinct contributions: (1) superfluid acceleration, (2) quasi-particle scattering, and (3) pair breaking and pair forming. These all appear in conventional BCS superconductors, but at T = 0 this last effect is only present when there is disorder. However, in the presence of stronger than BCS attraction and at T != 0, non-condensed pairs provide an alternate way to decrease the superfluid density, and the pair breaking and pair forming contributions will be concomitantly more prominent [18].

Without any detailed calculations, we are now in a position to predict results associated with the THz conductivity and the diamagnetism, which will be supported by later microscopic theory. We now show how sigma_1(omega ~ 0) is depressed by the presence of a pseudogap Delta_pg, how sigma_2(omega) over a range of omega is also depressed, while chi is greatly enhanced. Fig. 1(a) shows how the normal state sigma_1(omega) (and, in the inset, sigma_2(omega)) behaves as a function of frequency. The red dashed curves are the results of conventional Drude theory. What happens when an above-T_c pseudogap is present is shown by the black curves.
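For orientation, the Drude baseline of Fig. 1(a) follows from sigma(omega) = sigma_0/(1 - i*omega*tau). A minimal numerical sketch (the parameter values are illustrative, not fits to data):

import numpy as np

sigma0, tau = 1.0, 1.0
omega = np.linspace(1e-3, 10.0, 1000)

sigma = sigma0 / (1.0 - 1j * omega * tau)
sigma1, sigma2 = sigma.real, sigma.imag
# sigma1 peaks at omega = 0; sigma2 peaks at omega = 1/tau and vanishes at omega -> 0.
print(sigma1[0], sigma2[0])

The pseudogap modifies this baseline by depleting the low-omega weight of sigma_1 and restoring it near 2*Delta, as described in the text.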
The curves are normalized by sigma_0, the normal state value of the conductivity at omega = 0 in Drude theory. Both theories (with or without the pseudogap) are consistent with the f-sum rule, and thus have the same fermionic carrier number (n/m)_xx. In the dc regime, with a pseudogap present, there are fewer fermions available to contribute to transport; their number is reduced by the pseudogap. However, once the frequency is sufficient to break the pairs into individual fermions, the conductivity rises above that of the Drude model. One can see that the effect of the pseudogap is to transfer spectral weight from low frequencies to higher energies (omega ~ 2*Delta, where Delta is the pairing gap, and Delta = Delta_pg above T_c). In this way one finds an extra "mid-infrared" contribution to the conductivity which is, as observed [19], strongly tied to the presence of a pseudogap.

The behavior of sigma_2(omega), shown in the inset, is rather similarly constrained. On general principles, sigma_2 must vanish at strictly zero frequency, as long as the system is normal. Thus both the red and black curves show that sigma_2(omega = 0) = 0. Here one can see that the low frequency behavior is also suppressed by the presence of a pseudogap because of the gap-induced decrease in the number of carriers. Similarly, the second peak (around 2*Delta) in sigma_1(omega) leads, via a Kramers-Kronig transform, to a slight depression in sigma_2(omega) in this frequency range. Hence, as shown in the inset, sigma_2(omega) is significantly reduced relative to the Drude result and tends overall to increase with omega. There is virtually no sign of an omega^(-1) upturn in sigma_2 which would reflect a remnant of the superfluid density above T_c. This presumably is a fluctuation effect which pertains to the narrow critical regime.

In Fig. 1(b) we present similar comparisons of the behavior of the orbital susceptibility above T_c in a non-gapped normal state (red dashed curve) and in the presence of a pseudogap (black curve). The curves are normalized by chi_0, the absolute value of the diamagnetic susceptibility for Delta_pg = 0 at T = T_c. One can see that in the absence of a pseudogap only a very weak Landau diamagnetism appears. However, the figure shows that in the presence of a pseudogap the diamagnetic contribution is significantly enhanced. This diamagnetism originates from the large electromagnetic response associated with bosonic degrees of freedom; the breaking of pairs allows this diamagnetism to be reflected in the fermionic response. It should be noted that Van Hove effects enhance this diamagnetism, as does d-wave pairing, which leads to an excess of low energy fermionic excitations. Moreover, this diamagnetism is not restricted to two-dimensional models. All of this leads to a simple anti-correlation between the dc conductivity and the diamagnetic susceptibility in the normal state, which is shown in Fig. 1(c). Here we plot on the left and right hand axes the zero frequency conductivity as a function of varying pseudogap energy scale Delta_pg and the orbital (diamagnetic) susceptibility with varying Delta_pg, respectively. The former is depressed as the pairing gap increases, whereas the latter is enhanced.

These same conclusions (which are qualitatively compatible with experiment [3,4,17]) derive from microscopic theory. Here the linear response of the electromagnetic current J to a small vector potential A is characterized by the tensors P and n/m, constrained by the transverse f-sum rule

-(2/pi) Integral from 0 to infinity of d omega Im P_xx(q, omega)/omega = (n/m)_xx,  (1)

where the tensor n_n/m denotes the normal fluid density and P_xx is the diagonal component of the paramagnetic current correlation tensor P along the x-direction.
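Expressed through the conductivity defined in (2) below, the sum rule (1) fixes the integrated spectral weight, Integral of sigma_1(omega) d omega = (pi/2)*(n/m)_xx, independent of scattering details. A quick numerical check with a Drude form (illustrative parameters only):

import numpy as np

n_over_m = 1.0                       # fixes the f-sum-rule weight pi*(n/m)/2
omega = np.linspace(0.0, 2000.0, 400001)
dw = omega[1] - omega[0]

for tau in (0.5, 1.0, 4.0):
    sigma0 = n_over_m * tau          # Drude dc conductivity, sigma_0 = (n/m)*tau
    sigma1 = sigma0 / (1.0 + (omega * tau) ** 2)
    weight = np.sum(sigma1) * dw     # integral of sigma_1 over frequency
    print(tau, weight, np.pi * n_over_m / 2.0)   # weights agree, independent of tau

Changing tau redistributes the weight in frequency but leaves the integral pinned by the fermionic carrier number, which is the sense in which interactions "only serve to redistribute the spectral weight".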
In the same notation, (n/m)_xx is the diagonal component of n/m along the x-direction. We stress that only the fermionic density (and mass) appears on the right-hand side of Eq. (1). The sum rule establishes a strong connection between transport and the fermionic kinetic energy, so that many-body interactions only serve to redistribute the spectral weight. Thus, for example, even though metastable pairs are present, their contribution to transport is indirect and appears when such pairs can be decomposed. This version of the f-sum rule applies to any many-body Hamiltonian which contains an arbitrary two-body interaction and a kinetic energy associated with fermions. Throughout, we work in the transverse gauge. As a consequence, all effects of the order parameter collective modes (which are longitudinal) do not enter. The complex conductivity is microscopically defined in terms of P and n/m:

sigma(omega) = (i/omega) [P_xx(q -> 0, omega) + (n/m)_xx].  (2)

Above T_c, the linear diamagnetic response is similarly related to P_xx and (n/m)_xx. It is given by

chi_dia = -lim as q_y -> 0 of [Re P_xx(q, omega = 0) + (n/m)_xx]/q_y^2, with q_x = q_z = 0.  (3)

In the superfluid phase the tensors P and n/m no longer cancel when q -> 0, reflecting the Meissner effect. We stress that Equations (1)-(3) are completely general. We now turn to more microscopic calculations. Previous papers [8,20] have described how the parameters Delta(T), Delta_pg(T), and mu are self-consistently obtained and how one accommodates a variety of dopings by effectively fitting the attractive interaction to match T* and T_c. Our figures correspond to moderate underdoping. A nearest-neighbor tight-binding dispersion xi_p = -2t[cos(p_x a) + cos(p_y a)] - mu with t = 300 meV is used throughout, and, for simplicity, we took a simple tau proportional to T^(-2) power law (associated with the Fermi arcs [21]) for the transport lifetime. Very few of our results depended on this assumption, which was made in earlier work [18]. A general finding is that, while Delta_pg decreases monotonically from T_c to T*, with decreasing T below T_c, Delta_pg decreases while Delta_sc rises, reflecting the fact that finite momentum pairs are converted to the q = 0 condensate while maintaining an overall nearly constant Delta(T).

We have previously derived microscopic representations of P and n/m [18,20,22]. Importantly, our gauge-invariant electromagnetic response function analytically satisfies the transverse f-sum rule. One can derive these contributions in a variety of ways, but the most straightforward involves inclusion of generalized Maki-Thompson and Aslamazov-Larkin diagrams. The latter are considered to be effectively equivalent to Boltzmann (or time-dependent Ginsburg-Landau-like) approaches to bosonic transport. Here the stronger-than-BCS attraction enters in an important way in order to ensure that the pairing gap energy scale Delta is explicitly incorporated. The paramagnetic current-current correlation tensor P(q, omega) is given by Eq. (4), in which omega carries a small imaginary part and f is the Fermi function. Here E_p = sqrt(xi_p^2 + Delta^2(T)), where xi_p = eps_p - mu, and Delta_sc (Delta_pg) is the gap component of the condensed (non-condensed) pairs, with Delta^2 = Delta_sc^2 + Delta_pg^2. All transport expressions in this paper reduce to those of strict BCS theory when the attraction is weak and Delta_pg = 0. We define E_(+-) = E_(p +- q/2) and xi_(+-) = xi_(p +- q/2). Importantly, the terms on the first line in Eq. (4) represent the pair breaking and pair forming contributions; the second line is associated with fermionic scattering. Also important to the electromagnetic response is the number density n/m, which can be rewritten in the same notation.
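To make the dispersion and gap composition concrete, the following sketch evaluates E_p on the stated tight-binding band; the gap values are hypothetical, chosen only for illustration:

import numpy as np

# Quasiparticle energies E_p = sqrt(xi_p^2 + Delta^2) on the tight-binding band,
# with Delta^2 = Delta_sc^2 + Delta_pg^2. Parameter values are illustrative only.
t, a, mu = 0.3, 1.0, -0.3            # eV; t = 300 meV as in the text
delta_sc, delta_pg = 0.02, 0.03      # hypothetical condensed / non-condensed parts
delta = np.hypot(delta_sc, delta_pg) # total gap Delta

k = np.linspace(-np.pi / a, np.pi / a, 201)
kx, ky = np.meshgrid(k, k)
xi = -2 * t * (np.cos(kx * a) + np.cos(ky * a)) - mu
E = np.hypot(xi, delta)

print(E.min())   # minimum excitation energy ~ Delta on the xi = 0 surface

The minimum of E_p sits at the Fermi surface and equals the total gap Delta, regardless of how Delta is partitioned between condensed and non-condensed pieces.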
Equations (2)-(3) yield analytic expressions for sigma(omega) and chi_dia. Fig. 2 displays our more quantitative results for sigma_1 and sigma_2 as functions of both omega and T. The layout is designed to duplicate figures from Ref. 3, and the general trends are similar. Thus one sees from Fig. 2(a) and its inset that, well above T_c, the real part of the conductivity is almost frequency independent. The imaginary part is small in this regime. At the lowest temperatures sigma_1 contains much reduced spectral weight, while the frequency dependence of sigma_2 is proportional to omega^(-1); both of these reflect the characteristic behavior of a superfluid. Here, as in the experimental studies [3], we focus primarily on the temperature dependent plots in Figs. 2(b), (d) and the inset to (c). One sees that sigma_1 shows a slow decrease as the temperature is raised above T_c. Somewhat below T_c, sigma_1 exhibits a peak which occurs at progressively lower temperatures as the probe frequency is decreased. Roughly at T_c we find that sigma_2 shows a sharp upturn at low omega. The region of finite sigma_2 above the transition can be seen from the inset in Fig. 2(d), and it is clearly very small in the pseudogap state. The inset of Fig. 2(d) shows an expanded view of sigma_2(T) near T_c. In agreement with experiment, the nesting of the sigma_2 versus T curves switches order above T_c. This important point reflects the fact that sigma_2(omega) is generally increasing with increasing omega above T_c, as seen in the inset in Fig. 1(a) and in experiment. This is in contrast to the behavior expected of a fluctuation contribution, where an omega^(-1) dependence would occur. However, in slightly different plots, the counterpart experimental studies reveal a small 10-15 K range where this fluctuation contribution is visible. This effect would not be present in a mean field approach. As speculated in Ref. 3, one should distinguish these near-T_c critical fluctuations from preformed pairs, which persist to much higher temperatures.

These effects are made clearer by plotting the "phase stiffness", which is proportional to the quantity omega*sigma_2 and is shown in Fig. 2(c). Deep in the superconducting state there is no omega dependence to omega*sigma_2(omega), while at higher T this dependence becomes apparent. In the inset to (c), the temperature dependence of omega*sigma_2(T) is displayed. We see that above T_c, omega*sigma_2 is never strictly constant, as would be expected from fluctuation contributions. In experiment, the onset of finite frequency spreading of the curves at T <= T_c has been attributed to Kosterlitz-Thouless physics [23].

Finally, we turn to the diamagnetic response. Figure 3 shows chi_dia as a function of temperature for four different dopings. Independently of the particular parameters that are used, it is seen that the magnitude of chi_dia is enhanced even at temperatures well above T_c. We should not associate this diamagnetism with short range Meissner currents, as might be appropriate to alternative phase fluctuation [15] or normal state vortex scenarios [4]. Rather, here the diamagnetism arises from the large contribution of non-condensed pairs which are in proximity to condensation [13,14]. This has a similarity to low-dimensional fluctuation effects, but arises in the 3d systems studied here from stronger than BCS attraction, which stabilizes these pair degrees of freedom. Since the kinetic energy ultimately resides in the fermionic system, it is not surprising that we find it is the pair breaking terms which provide the conduit for communicating enhanced bosonic transport contributions to the fermionic transport channel.
Because we are working at effectively zero magnetic field, we have not addressed diamagnetism associated with non-linear response, although this appears to be very anomalous experimentally [17]. At the leading order level we have shown here that there is a profound connection between the complex conductivity and this orbital magnetism. Importantly, while the preformed pairs in our theory give a large contribution to the diamagnetic susceptibility, as is observed experimentally, the imaginary part of the conductivity is suppressed to zero much closer to $T_c$, as observed. We end with the following observations. Our theoretical approach has virtually no flexibility; it was set up [20,22] before there was much experimental interest in these transport measurements. In accord with experiment, we find: (i) that pseudogap effects lead to an enhanced diamagnetism above $T_c$, (ii) that the imaginary conductivity $\sigma_2(\omega)$ is reduced to zero in a very narrow range of $T$ above $T_c$, and (iii) that the real conductivity $\sigma_1(\omega \approx 0)$ is suppressed as the pseudogap becomes larger. This last point is in turn associated with a transfer of low-$\omega$ conductivity spectral weight into the mid-infrared region. This work is supported by NSF-MRSEC Grant 0820054. We thank P. Scherpelz and A. A. Varlamov for useful conversations.
2011-12-21T17:11:44.000Z
2011-12-21T00:00:00.000
{ "year": 2011, "sha1": "8f6aa467704bb4318bc44d906d4320ecbfc4389f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8f6aa467704bb4318bc44d906d4320ecbfc4389f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258438911
pes2o/s2orc
v3-fos-license
Metastasis to the penis from a bladder carcinoma invading only the lamina propria: case report and description of the morphological aspects

Introduction: Bladder cancer (BC) is the seventh most commonly diagnosed cancer in the male population, and the tenth when both genders are considered. Although 75% of patients with BC present with disease confined to the mucosa or submucosa, secondary metastasis to the penis rarely occurs. Case presentation: A 73-year-old male was referred for gross hematuria in May 2018. A cystoscopy was performed, detecting a bladder tumor. Resection of the tumor revealed an invasive high-grade (HG) papillary transitional carcinoma of the bladder with nest variants and lamina propria invasion. The histological examination of the second-look resection disclosed the same tumor characteristics. The patient was scheduled for bacillus Calmette–Guérin (BCG) instillations. Meanwhile, he was diagnosed with and treated for a primitive lung acinar adenocarcinoma. Seven months after the first diagnosis, the patient progressed to cT4 at the level of the bladder. He underwent four cycles of Methotrexate, Vinblastine, Doxorubicin (Adriamycin) and Cisplatin (MVAC) chemotherapy followed by a cystoprostatectomy. The histological result was fibrosis, with ypT0pN0 classification. Due to pain and a solid mass in the penis, a total penectomy was performed, and the histological result showed a transitional carcinoma suggesting a metastasis of the urothelial carcinoma of the bladder. Three months following the penectomy, a positron emission tomography/computed tomography (PET/CT) scan showed multiple metastases and positive lymph nodes. Hence, Pembrolizumab treatment was started, providing very good clinical and radiological evolution. At the time of publishing, the patient is alive, and the radiological exams show stability of the disease. Conclusions: The detailed description of all histological variants of carcinoma of the bladder in the specimen is of great importance and has a significant impact on the management of the disease.

Introduction

Bladder cancer (BC) is the seventh most commonly diagnosed cancer in men, and the tenth when both genders are considered. About 75% of patients with BC can be grouped into those with stage Ta disease, carcinoma in situ (CIS) or disease confined to the mucosa, and those at stage T1, i.e., with submucosal invasion. Due to their high incidence and lower risk of cancer-specific death compared with T2-T4 tumor stages, patients with Ta, T1 and CIS typically survive for a longer time [1][2][3]. Penile cancer is uncommon in industrialized countries, with a low incidence of about 1/100 000 men in Europe and the USA [4,5]. The prevalence of the disease is associated with human papillomavirus (HPV)-related carcinogenesis in one third of patients [6].

Aim

The aim of our case report was to present a rare manifestation and unexpected evolution of a non-muscle-invasive BC, as well as the importance of multimodal treatment solutions in a challenging case.

Case presentation

We present the case of a 73-year-old male who was initially referred for gross hematuria in May 2018. The patient was a smoker (about 20 cigarettes/day since the age of 18), had dyslipidemia but no alcohol consumption, and had heavy cardiovascular antecedents.
His brother had been diagnosed with a high-grade (HG) invasive papillary transitional carcinoma of the bladder [pT1 HG according to the 8th edition of the Union for International Cancer Control (UICC) Classification of Malignant Tumors] a few years earlier. After meticulous anamnesis, a cystoscopy was performed under local anesthesia, which detected a bladder lesion situated at the dome of the bladder that appeared to be a muscle-invasive bladder tumor. A thoraco-abdominal computed tomography (CT) scan confirmed the presence of the tumor. On the CT scan images, the bladder tumor seemed to extend beyond the bladder contour (Figure 1).

Figure 1 - Anterior bladder mass with heterogeneous enhancement measuring 35×36 mm, extending beyond the bladder contour and with slight infiltration of peripheral fat: yellow arrow, computed tomography (CT) scan image.

A transurethral resection of this lesion (Figure 2) was performed under general anesthesia, and the patient was discharged on the second postoperative day. The histological exam revealed an invasive HG papillary transitional carcinoma of the bladder, with lamina propria invasion and a nests variant (pT1 HG according to the UICC 8th edition). Multiple fragments of the muscularis propria were identified, but none was invaded by the tumor (Figure 3, A and B). Given the histological results of the bladder tumor resection, and in accordance with the European Association of Urology (EAU) Guidelines [13], a second-look transurethral resection of the bladder tumor and a second abdominal CT scan were carried out. The second-look transurethral resection was performed at the site of the previous resection, and the histological examination of the tissue revealed the same invasive HG papillary transitional carcinoma of the bladder, with lamina propria invasion and nests of highly pleomorphic cells, but no muscle invasion (pT1 HG according to the UICC 8th edition) (Figure 4). After multidisciplinary team discussion (MTD), it was decided to follow a 3-year scheme of bacillus Calmette-Guérin (BCG) instillation. After four instillations, the patient presented intense urinary frequency, nocturia and urgency, probably caused by the instillation itself but also by the reduced bladder capacity, which was measured after the second-look transurethral resection at about 50 mL. During the follow-up, a thoraco-abdominal CT scan was performed, and a nodule in the upper right lobe of the lung was discovered (Figure 5).

Figure 5 - Spiculated lesion of the upper right lobe of the lung: yellow arrow, CT scan image.

The positron emission tomography (PET)/CT scan showed high metabolic activity at the level of the lung nodule (Figure 6). The pulmonary biopsy revealed an adenocarcinoma with morphological and immunohistochemistry (IHC) aspects of a primitive lung acinar adenocarcinoma. Programmed death-ligand 1 (PD-L1) tumor expression was inferior to 1%. In December 2018, the patient was admitted to the Urology Care Unit for lower urinary tract symptoms (LUTS) and alteration of his general condition. The biological analyses and imaging studies revealed acute renal failure with associated left ureterohydronephrosis due to bladder tumor compression, as well as a urinary infection. An abdominal CT scan was performed, and the tumor, initially situated at the anterior bladder wall, now measured about 78×77×78 mm, with a staging of cT4. An extension of the tumor to the proximal sigmoid could not be excluded (Figure 8, A and B).
Once again, the case was discussed in the MTD, and the patient benefited from four cycles of Methotrexate, Vinblastine, Doxorubicin (Adriamycin), Cisplatin (MVAC) chemotherapy between January 2019 and February 2019. An abdominal CT scan was performed to evaluate the efficiency of the chemotherapy, and it showed an impressive regression of the bladder tumor and complete remission of the hydronephrosis (Figure 9, A and B). In March 2019, almost 10 months after the diagnosis of invasive HG papillary transitional carcinoma of the bladder with lamina propria invasion (pT1 HG), the patient underwent a radical cystoprostatectomy with a Bricker urinary diversion. The postoperative follow-up was complicated by a rectal fistula and postoperative ileus; nevertheless, surgical treatment was not needed for these complications. The patient fully recovered and was discharged after 34 days. The histological examination of the specimen showed inflammatory granulation tissue but no residual viable neoplastic tissue and no lymph node (LN) involvement (0/17). The staging was ypT0pN0, according to the UICC 8th edition. Eight months following the cystoprostatectomy, the patient presented with swelling and pain in the right corpus cavernosum. Magnetic resonance imaging (MRI) of the pelvis was performed, and a lesion of about 30×20×20 mm was spotted at the base of the penis (Figure 10, A and B). Another suspicious lesion was described under the right iliopubic branch of the coxal bone, invading the anterior and inferior part of the acetabulum (Figure 11, A and B). The decision to perform a total penectomy and perineal urethrostomy was taken in the MTD. The surgery was performed under general anesthesia, and the postoperative course was uneventful. Seven days after surgery, the patient had fully recovered and was discharged from the hospital. The histology of the tumor was that of a transitional carcinoma, suggesting a metastasis of the urothelial carcinoma of the bladder. The tumor invaded both cavernous bodies and the fatty tissue, but not the corpus spongiosum or the urethra. The surgical section was in contact with the tumor, and vascular and nerve emboli were observed. According to the UICC 8th edition, the staging was pM1bR2. IHC investigations confirmed a urothelial phenotype, with no arguments in favor of a lung metastasis: diffuse and intense positivity was observed with cytokeratin 7 (CK7) and high molecular weight cytokeratin (HMWCK, clone CK34βE12), whereas thyroid transcription factor-1 (TTF-1) and Napsin A were not expressed by the tumor cells (Figure 12, A-C).

Figure 12 - Pathological examination of the penectomy specimen revealed a high-grade carcinoma whose morphological aspects were very similar to the urothelial carcinoma of the bladder, with fused and irregular nests composed of highly pleomorphic tumor cells (A), which expressed, on immunohistochemistry, HMWCKs (B) as well as CK7 (C). HE staining: (A) ×200. Anti-HMWCK antibody (clone CK34βE12) immunomarking: (B) ×200. Anti-CK7 antibody immunomarking: (C) ×200. CK7: Cytokeratin 7; HMWCK: High molecular weight cytokeratin.

A PET/CT scan was performed three months after the penectomy, showing multiple retroperitoneal, right external iliac and bilateral inguinal positive LNs. Metastases of the obturator foramen and of the right peritoneal flank were observed as well (Figure 13, A and B).

Figure 13 - PET/CT scan images: (A) Common iliac lymph nodes (LNs); (B) Multiple right external iliac and bilateral inguinal positive LNs.
Given the mentioned results of the PET/CT scan, we started a treatment based on Pembrolizumab. We also performed genetic testing for fibroblast growth factor receptor (FGFR) mutations. At the time of publishing, the patient is following the immunotherapy treatment and has no physical complaints. In our case, probably due to the close follow-up and new therapies (immunotherapy), there was no evidence of progression of the disease at 34 months after starting the immunotherapy and at 52 months from the initial diagnosis.

Discussions

In 2016, Oake & Drover described the first case of metastasis to the penis sparing the urethra from a transitional cell carcinoma of the bladder invading the prostate [14]. Secondary carcinoma of the penis is a rare condition and a late manifestation of carcinoma elsewhere in the body. The overall survival (OS) of these patients is between 3.9 and 9.8 months [15]. In their study comprising eight patients, Zhu et al. noted that penile metastasis was discovered, on average, 26.4 months after the diagnosis of the primary tumor. In the same study, the mean time from the diagnosis of penile metastasis until death was 11.4 months [16]. The first case of penile metastasis was described by Eberth in 1870, who published the case of a patient presenting a carcinoma of the rectum with metastasis in the erectile tissue of the penis [17]. About 500 cases of penile metastasis have been described in the literature. In about 70% of cases, the primary tumor foci for penile metastases were pelvic organs. Invasive carcinoma of the bladder (T2-T4) was the most frequently involved, accounting for 28.6% of cases, followed by prostate (27.9%) and rectosigmoid (12.2%) tumors [15]. Lung carcinoma, digestive tract tumors, ureteral tumors and lymphoma rarely metastasize to the penis. Metastasis to the penis occurs mostly in the cavernous body, but the spongiosum body, the urethra, the glans, or the skin can also be the site of metastasis [17,18]. The initial clinical presentation of penile metastasis is a mass, induration, or nodules (51%). Priapism is observed in 27% of patients, as are hemorrhage, hematuria, incontinence and irritative or obstructive symptoms (also about 27% of cases). Less frequent are pain (17%), urinary retention (13%) and skin lesions (11%) [19]. At the time of diagnosis, most patients have disseminated metastases and their prognosis is dismal. Primary (isolated) metastasis to the penis is very rare [20]. Histopathological analysis must serve as the foundation for the diagnosis. This is owing to the possible presence of a pagetoid pattern of infiltration, especially in transitional cell carcinoma of the urinary system, which should be distinguished from primary Paget's disease of the penis [21]. However, a distinction from a primary tumor of the penis should also be made. Primary tumors of the penis typically manifest as superficial skin lesions, whereas secondary tumors are located in deeper sites; in other words, primary and secondary tumors occur at different levels. An OS of 47 weeks has been shown in patients with penile metastasis from prostate cancer, which means that the treatment of the primary tumor is most important in terms of prognosis [15,22]. Although the presentation of a primary tumor is essentially superficial, Di Gregorio et al.
described a case of a penile tumor invading the cavernous and spongy bodies of the penis and responsible for urinary retention, without any superficial presence of the tumor [23]. In the differential diagnosis of metastatic penile cancer, premalignant and malignant primary diseases such as Bowen's disease, erythroplasia of Queyrat, verrucous carcinoma, squamous cell carcinoma, melanoma and sarcoma, as well as some infectious diseases and benign disorders, should be considered [24]. Paquin & Roland described the mechanism of metastatic spread in 1956 [25]. The metastatic cells from the primary cancer migrate to the target organs by venous, arterial, or lymphatic spread, as well as directly by contiguous invasion. Being a distal appendage of the vascular flow, the penis can receive metastatic cells from other organs via numerous routes: direct extension, retrograde venous spread, retrograde lymphatic spread, secondary embolization spread, tertiary embolization spread, arterial spread and paradoxical spread [25]. Regarding the mechanism of development of penile metastasis from a BC, the retrograde venous pathway is the most often encountered [9,25,26].

Conclusions

The description of all variants of papillary transitional carcinoma of the bladder in the specimen is of great importance and has a serious impact on the management of the disease. Metastasis to the penis is a rare condition, but the differential diagnosis is important in order to offer the right treatment to the patient, which can be local excision of the tumor, partial or complete penectomy, external beam radiation therapy, brachytherapy, chemotherapy, immunotherapy, or a combination of treatments. Although the OS of these patients is between 3.9 and 9.8 months, in our case, probably due to the close follow-up and new therapies (immunotherapy), at the moment of publication the patient is still alive and there is no evidence of progression of the disease at almost four years and six months after the initial diagnosis.
2023-05-03T06:17:30.831Z
2023-03-31T00:00:00.000
{ "year": 2023, "sha1": "a51af530c8adaf50e948819d499e50fd6c2df81e", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.47162/rjme.64.1.11", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "582b05a59de25b2f87e7c3436677e1452ed787c2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9815235
pes2o/s2orc
v3-fos-license
Exact boundary controllability for 1-D quasilinear hyperbolic systems with a vanishing characteristic speed

The general theory on exact boundary controllability for general first order quasilinear hyperbolic systems requires that the characteristic speeds of the system do not vanish. This paper deals with exact boundary controllability when this is not the case. Some important models are also presented as applications of the main result. The strategy uses the return method, which in certain situations allows one to recover nonzero characteristic speeds.

Exact boundary controllability holds for some special hyperbolic models with a vanishing characteristic speed, such as the Saint-Venant equations (or shallow water equations); see Gugat [12]. For the system of isentropic gas dynamics (which contains the Saint-Venant model), a more general boundary controllability result for (non-constant) BV solutions was obtained by the second author in [11]. In this paper, we discuss exact boundary controllability for a general hyperbolic system which admits a vanishing characteristic speed. Consider the following first order quasilinear hyperbolic system

$$\frac{\partial u}{\partial t} + A(u)\frac{\partial u}{\partial x} = 0, \qquad (1.1)$$

where $u = (u_1, \cdots, u_n)^{tr}(t, x)$ is the state of the system in some nonempty open set $\Omega \subset \mathbb{R}^n$ and the $n \times n$ matrix $A$ belongs to $C^2(\Omega; \mathbb{R}^{n \times n})$.

Remark 1.1. Theorem 1.1 can be regarded as a local boundary controllability result because one can drive any initial data $\varphi$ to any desired data $\psi$ near $u = u^*$ without using any internal controls. However, since the characteristic speed $\lambda_m$ may change its sign during the control period, it is difficult to describe the exact distribution of boundary controls. To overcome this difficulty, we consider the system without boundary conditions (which is consequently underdetermined), and aim at finding the solution $u$ itself. In the conservative case (where $A(u)$ is a Jacobian matrix $Df(u)$), the solution that we determine can enter the general theory of initial-boundary problems for systems of conservation laws; see in particular Amadori [1] and Amadori and Colombo [2].

The main idea in proving Theorem 1.1 is to use a constructive approach and the return method [7]. In our framework the method consists in constructing a trajectory $\bar{u} \in C^2([0, T] \times [0, L]; \mathbb{R}^n)$ of the system (1.1), close to $u^*$, such that (1.14) holds and the linearized equation around $\bar{u}$ is controllable. Note indeed that the linearized equation around $u^*$ is not controllable. Based on this, we can construct a solution $u \in C^1([0, T] \times [0, L]; \mathbb{R}^n)$ to the system (1.1) which connects the initial and final data (which have to be sufficiently close to $u^*$). As a matter of fact, we will not quite use the linearized equation. Instead, we use an argument of perturbation of the trajectory $\bar{u}$ and then reduce the original control problem to a boundary control problem without vanishing characteristic speeds, which has been solved by Li and Rao [20]. In the framework of systems of conservation laws, the return method has also been used in [6,8,11,17]; see also [3]. For other applications of the return method, see [9] and the references therein.

Without loss of generality, we may assume the equilibrium $u^*$ to be 0, replacing $u$ by $u - u^*$ as the unknown in the system (1.1) if necessary. For the convenience of statement, we denote by $C$ various positive constants throughout the paper, which may change from one line to another. The organization of this paper is as follows: in Section 2 we construct the special trajectory $\bar{u}$.
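To make the vanishing-speed hypothesis concrete, the following small Python sketch computes the characteristic speeds of the Saint-Venant system mentioned above, written in the form (1.1) with $u = (h, v)$; the equilibrium below is chosen so that one eigenvalue of $A$ vanishes, and the numerical values are purely illustrative:

import numpy as np

# Saint-Venant (shallow water) system in quasilinear form u_t + A(u) u_x = 0:
#   h_t + v h_x + h v_x = 0
#   v_t + g h_x + v v_x = 0
g = 9.81

def A(h, v):
    return np.array([[v, h],
                     [g, v]])

h0 = 1.0
v0 = -np.sqrt(g * h0)   # at this state, lambda = v + sqrt(g h) = 0
speeds = np.linalg.eigvals(A(h0, v0))
print("characteristic speeds:", np.sort(speeds))  # one of them vanishes

The eigenvalues of $A(h, v)$ are $v \pm \sqrt{gh}$, so one characteristic speed vanishes precisely when $|v| = \sqrt{gh}$.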
2 Construction of the trajectory $\bar{u}$

Definition 2.1. Let $j \in \{1, \cdots, n\}$ and $u_0 \in \Omega$. Let $s \in [-\varepsilon_0, \varepsilon_0] \mapsto U_j(s) \in \Omega$ be the orbit of the eigenvector field $r_j$ starting at $u_0$ (a rarefaction curve):

$$\frac{dU_j}{ds}(s) = r_j(U_j(s)), \qquad U_j(0) = u_0,$$

where $\varepsilon_0 > 0$ is a small constant. Let $\Phi_j(s, \cdot)$ be the corresponding flow map when $s$ varies, i.e., $\Phi_j(s, u_0) = U_j(s)$.

Our first proposition (Proposition 2.1) concerns simple waves, which one can use to modify the state.

Proof: Without loss of generality, we may assume that $j \in \{1, \cdots, m-1\}$ (the case where $j \in \{m+1, \cdots, n\}$ can be treated similarly by symmetry in $x$, that is, replacing $x$ by $L - x$ if necessary). In view of (1.2) and (2.3), there exist $\varepsilon_1 > 0$ and $\eta > 0$ small enough such that the required smallness conditions hold. Let $\varepsilon \in (0, \varepsilon_1]$ and $u_-, u_+ \in \Omega$ be such that (2.4) holds. By Definition 2.1, it is easy to see that (2.10) holds. Let $\beta \in C_0^\infty((0,1); \mathbb{R})$ be a suitable cutoff function, from which the simple-wave profile is constructed; the associated ordinary differential equation then defines the desired trajectory. In the following, we will denote by $C^k(\mathbb{R})$ the space of functions of class $C^k$ whose derivatives up to order $k$ are bounded on $\mathbb{R}$ (and the norm $\|\cdot\|_{C^k(\mathbb{R})}$ is in fact the norm $\|\cdot\|_{W^{k,\infty}(\mathbb{R})}$). Then by (2.10), (2.12), (2.14) and (2.16), we obtain the corresponding bounds.

Now we focus on the Cauchy problem for (2.5) on $\mathbb{R}$ with the initial condition (2.19). It is classical that there exists a unique $C^2$ solution to the Cauchy problem (2.5), (2.19) in small time; see for instance [16, p. 55]. Let us prove that, for the fixed time $T > 0$, if $\varepsilon$ is sufficiently small, the Cauchy problem (2.5), (2.19) admits a unique solution $u \in C^2([0, T] \times \mathbb{R}; \mathbb{R}^n)$. To show that, it suffices to obtain a uniform a priori estimate of the solution in $C^1$ (see [16, Theorem 4.2.5, p. 55]). In order to obtain such an a priori estimate, we assume that the Cauchy problem (2.5), (2.19) already admits a solution $u \in C^2([0, T_0] \times \mathbb{R}; \mathbb{R}^n)$ for some $T_0 \in (0, T]$. For any $i \in \{1, \cdots, n\}$ and any point $(t, x) \in [0, T_0] \times \mathbb{R}$, we can define the $i$-th characteristic passing through $(t, x)$. The quantities $v_i, w_i$ ($i = 1, \cdots, n$) satisfy semilinear equations along the characteristics (see [16, p. 47ff] and [18]), where $\frac{d}{d_i t}$ denotes the derivative along the $i$-th characteristic and the coefficients $\beta_{ikl}, \gamma_{ikl} \in C^1(\Omega; \mathbb{R}^n)$ satisfy suitable structural conditions.

The next proposition proves that one can approximate the trajectory given by (1.7) by a trajectory composed of simple waves.

Proof of Theorem 1.1

In order to conclude the proof, we will use a perturbation argument together with a result by Li and Rao [20]. First, we have the following perturbation result.

Proposition 3.1. Let $K \subset \Omega$ be a nonempty compact subset and let $T > 0$. There exist $\nu_0 > 0$ and $C > 0$ such that, for any $\nu \in (0, \nu_0)$ and any $\psi \in C^1(\mathbb{R}; \Omega)$ satisfying the corresponding smallness condition, the solution is defined on $[0, T] \times \mathbb{R}$ and satisfies the stated estimates. To prove this, we subtract (3.4) from (3.1); by Gronwall's inequality we deduce the first estimate. Differentiating (3.7) with respect to $x$ and observing that $\bar{u}$ is of class $C^2$, we can use the same Gronwall argument to infer (3.6) and that the maximal solution is defined on $[0, T]$.

Remark 3.1. We could use only a $C^1$ regularity assumption on $\bar{u}$, provided that this $\bar{u}$ has the particular structure given by Proposition 2.1; in that case the estimate (3.9) should be replaced by a weaker one (but one sufficient for the proof of Theorem 1.1).

Proof of Theorem 1.1: Again, we may assume the equilibrium $u^*$ to be 0; otherwise we can replace $u$ by $u - u^*$ as the unknown in the system (1.1).

As an application to gas dynamics, let $U^* := (\rho^*, 0, S^*)$, where $\rho^* > 0$ and $S^* \in \mathbb{R}$. Then it is easy to check that the hypothesis (H1) is satisfied. Therefore, we can apply Theorem 1.1 to obtain boundary controllability for (4.14) near the equilibrium $U^*$.
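As an illustration of the rarefaction curves of Definition 2.1, the following sketch numerically integrates $dU_j/ds = r_j(U_j(s))$, $U_j(0) = u_0$, for the shallow-water matrix $A(u)$ used in the earlier sketch; the normalization of the eigenvector field is a choice, not canonical:

import numpy as np
from scipy.integrate import solve_ivp

# Orbit of the eigenvector field r_2 for the shallow-water system u = (h, v):
# the eigenvector of [[v, h], [g, v]] for eigenvalue v + sqrt(g h) is
# proportional to (sqrt(h/g), 1).
g = 9.81

def r2(s, u):
    h, v = u
    vec = np.array([np.sqrt(h / g), 1.0])
    return vec / np.linalg.norm(vec)   # unit-speed parametrization (a choice)

u0 = np.array([1.0, 0.0])
sol = solve_ivp(r2, (0.0, 0.5), u0, dense_output=True)
print("U_2(0.5) =", sol.y[:, -1])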
We begin with a few notations. If $\varepsilon$ is small enough, then by (2.42) we can deduce that

$$S_k = 1, \qquad \mathrm{(A.11)}$$

that is, $z_k$ is defined on the whole time interval $[0, 1]$ for all $k \in \mathbb{N}$, and

$$\|z_k\|_{W^{1,\infty}(0,1;\mathbb{R}^n)} \le C\varepsilon, \quad \forall k \in \mathbb{N}. \qquad \mathrm{(A.12)}$$

By the Arzelà-Ascoli theorem, there exist a subsequence $\{z_{k_l}\}_{l=1}^{\infty} \subset \{z_k\}_{k=1}^{\infty}$ and $z_\infty \in C^0([0,1]; \mathbb{R}^n)$ such that $z_{k_l}$ converges to $z_\infty$ in $C^0([0,1]; \mathbb{R}^n)$ as $l$ tends to $\infty$. It is now straightforward to pass to the limit in (A.8) (in fact, the limit is unique). The conclusion follows.
2009-02-15T19:58:49.000Z
2009-02-15T00:00:00.000
{ "year": 2009, "sha1": "73c259083dae2afb77c46d727776b14dd829f247", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0902.2568", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "93ae0cc67659794372fb3fa9490e7cf91ce0ccbc", "s2fieldsofstudy": [ "Engineering", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
214069503
pes2o/s2orc
v3-fos-license
NAChRDB: A Web Resource of Structure-Function Annotations to Unravel the Allostery of Nicotinic Acetylcholine Receptors

Summary

Due to their paramount importance, near-ubiquitous presence, and complex nature, nicotinic acetylcholine receptors (nAChRs) have remained the focus of intensive research for over 50 years. The vast amount of knowledge accumulated on the topic has become extremely difficult to navigate. NAChRDB addresses this challenge by providing web-based, real-time access to curated residue-level functional annotations of neuromuscular nAChRs with interactive 3D visualization and sequence alignment. NAChRDB provides systematic access to experimental observations and predictions from computational studies reported in the literature or performed specifically to complement current knowledge, which allows new findings to be interpreted in a more holistic context, both from a structural and a functional perspective. NAChRDB aims to serve as an invaluable resource for identifying gaps in knowledge and for guiding discovery through structural and molecular biology experiments, especially when exploring the allosteric mechanisms underlying neuromuscular nAChR function and pathology.

Availability and Implementation

NAChRDB is freely available online, with a self-explanatory interface and useful tool tips (https://crocodile.ncbr.muni.cz/Apps/NAChRDB/). No installation or user registration is required. NAChRDB content is stored in .json format, queried using Python, and rendered in browser using Javascript and WebGL (LiteMol). NAChRDB is highly responsive and accessible through any modern Internet browser on desktop and mobile devices.

Contact

jkoca@ceitec.cz

Supplementary information

Supplementary data are available online.

Introduction

The nicotinic acetylcholine receptor (nAChR) is an evolutionarily ancient allosteric membrane protein mediating synaptic transmission (Changeux, 2018). This prototypic member of the pentameric ligand-gated ion-channel family is involved in many physiological and pathological processes including neurological diseases and addictions. NAChRs convert the chemical signal of acetylcholine into ion current by allowing sodium and calcium ions to enter the cell and potassium ions to exit the cell. This fast conversion of chemical signal to ion current relies heavily on allosteric regulation, which links the agonist-binding pocket in the extracellular domain to a gating mechanism lying approximately 60 Å away in a central channel spanning the entire length of the molecule. Hundreds of nAChR-binding compounds have been found to modulate nerve impulse transmission by regulating this coupling of agonist binding and channel gating (Gündisch and Eibl, 2011; Reyes-Parada and Iturriaga-Vasquez, 2016). Not surprisingly, nAChRs have remained the focus of intensive research for more than 50 years (Brown, 2019). The effort to understand structure-function relationships in nAChRs has resulted in huge amounts of structural and functional information. However, the cumulative knowledge on nAChRs is not systematically accessible, with thousands upon thousands of experimental and/or computational studies reporting data on different receptor types, using different residue numbering schemes, and different terminology.
Furthermore, because nAChRs are very large and difficult to crystallize, structural studies have focused on individual parts of the molecule, which makes it challenging to compile the current knowledge in an efficient manner and to promote further discoveries (Changeux, 2012; Halliwell, 2007). Finally, in the absence of comprehensive and unified structural annotation, it is almost impossible to identify gaps in knowledge or areas of controversy. We developed NAChRDB to address such limitations by providing web-based, real-time access to curated residue-level functional annotations of neuromuscular nAChRs, with interactive 3D visualization and sequence alignment.

Data Sources and Database Coverage

Functional annotations were collected from a semi-systematic literature review of Medline through PubMed (see Supplementary Materials). Upon manually scanning over 3000 papers on neuromuscular nAChRs, we selected 117 studies published between 1982 and 2019 that provide functional annotations of specific residues or parts of the protein. Currently, NAChRDB contains approximately 2000 unique annotation records describing the role of specific residues, as inferred from experimental observations or computational predictions reported for nAChRs of organisms from 6 genera. These studies, which cover 41 experimental or computational methods, were conducted at 92 institutions from 25 countries. Additionally, we conducted two computational studies to complement current knowledge, which resulted in the addition of 741 annotation records. NAChRDB thus provides a comprehensive view of structure-function relationships in the neuromuscular nAChR (Fig. 1). Whereas experiments involving electrophysiological measurements, mutagenesis, or radioactive ligands account for most studies covered by NAChRDB, many residue-level annotations were also obtained from studies based on less commonly used methods, such as docking or rate-equilibrium free energy relationships. The structures and sequences of relevant nAChRs were obtained from the Protein Data Bank and from UniProt, respectively. Sequence alignments were performed using Clustal W (Thompson et al., 1994). Residues potentially involved in charge transfer networks facilitating channel gating were predicted using a modified charge profile analysis (Ionescu et al., 2012). Briefly, the following steps were employed: (i) conformers were generated based on normal mode analysis using elNemo (Suhre and Sanejouand, 2004); (ii) atomic charges for each conformer were computed using AtomicChargeCalculator (Ionescu et al., 2015) with several different settings; (iii) residue charges were then compared across conformers; (iv) outliers were reported based on robust statistical analysis (see Supplementary Materials). Channel lining residues were predicted using ChannelsDB (Pravda et al., 2018) (see Supplementary Materials). All predictions were added to NAChRDB as annotation records, with full reference to the source of information. In-house scripts were employed for data processing, but all annotations were curated manually.
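Since the annotation records are kept as .json and queried with Python, a minimal sketch of such a query is given below; the field names ("residue", "subunit", "method", "finding") are illustrative assumptions, not NAChRDB's actual schema:

import json

# Two toy annotation records in the spirit of residue-level annotations
records = json.loads("""[
  {"residue": "ASP97", "subunit": "alpha", "method": "mutagenesis",
   "finding": "participates in channel gating"},
  {"residue": "SER248", "subunit": "alpha", "method": "photolabeling",
   "finding": "binding site for a channel-blocking antagonist"}
]""")

def find_annotations(records, residue):
    """Return all annotation records mentioning a given residue."""
    return [r for r in records if r["residue"] == residue]

for rec in find_annotations(records, "ASP97"):
    print(rec["subunit"], rec["residue"], "->", rec["finding"])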
The main interface (A) comprises: (1) a 3D viewer widget providing interactive visualization of nAChR 3D models, complete with (2) direct reporting of annotation records for a selected residue and (3) 3D representation settings; (4) a search section providing extensive, PubMed-like search functionality for querying structure, function, and literature-related fields; (5) a sequence alignment viewer that also enables direct reporting of annotation records available for a selected residue. The Results tab (B) summarizes the search results in an interactive table. All results can be downloaded in .csv and .json format. The Submit tab (C) allows users to report annotations, thus contributing to keeping NAChRDB up to date. NAChRDB content is stored in .json format, queried using Python, and rendered in the browser using Javascript and LiteMol, a WebGL-based technology for real-time in-browser rendering of large-scale macromolecular structures (Sehnal et al., 2017) (Fig. 2). NAChRDB is freely available online (https://crocodile.ncbr.muni.cz/Apps/NAChRDB/), with no requirement for installation or user registration, and with a self-explanatory interface and useful tool tips. NAChRDB is highly responsive and accessible through any modern Internet browser on desktop and mobile devices.

Case studies

• A simple text search in NAChRDB helps to quickly gauge the potential role of specific residues, as well as to identify opportunities for further study. For example, certain residues found to link different domains, bind certain drug molecules, contribute to channel gating, form N-glycosylation sites, or regulate agonist binding were never studied in the human neuromuscular nAChR (Fig. 3A).
• When conducting new computational studies, NAChRDB helps to put the computational predictions into context, as well as to identify gaps in current knowledge and thus opportunities for further study. For example, among the residues predicted to line the channel's inner surface in the neuromuscular Torpedo nAChR, 28.5% have been studied to date; among the residues predicted to contribute to protein-wide charge transfer networks in Torpedo nAChR that facilitate channel gating, 18.5% have been studied (Fig. 3B).
• Additionally, NAChRDB helps to identify potential areas of ambiguity or controversy in the current knowledge. For example, much remains to be clarified regarding the relationship between channel opening kinetics and voltage dependency, especially in nAChRs where key ASP residues are mutated to non-charged residues (Fig. 3C).

Armed with such information, it is easy to identify understudied areas and design further experiments that can cover the current gaps and provide a more meaningful contribution to the current knowledge. Many residues highlighted by state-of-the-art computational analyses have not been investigated to date, mainly because experimental studies are expensive and thus typically focus only on areas thought to be of utmost importance. Furthermore, due to reporting bias, negative results are scarce. Thus, in addition to serving as an easy reference for structure-function information, we hope that NAChRDB will help promote the reporting of both positive and negative results, so that the scientific community may form a comprehensive picture of the functioning of nAChRs.

Limitations

At present, the annotation records in NAChRDB refer mainly to the neuromuscular receptor. Extending NAChRDB to cover neuronal receptors is our current focus. Moreover, users can submit suggestions for new annotations directly via NAChRDB; we will review these suggestions manually and update NAChRDB accordingly. We are also working on a text mining tool that will enable us to automatically expand the coverage of NAChRDB as soon as new information becomes available.
Figure 3 - (A) It is very easy to identify gaps in knowledge and design future investigations across different species. Color coding on the 3D model: dark green - αγVAL46 and αδVAL46 were suggested to serve as a key structural link between the extracellular and transmembrane domains of the neuromuscular nAChR in Torpedo marmorata (Miyazawa et al., 2003); teal and pink - αγSER248, αδSER248, βSER254, and δSER262 form a binding site for a channel-blocking non-competitive antagonist of the neuromuscular nAChR in T. marmorata (Hucho et al., 1986); orange - αγTYR127, αδTYR127, αγASP97, αδASP97, αγTHR244, αδTHR244, αγLYS242, and αδLYS242 participate in the channel gating process of the neuromuscular nAChR in mice (Purohit and Auerbach, 2007; Zhang and Karlin, 1998); green - mutations of γTRP55 and δTRP57 change the agonist affinity of the neuromuscular nAChR in mice (Nayak et al., 2016); red - mutations of δPRO123 disable one ACh binding site in the murine neuromuscular nAChR (Gupta et al., 2013); cyan - γASN68, γASN141, δASN70, δASN143, and δASN208 may form N-glycosylation sites in the neuromuscular nAChR of T. californica (Chiara and Cohen, 1997; Strecker et al., 1994); blue - αγASP195, αδASP195, γLEU77, αγPRO194, αδPRO194, αγCYS193, αδCYS193, γLYS10, αγTRP86, αδTRP86, αγPRO88, αδPRO88, γTYR104, γILE80, and δTYR212 may contribute to physostigmine binding sites in the neuromuscular nAChR of T. californica (Hamouda et al., 2013); pale yellow and pink - δTHR274, δPHE232, δCYS236, δARG277, αγSER248, αδSER248, βSER254, βVAL261, and δVAL269 may form propofol binding sites in the neuromuscular nAChR of T. californica (Jayakar et al., 2013). The residue labels include the subunit(s), residue name, and residue number. Residue numbering is given according to the neuromuscular nAChR sequence from the respective organism (see Supplementary Materials). (B) Upon conducting computational studies to identify channel lining residues (left) and charge transfer networks (right), one can easily examine the predictions in the context of current knowledge (green), as well as identify areas of further study (blue). (C) Mutation of αASP200 (magenta) to GLN in the mouse neuromuscular nAChR expressed in human kidney cells was reported to have a dramatic effect on channel opening rate (Akk et al., 1996) but a non-significant effect on slope conductance or voltage dependency (Dworakowska et al., 2018). Some mutations of αASP97 (yellow) in the mouse neuromuscular nAChR were shown to dramatically change gating kinetics (Chakrapani and Auerbach, 2005; Purohit and Auerbach, 2007), whereas other mutations were found to have only a marginal effect (Chakrapani et al., 2003). The 3D model of T. marmorata nAChR (PDB ID: 4AQ9) (Unwin and Fujiyoshi, 2012) was used for graphical representation in all panels.

Conclusions

NAChRDB helps to quickly summarize the knowledge about specific parts of neuromuscular nAChRs, as well as to detect conflicting results reported for the same or homologous residues, and even to identify the parts of the protein not studied to date. NAChRDB provides systematic access to experimental observations and predictions from computational studies reported in the literature or performed specifically to complement current knowledge. New findings can thus be interpreted in a more holistic context, both from a structural and a functional perspective. Ultimately, NAChRDB aims to serve as an invaluable resource for guiding discovery through structural and molecular biology experiments, especially when exploring the allosteric mechanisms underlying nAChR function and pathology.
In addition, NAChRDB can serve as a key starting point for the unification of state-of-the-art knowledge in the broad field of pentameric ligand-gated ion channels.

Funding

This research was supported by the Ministry of Education, Youth and Sports of the Czech Republic (project CEITEC 2020: LQ1601).

Conflict of Interest: none declared.
2020-01-16T09:09:17.721Z
2020-01-09T00:00:00.000
{ "year": 2020, "sha1": "51bbda27345826e7494c5b115f74360012137b32", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc8444218?pdf=render", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "ebfc39275b6ba97e98ba06ee53d3ee19fdffdc52", "s2fieldsofstudy": [ "Biology", "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Computer Science" ] }
236303673
pes2o/s2orc
v3-fos-license
Moderate Consumption of Healthy Nordic Foods is Associated with Reduced Mortality in the Norwegian Women and Cancer Study: a Prospective Cohort Study

Background

High adherence to healthy Nordic diets may enhance longevity. However, optimal intake levels of healthy Nordic foods are not known. Hence, in a large prospective cohort of women in Norway, we examined all-cause mortality in relation to intake of five food groups that are part of a healthy Nordic diet: Nordic fruits and vegetables, fatty fish, lean fish, wholegrain products, and low-fat dairy products.

Methods

A total of 87 899 women who completed a food frequency questionnaire between 1996 and 2004 were followed for mortality until the end of 2018. Cox proportional hazards regression models were used to examine the associations between consumption of the Nordic food groups and all-cause mortality. The food groups were examined as categorical exposures, and all but wholegrain products also as continuous exposures in restricted cubic spline models.

Results

A total of 9 168 women died during the 20-year follow-up. Nordic fruits and vegetables, fatty fish and low-fat dairy products were not linearly associated with mortality (p for non-linearity < 0.05). The optimal intake levels, and the hazard ratios (HR) and 95% confidence intervals (CI) associated with these intakes, were approximately 200 grams/day of Nordic fruits and vegetables (HR 0.84 (95% CI: 0.77-0.90)), 10-20 grams/day of fatty fish (HR 0.98 (95% CI: 0.92-1.03)) and 200 grams/day of low-fat dairy products (HR 0.94 (95% CI: 0.89-0.99)) compared to no consumption. High consumption of fatty fish (≥ 70 grams/day) was associated with increased mortality. Intake of wholegrain products of > 120 grams/day was associated with lower mortality (HR 0.92 (95% CI: 0.85-0.99)) compared to < 60 grams per day. Lean fish consumption was not associated with mortality. After stratification by smoking status, the observed association for Nordic fruits and vegetables was only significant in ever smokers, with the optimal intake level at 250 grams/day (HR 0.78 (95% CI: 0.71-0.86)).

Information on vital status and cancer incidence was obtained by linkage to the National Population Registry and the Cancer Registry of Norway, using the unique 11-digit identity number assigned to all Norwegian citizens.
Participants completed a mailed self-administered questionnaire including questions about anthropometric, sociodemographic, reproductive and lifestyle factors. Most of the questionnaires included four pages of food frequency questions. The questionnaire used has previously been published elsewhere (17). Follow-up questionnaires were mailed approximately every sixth year after recruitment. A study on the external validity of NOWAC found no major source of selection bias (18).

Study participants

The baseline for this paper is partly the first NOWAC mailing from 1996 to 1997 and 2003 to 2004 (response rates of 57% and 48%, respectively), and partly the second mailing (follow-up questionnaire) from 1998 to 1999 to those enrolled in 1991 to 1992 who had not been given the food frequency questions at enrolment (response rate of 81%). In total, 101 316 women aged 41-76 at baseline were considered eligible for inclusion. Women who had emigrated (n = 3) and women with no follow-up (n = 13) were excluded. We further excluded women with implausible daily energy intake (< 2 500 kJ (n = 1 033) or > 15 000 kJ (n = 141)), and women with missing information on the following variables: body mass index (BMI) (n = 2 272), physical activity (n = 8 548) and smoking habits (n = 1 407), leaving a total of 87 899 women for the present analysis.

Dietary assessment

Diet was assessed using a semi-quantitative food frequency questionnaire (FFQ). The FFQ was designed to measure the typical diet during the past year, with special emphasis on fish consumption. The response options were given with four to seven frequency categories ranging from never/seldom to six or more times per week. Questions about portion size were included for some food items as natural units, such as number of carrots, or household units, such as tablespoons. The FFQ used in NOWAC has been validated in several studies. Hjartåker et al. reported that the FFQ's ability, when compared to information from repeated 24-hour dietary recalls, was good for ranking women for foods eaten frequently and fairly good for macronutrients (19). Another validation study, which compared the relation between fish consumption registered by the FFQ and fatty acid composition in serum phospholipids, concluded that habitual intake of fish in high-consuming populations was reflected in serum phospholipids (20). In a study of the reproducibility of the FFQ, there were some indications of seasonal reporting bias, but the overall results were in line with what has been found in studies on similar self-administered FFQs developed to assess habitual diet (21). The Norwegian Weight and Measurement Table with standardised portion sizes and weights was used to convert the consumption of food items to grams (22), and information about the nutrient content in foods was obtained from the Norwegian Food Composition Database (23). The calculations of daily intake of food items, energy and nutrients were made using a statistical program for SAS (SAS Institute Inc., Cary, NC, USA) developed at the Department of Community Medicine, University of Tromsø, for the NOWAC cohort. Missing frequency values were treated as no consumption, and missing portion sizes were set to the smallest portion size asked for.

Exposures

We included five Nordic food groups that can be extrapolated to our dietary guidelines in the analysis. The questions in the FFQ that formed the basis for the construction of the food groups have been described previously (24).
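A minimal sketch of the frequency-times-portion conversion and the missing-value rules described above; the category-to-frequency mapping and the portion weights are illustrative assumptions, not the actual NOWAC coding:

from typing import Optional

TIMES_PER_WEEK = {"never/seldom": 0.0, "1/week": 1.0, "2-3/week": 2.5, "6+/week": 6.0}

def grams_per_day(frequency: Optional[str], portion_grams: Optional[float],
                  smallest_portion: float) -> float:
    if frequency is None:        # missing frequency treated as no consumption
        return 0.0
    if portion_grams is None:    # missing portion set to the smallest portion asked for
        portion_grams = smallest_portion
    return TIMES_PER_WEEK[frequency] * portion_grams / 7.0

print(round(grams_per_day("2-3/week", 150.0, 100.0), 1))  # 53.6 grams/day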
Based on the criteria set by Olsen et al., we included fruits and vegetables produced in the Nordic climate without the use of external energy that were available from the FFQ: broccoli/cauliflower, cabbage, carrots, swede, mixed vegetables (commonly a frozen mix of carrots, broccoli and cauliflower) and apples/pears (7). Consumption of Nordic fruits and vegetables was divided into four categories (grams/day): < 100, 100-199, 200-299, ≥ 300. We analysed lean and fatty fish separately because they are specified in our dietary guidelines and are sources of specific essential nutrients, such as vitamin D and omega-3 fatty acids from fatty fish, and iodine from lean fish (14). Fatty fish was classified as fish with ≥ 4% fat in the meat (salmon, trout, herring, mackerel), and was divided into four categories (grams/day): < 5, 5-14, 15-29, ≥ 30. Lean fish was classified as fish containing < 4% fat in the meat (cod, haddock, plaice), excluding products like fish cakes, fish balls, fish spread and stew, and was divided into four categories (grams/day): < 15, 15-29, 30-44, ≥ 45. Low-fat dairy products comprised semi-skimmed milk (≤ 1.5% fat), skimmed milk (0.1% fat) and yoghurt (≤ 3.4% fat). We chose to include this food group as it is part of the Baltic Sea Diet Score, and because it is the main source of iodine in the Norwegian diet. Consumption was categorised into four categories: non-consumers, ≤ 200, 201-400, > 400 grams/day. Wholegrain products included wholegrain bread and breakfast cereals and was categorised into four categories (grams/day): < 60, 60-119, 120-179, ≥ 180.

Confounders

Covariates included in the analysis were chosen based on the literature and selected with the use of Directed Acyclic Graphs (DAGs). DAGs are a tool that can help in selecting confounding factors to include in the statistical analysis when the purpose is to study causal relationships (25). The selection of confounding factors through DAGs is based on the assumption of causation between the variables included in the DAG; hence, wrong assumptions could lead to misspecified models. The strength is, however, that possible colliders are also identified in a DAG, reducing the risk of introducing bias in the statistical models. The following confounders were included in the analysis: physical activity, body mass index (BMI), smoking status and intake of energy, alcohol and processed meat.

Physical activity

Physical activity level was included based on self-report on a ten-point scale estimating physical activity at home, at work, while exercising and while walking. A validation study showed that self-reporting was able to rank women according to their level of physical activity (26). Physical activity was categorised as low (1-4 points), medium (5-6 points) or high (7-10 points).

BMI

BMI was calculated based on self-reported height and weight (kg/m²), and was categorised in four categories: < 20, 20-24.9, 25-29.9, ≥ 30 kg/m². Self-reported weight and height have been found to provide a valid ranking of BMI in NOWAC (27).

Smoking status

The smoking variable was computed by combining information on smoking status (never, former and current) with age at smoking initiation for those who had ever smoked. For current smokers who started smoking before the age of 20, we also included information about pack-years (number of cigarettes smoked per day, divided by 20, multiplied by number of years smoked). Twenty or more pack-years was defined as heavy smoking, and 0-19 pack-years was defined as moderate.
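The pack-years computation and the heavy/moderate cut-off translate directly into code; a small sketch:

def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """Pack-years = (cigarettes per day / 20) * years smoked."""
    return cigarettes_per_day / 20.0 * years_smoked

def smoking_intensity(py: float) -> str:
    """>= 20 pack-years is heavy smoking; 0-19 is moderate, as defined above."""
    return "heavy" if py >= 20 else "moderate"

py = pack_years(15, 30)                 # 15 cigarettes/day for 30 years
print(py, smoking_intensity(py))        # 22.5 heavy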
Further adjustments for pack-years did not change the confounding effect of smoking. Smoking exposure was then divided into six categories: never smoker, current heavy smoker early starter (age at start of smoking < 20), current moderate smoker early starter, current smoker late starter (age at start of smoking ≥ 20), former smoker early starter, former smoker late starter.

Intake of energy, alcohol, and processed meat

The calculations of daily intake of nutrients, food items and energy have been described in the dietary assessment segment above. More specifically, the calculated total energy intake was based on approximately 85 food frequency questions that cover the habitual diet of the women. Energy intake was computed as a continuous variable (kJ per day), excluding energy from alcohol. Intake of alcohol was calculated based on three questions about intake of alcoholic beverages and was computed as a categorical variable to obtain a group of non-consumers and categories representing lower and higher intake (grams/day): non-consumers, 0-5, > 5.

Outcome

The women were followed from the return of the FFQ until death or censoring, which was the date of emigration or end of follow-up on 31 December 2018.

Statistical methods

Population characteristics and dietary factors by healthy Nordic food group categories were analysed using χ² tests for categorical covariates and Kruskal-Wallis tests for continuous covariates. The distribution of covariates is presented across consumption categories of the Nordic food groups as mean (and standard deviation) for age, as median intake (and 10th-90th percentile) for energy, and as percentages (%) for the covariates expressed categorically. Spearman's rank-order correlation was used to test the association between the Nordic food groups and is presented as the correlation coefficient (r_s). Cox proportional hazards regression models with age as the underlying time scale were used to examine the associations between consumption of the five Nordic food groups and all-cause mortality. Estimates from the Cox regression models are presented as age-adjusted and multivariable-adjusted estimates. The Nordic food groups were mutually adjusted for in the multivariable-adjusted model, and the results were also adjusted for physical activity, BMI group, smoking status, alcohol intake, and estimated intake of energy and processed meat. Both models examined the Nordic food groups expressed as categorical exposures, and four of the Nordic food groups were further examined in the multivariable-adjusted model as continuous exposures in restricted cubic splines. The wholegrain products variable, which is based on only two FFQ frequency questions, was not examined in restricted cubic splines, as the distribution of values could not be approximated to a continuous variable. The number of knots in the restricted cubic splines was determined by testing and comparing models with three, four and five knots using the Akaike and Bayesian information criteria. This test was chosen because, unlike the likelihood-ratio test and Wald testing procedures, the models do not have to be nested to compare how well the different models fit the data. Models with the smallest AIC value were judged to fit the data better, resulting in three knots at fixed percentiles (10, 50, 90) of the distribution (29). The p-value for non-linearity in the restricted cubic spline analysis was calculated by performing Wald testing, which tests the null hypothesis that the coefficient of the second spline term is equal to zero.
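To illustrate the spline construction and the Wald test for non-linearity described above, here is a sketch using synthetic data; the restricted cubic spline basis follows the standard three-knot formulation, the survival data are simulated, and age as the underlying time scale (late entry) is omitted for brevity:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
intake = rng.gamma(shape=2.0, scale=100.0, size=n)   # synthetic grams/day

# Three knots at the 10th, 50th and 90th percentiles give exactly one
# non-linear basis term in the restricted cubic spline formulation.
t1, t2, t3 = np.percentile(intake, [10, 50, 90])
cube = lambda z: np.clip(z, 0.0, None) ** 3
rcs = (cube(intake - t1)
       - cube(intake - t2) * (t3 - t1) / (t3 - t2)
       + cube(intake - t3) * (t2 - t1) / (t3 - t2)) / (t3 - t1) ** 2

df = pd.DataFrame({
    "time": rng.exponential(scale=20.0, size=n),     # synthetic follow-up (years)
    "died": rng.integers(0, 2, size=n),              # synthetic event indicator
    "intake": intake,
    "intake_rcs": rcs,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="died")
# Wald p-value for the second (spline) term = test of non-linearity
print(cph.summary.loc["intake_rcs", ["coef", "p"]])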
Proportional-hazards assumptions were tested with a Schoenfeld residuals test. All models were stratified by subcohorts (n = 5), which were constructed by grouping together the FFQs that are most similar regarding the food frequency questions included and which were completed closest together in time, as the data were collected over a period of almost ten years. We explored potential interactions between the Nordic food groups and smoking habits by adding a product term in the mutually adjusted models. If a statistically significant interaction effect was observed, we performed analyses stratified by never and ever smokers.

Results

During a median of 20 (range 0.2-23) years of follow-up, 9 168 women died, mainly from cancer (n = 4 719) and cardiovascular diseases (n = 1 668). Table 1 shows the total population distribution and number of deaths by consumption categories of the Nordic food groups, and the median intake within the categories. The correlation matrix (Table 2) shows that lean and fatty fish were the most strongly correlated of the five food groups, but the correlation was still quite low (r_s = 0.21). Table 3 gives the distribution of included covariates across consumption categories of the Nordic food groups. The oldest women were in the high-consumption group of lean fish, and the greatest age span across intake categories was within this food group (ranging from 51.0 years in low consumers to 53.5 in high consumers). Similar tendencies were seen within the fatty fish food group, where mean age ranged from 51.0 years in low consumers to 53.2 years in high consumers. Within the other Nordic food groups, the age differences across intake categories were minimal. We see a general tendency for women in the high-consuming categories of the Nordic food groups to be more physically active and more likely to be never smokers, except among high consumers of lean and fatty fish. Across all food groups, energy intake was higher in the higher-consumption categories. The proportions of women reporting overweight (BMI 25.0-29.9 kg/m²) and obesity (BMI ≥ 30 kg/m²) were higher among high consumers of Nordic fruits and vegetables and lean fish, whereas the opposite was observed within the wholegrain products group. Due to the high number of participants, even marginal differences in the distribution of covariates across consumption categories of the healthy Nordic food groups were statistically significant (p < 0.05). The restricted cubic spline regression showed a significant J-shaped association for the food groups Nordic fruits and vegetables (Fig. 1A), low-fat dairy products (Fig. 1B) and fatty fish (Fig. 1C). For Nordic fruits and vegetables, the nadir (the intake level associated with the lowest mortality) was observed at 200 grams/day (HR 0.84 (95% CI: 0.77-0.90)) compared to no consumption (Fig. 1A). We observed a significant interaction between smoking status and Nordic fruits and vegetables, and thus stratified analyses are also presented. After stratification by never/ever smokers, the observed association was only significant in ever smokers, with the nadir at 250 grams/day (HR 0.78 (95% CI: 0.71-0.86)) (Fig. 2). Furthermore, consumption of Nordic fruits and vegetables > 500 grams/day increased mortality among never smokers, but there were only 33 deaths registered at this consumption level. For low-fat dairy products, the nadir was observed at 200 grams/day (HR 0.94 (95% CI: 0.89-0.99)) compared to no consumption, and high consumption (> 800 grams/day) increased mortality (Fig. 1B).
For fatty fish the nadir was observed at an intake level of 10-20 grams per day (20 grams/day: HR 0.98; 95% CI: 0.92-1.03), but this was not significantly better than not consuming fatty fish at all (Fig. 1C). Excessive consumption, on the other hand, was associated with increased mortality from 70 grams/day (HR 1.09; 95% CI: 1.01-1.17). Consumption of lean fish was neutral in relation to mortality (Fig. 1D). Table 4 gives the results when intake is categorised into consumption groups. These confirmed the findings presented in Fig. 1. In addition, we note that intake of wholegrain products, both at 120-179 grams/day and at ≥ 180 grams/day compared to < 60 grams per day, was associated with lower mortality (HR 0.92; 95% CI: 0.85-0.99) (Table 4). In the stratified analysis, the median consumption of Nordic fruits and vegetables was 173 grams/day in never smokers and 159 grams/day in ever smokers (Table 5). An intake of 100-199 grams/day compared to < 100 grams/day was associated with reduced mortality among never smokers, of similar strength to that in the unstratified analysis (HR 0.90; 95% CI: 0.81-0.99). For ever smokers, however, intake above 100 grams/day was beneficial (Table 6). To minimise the chance of reverse causation we performed a sensitivity analysis, starting follow-up two years after enrolment. The results did not change (Supplementary Figure 1), except that the association between low-fat dairy products and reduced mortality was slightly attenuated in the restricted cubic spline regression model due to a wider confidence interval, believed to be caused by the loss of cases (Supplementary Figure 1B). As the findings for Nordic fruits and vegetables could in part reflect the influence of the consumption of other fruits and vegetables (24), we made further adjustments including other fruits and vegetables in the multivariable-adjusted model, but this did not influence the results (Supplementary Figure 2).

Discussion
Moderate consumption of Nordic fruits and vegetables and of low-fat dairy products was associated with reduced all-cause mortality, while excessive intake of low-fat dairy products was associated with increased mortality during follow-up. Intake of wholegrain products estimated to be approximately in line with the current recommendation for wholegrains of 70-90 grams/day was associated with reduced mortality, as was higher consumption. Consumption of both lean and fatty fish in line with dietary guidelines was within a non-significant beneficial range, but excessive consumption of fatty fish was associated with increased mortality during follow-up. In contrast, lean fish consumption had no impact on total mortality at any level. Thus, there was a J-shaped trend for Nordic fruits and vegetables, fatty fish and low-fat dairy products in relation to mortality, implying that risk changes might not be linear with increasing intake of some healthy Nordic food groups. The maximum benefit of consuming Nordic fruits and vegetables was achieved at around 200 grams/day, which is below the recommended intake of all fruits and vegetables of five servings per day (30)(31)(32)(33). Non-linear inverse associations of fruit and vegetable intake with total mortality have recently been shown in two meta-analyses (34,35). While the maximum benefit was observed at higher consumption levels in both studies, Miller and colleagues also found in the PURE study that optimal health benefits of fruit and vegetable consumption could be achieved at a more modest intake level than currently recommended (around three to four servings per day) (36).
Potentially, subgroups of fruit and vegetable consumption such as the selected Nordic varieties have distinct health effects due to variations in nutritional properties between different fruits and vegetables (33), but other underlying dietary factors could also play a role in the variation in dose-response relationships across populations. The inverse association between consumption of Nordic fruits and vegetables and mortality seemed stronger in former and current smokers than in never smokers. Also, the optimal consumption level was estimated to be higher in ever smokers than in never smokers. Similar tendencies were reported in the European Prospective Investigation into Cancer and Nutrition, which also included a subsample of women from NOWAC (37). In addition, a meta-analysis of prospective cohort studies on the association between consumption of fruits and vegetables and risk of lung cancer found stronger associations with lung cancer among smokers. Potentially, antioxidant properties of fruits and vegetables are protective against the increased oxidative stress caused by smoking (38). The observed protective effect of wholegrain products on mortality in the present analysis is supported by meta-analyses of prospective cohort studies including populations from the US, Europe and Asia (39,40). The present results showed no further benefit of consuming > 180 grams of wholegrain products per day. In the meta-analysis by Aune et al., reductions in risk for whole grains were observed up to an intake of 225 grams per day, but they found a non-linear association with all-cause mortality and a steeper reduction in risk at lower intake levels (40). Compared to our results, a study of Norwegian wholegrain eaters by Jacobs et al., included in the meta-analyses, found an inverse association between a calculated wholegrain consumption score and mortality, with the highest score being most beneficial. This score was calculated from the number of slices of bread multiplied by percentages of wholegrain and was thus based on more detailed information on wholegrain consumption than we had access to (41). The impact of dairy intake on mortality has been extensively studied, with contradictory results (42,43). The divergence between studies could be due to variation in the types of dairy products investigated (i.e., total dairy, or specific categories of dairy such as milk, yoghurt, cheese, and low-fat/high-fat dairy), but the quality of the underlying diet in different populations could also affect the association between dairy consumption and mortality. For example, in a population with little access to complete proteins and essential nutrients from animal sources other than dairy, consumption could be associated with all-cause mortality differently than in a population with access to such nutrients from multiple food sources. Still, even when comparing results on low-fat milk consumption as a specific dairy category and mortality in Nordic populations, one study finds increased mortality (44) while another finds no association (45). It should be noted that the fat content of yoghurt, which was part of the low-fat dairy products group in the present study, could be up to 3.4%, and is therefore not considered low-fat within the yoghurt subcategory of dairy products. Hence, our results are not directly comparable with these studies. Our analysis showed a non-linear association between low-fat dairy and mortality, much in line with what Ding et al. found for total dairy consumption in three prospective cohort studies in women and men (46).
As in the present analysis, several large cohort studies have not been able to show any reduced mortality linked to frequent fish consumption (47,48). In line with our results, Engeset et al. found a non-linear trend between fatty fish consumption and mortality in the European Prospective Investigation into Cancer and Nutrition cohort, which included a part of our sample (48). Also, a study of fish consumption and mortality in a cohort of Swedish men and women found a U-shaped association between consumption of fish and all-cause mortality, which was more pronounced in women (49). Further, when they considered lean and fatty fish separately, they found no association between consumption of lean fish and mortality, but a markedly more pronounced association between fatty fish consumption and mortality. We observed that consumption up to the recommended 200 grams of fatty fish per week (29 grams/day) was within a non-significant beneficial range, but when intake reached 70 grams/day there was a significantly increased mortality. In the Swedish cohort, higher mortality was reported among women who consumed 80 grams of fish per day compared to the median intake level (49). Even though fish is a good source of essential nutrients, it is also a source of environmental contaminants such as dioxins, which are classified as carcinogens and accumulate in adipose tissue (14,50,51). While lean fish store fat in the liver, fatty fish store it in the fillet itself, which as such contains more of these substances than lean fish. One can speculate whether this is related to the observed increase in mortality with high consumption of fatty fish but not of lean fish. Nevertheless, our findings do not support the part of the dietary guideline underlining that at least 200 grams a week should be fatty fish, as this conveys the impression that consuming more than this is better (14). The search for optimal intake levels of foods and an ideal composition of the diet should be emphasised in studies on sustainable healthy regional diets, both for health and to reduce the burden of food production on the environment. However, establishing optimal intake levels of foods for health is not straightforward, given the limitations inherent in FFQs in giving precise estimates of actual food intake, and given that the health effect depends on the underlying dietary pattern. For example, even though we found that approximately 200 grams/day of Nordic fruits and vegetables and 120-179 grams/day of wholegrain products (in models mutually adjusted for healthy Nordic food groups and energy intake) are optimal for longevity in this study, substituting processed meat with an increased intake of these foods and, for example, lean fish (which was neutrally associated with mortality) could be beneficial for both health and the environment. Nevertheless, public health messages advocating an "increased" intake of certain foods without pointing to specific intake levels give the impression that the more we eat of these foods, the better our health will be, and this might not be the case. To identify the optimal food composition of a healthy Nordic diet from a public health perspective, substitution analyses are highly relevant for further research.

Strengths and limitations
The strengths of this study include a large sample size, a high number of deaths and the long follow-up (median 20 years), providing sufficient statistical power in the analysis. Linkage to registries is a strength, as all deaths are confirmed.
Further, the risk of sampling bias is considered low due to the selection of women through the National Registry. Another strength is that a validated questionnaire was used to assess food intake and covariates (19-21, 26, 27). The study is, however, limited by having only one assessment of diet, as dietary habits have probably changed during follow-up. Recalling the habitual diet with the use of an FFQ could lead to recall error and misclassification of dietary exposures, but this is expected to be non-differential. In addition, the FFQ was not designed to measure all foods that are part of a healthy Nordic diet and hence does not capture all relevant food components, such as wild berries and vegetables like kale, or distinguish between specific varieties of Nordic wholegrains such as rye and barley. Furthermore, precise assessment of dietary exposure is difficult, and measurement errors are inevitable in nutritional epidemiology. Also, even though we adjusted for possible confounding factors that were unevenly distributed across intake categories of the Nordic food groups, residual confounding due to imprecise assessment of these factors, as well as unmeasured factors, is likely. In particular, these results must be interpreted with caution, as the moderate consumers are probably more representative of what most people eat, while both low and high consumers can differ in many ways (e.g., extreme dieters, vegans, people with allergies).

Conclusion
Moderate consumption of foods that are part of a healthy Nordic diet is either significantly better for longevity or does not compromise it, compared to low or high intake, among middle-aged and older women. Consumption of Nordic fruits and vegetables was most beneficial in women who were either current or former smokers, and the optimal intake level seemed to be higher among these women than among never smokers. These findings imply that dietary interventions might be especially important for people with higher mortality due to smoking. Moderate intake of many food groups facilitates a varied diet, which is also part of the dietary guidelines, and this can be good for both health and the environment. Our results indicate a need to assess non-linear as well as linear associations between food intake and health outcomes.
Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets

Objective: Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets.
Materials and Methods: The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders.
Results: Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10⁻²⁰) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation").
Discussion: Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability.
Conclusion: Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support.

INTRODUCTION
… trials cannot keep pace with the perpetually expanding breadth of clinical questions, with only ~11% of guideline recommendations backed by high-quality evidence. 4 Clinicians are left to synthesize vast streams of information for each individual patient in the context of a medical knowledge base that is both incomplete and yet progressively expanding beyond the cognitive capacity of any individual. 5,6 Medical practice is thus routinely driven by individual expert opinion and anecdotal experience. The meaningful use era of electronic health records (EHRs) 7 presents a potential learning health system solution. [8][9][10][11][12] EHRs generate massive repositories of real-world clinical data that represent the collective experience and wisdom of the broad community of practitioners. Automated clinical summarization mechanisms are essential to organize such a large body of data that would otherwise be impractical to manually categorize and interpret. 13,14 Applied to clinical orders (e.g., labs, medications, imaging), such methods could answer "grand challenges" in clinical decision support 15 to automatically learn decision support content from clinical data sources. The current standard for executable clinical decision support includes human-authored order sets that collect related orders around common processes (e.g., admission and transfusion) or scenarios (e.g., stroke and sepsis).
Computerized provider order entry 16 typically occurs on an "a la carte" basis, where clinicians search for and enter individual computer orders to trigger subsequent clinical actions (e.g., pharmacy dispensation and nurse administration of a medication, or phlebotomy collection and laboratory analysis of blood tests). Clinician memory and intuition can be error prone when making these ordering decisions; thus, health system committees produce order set templates as a common mechanism to distribute standard practices and knowledge (in paper and electronic forms). Clinicians can then search by keyword for common scenarios (e.g., "pneumonia") and hope they find a preconstructed order set that includes relevant orders (e.g., blood cultures, antibiotics, chest X-rays). [17][18][19] While these can already reinforce consistency with best practices, 20-25 automated methods are necessary to achieve scalability beyond what can be conventionally produced through manual definition of clinical content 1 intervention at a time. 26

Probabilistic topic modeling
Here we seek to algorithmically learn the thematic structure of clinical data with an application toward anticipating clinical decisions. Unlike a top-down rule-based approach to isolate preconceived clinical concepts from EHRs, 27 this is more consistent with bottom-up identification of patterns from the raw clinical data. 28 Specifically, we develop a latent Dirichlet allocation (LDA) probabilistic topic model [29][30][31][32][33] to infer the underlying "topics" of hospital admissions, which can then inform patient-specific clinical orders. Most prior work in topic modeling focuses on the organization of text documents, ranging from newspaper and scientific articles 34 to clinical discharge summaries. 35 More recent work has modeled laboratory results 36 and claims data 37 or used similar low-dimensional representations of heterogeneous clinical data sources for the unsupervised determination of clinical concepts. [38][39][40] Here we focus on learning patterns of clinical orders, as these interventions are the concrete representation of a clinician's decision making, regardless of what may (or may not) be documented in narrative clinical notes and diagnosis codes. In the analogous text analysis context, probabilistic topic modeling conceptualizes documents as collections of words derived from underlying thematic topics, each of which defines a probability distribution over topic-relevant words. For example, we may expect our referenced article on the "Scientific Evidence Underlying the American College of Cardiology (ACC)/American Heart Association (AHA)" 2 to be about the abstract topics of "cardiology" and "clinical practice guidelines," weighted by the respective conditional probabilities P(Topic_Cardiology | Document_EvidenceACC/AHA) and P(Topic_Guidelines | Document_EvidenceACC/AHA). Words we may expect to be prominently associated with the "cardiology" topic would include heart, valve, angina, pacemaker, and aspirin, while the "clinical practice guideline" topic may be associated with words like evidence, recommendation, trials, and meta-analysis. The relative prevalence of each word in each topic is defined by the conditional probabilities P(Word_i | Topic_j) in a categorical probability distribution.
With the article composed as a weighted mixture of multiple topics, the document contents are expected to be generated from a proportional mixture of the words associated with each topic, as determined by the conditional probability:

P(Word_i | Document_k) = Σ_j P(Word_i | Topic_j) × P(Topic_j | Document_k)

In practice, we are not actually interested in generating new documents from predefined word and topic distributions. Instead, we wish to infer the underlying topic and word distributions that generated a collection of existing documents. Such a body of documents can be represented as a word-document matrix where each document is a vector containing the frequencies of every possible word (Figure 1). Topic modeling methods factor this matrix based on the underlying latent topic structure that links associated words to associated documents. A precise solution to this inverted inference is not generally tractable, requiring iterative optimization solutions such as variational Bayes approximations 29 or Gibbs sampling. 31 This is closely related to other dimensionality-reduction techniques that provide low-rank data approximations, [41][42][43] with the probabilistic LDA framework interpreting the interrelated structure as the conditional probabilities P(Word_i | Topic_j) and P(Topic_j | Document_k). Once this latent topic structure is learned, it provides a convenient, efficient, and largely interpretable means of information retrieval, classification, and exploration of document data.

Clinical data analogy
For our clinical context, we draw analogies between words in a document and clinical items occurring for a patient. The key clinical items of interest here are clinical orders, but other structured elements include patient demographics, laboratory results, diagnosis codes, and treatment team assignments. Modeling patient data in this way allows us to learn topic models that relate patients to their clinical data. A patient receiving care for multiple complex conditions could then have his or her data separated out into multiple component dimensions (i.e., topics), as an "informative abstractive" approach to clinical summarization. 14 For example, we might use this to describe a patient hospital admission as being "50% about a heart failure exacerbation, 30% about pneumonia, and 20% about mechanical ventilation protocols." Prior work has accomplished similar goals of unsupervised abstraction of latent factors out of clinical records using varying methods. [38][39][40] Based on the distribution of clinical orders associated within such low-dimensional representations, we aim to impute additional clinical orders for decision support.

OBJECTIVE
Our objective is to evaluate the current real-world standard of care in terms of preauthored hospital order set usage during the first 24 hours of inpatient hospitalizations, build probabilistic topic model representations of clinical data to summarize the principal axes of clinical care underlying those same first 24 hours, and compare the ability of these models to anticipate relevant clinical orders as compared to existing order sets.

METHODS
We extracted deidentified patient data from the (Epic) EHR for all inpatient hospitalizations at Stanford University Hospital in 2013 via the Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse. 44 The structured data covers patient encounters from their initial (emergency room) presentation until hospital discharge. The dataset includes more than 20 000 patients with > 6.7 million instances of more than 23 000 distinct clinical items.
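To make the generative mixture described above concrete, here is a tiny numerical sketch (all probabilities made up for illustration) showing how the word-document distribution arises as a product of the two factored topic matrices:

```python
import numpy as np

# P(Word_i | Topic_j): columns are topics ("cardiology", "guidelines"),
# rows are words ("heart", "evidence", "trials"); values are illustrative.
word_given_topic = np.array([[0.30, 0.01],
                             [0.02, 0.25],
                             [0.05, 0.20]])

# P(Topic_j | Document_k) for one document weighted 70/30 across the topics.
topic_given_doc = np.array([0.7, 0.3])

# P(Word_i | Document_k) = sum_j P(Word_i | Topic_j) * P(Topic_j | Document_k)
word_given_doc = word_given_topic @ topic_given_doc
# -> [0.213, 0.089, 0.095]: the document's expected word distribution.
```

Topic model inference runs this computation in reverse, recovering plausible factor matrices from observed word counts alone.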
Patients, items, and instances are respectively analogous to documents, words, and word occurrences in an individual document. The space of clinical items includes more than 6000 medication, more than 1500 laboratory, more than 1000 imaging, and more than 1000 nursing orders. Non-order items include more than 400 abnormal lab results, more than 7000 problem list entries, more than 5000 admission diagnosis ICD9 codes, more than 300 treatment team assignments, and patient demographics. Medication data was normalized with RxNorm mappings 45 down to active ingredients and routes of administration. Numerical lab results were binned into categories based on "abnormal" flags established by the clinical laboratory, or by deviation of more than 2 standard deviations from the observed mean if "high" and "low" flags were not prespecified. We aggregated ICD9 codes up to the 3-digit hierarchy such that an item for code 786.05 would be counted as 3 separate items (786.05, 786.0, 786). This helps compress the sparsity of diagnosis categories while retaining the original detailed codes if they are sufficiently prevalent to be useful. The above preprocessing models each patient as a timeline of clinical item instances, with each instance mapping a clinical item to a patient time point. With the clinical item instances following the "80/20 rule" of a power law distribution, 46 most items may be ignored with minimal information loss. Ignoring rare clinical items with fewer than 256 instances reduces the item vocabulary size from more than 23 000 to ~3400 (15%), while still capturing 6 million (90%) of the 6.7 million item instances. After excluding common process orders (e.g., check vital signs, notify MD, regular diet, transport patient, as well as most nursing orders and PRN medications), 1512 clinical orders of interest remain. LDA topic modeling algorithms infer topic structures from "bag of words" abstractions that represent each document as an unordered collection of word counts (i.e., 1 column of the word-document matrix in Figure 1). To construct an analogous model for our structured clinical data, we use each patient's first 24 hours of data to populate an unordered "bag of clinical items," reflecting the key initial information and decision making during a hospital admission. We randomly selected 10 655 (~50%) patients to form a training set. We chose to use the GenSim package 47 to infer topic model structure, given its convenient implementation in Python, streaming input of large data corpora, and parallelization to efficiently use multicore computing. Model inference requires an external parameter for the expected number of topics, for which we systematically generated models with topic counts ranging from 2 to 2048.
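As an illustrative sketch of this modeling step (not the authors' actual code; item codes, bag contents, and counts are hypothetical), the gensim workflow for turning per-patient "bags of clinical items" into an LDA model looks roughly like this:

```python
from gensim import corpora
from gensim.models import LdaMulticore

def icd9_rollup(code):
    """Aggregate an ICD9 code up its hierarchy: '786.05' -> 786, 786.0, 786.05."""
    head, _, tail = code.partition(".")
    return [head] + [f"{head}.{tail[:i]}" for i in range(1, len(tail) + 1)]

# Each "document" is one patient's unordered first-24-hour clinical items
# (illustrative codes only; the real vocabulary has ~3400 items).
patient_bags = [
    ["cbc_panel", "blood_culture", "ceftriaxone_iv", "chest_xray"] + icd9_rollup("486"),
    ["ekg_12_lead", "troponin", "aspirin_po"] + icd9_rollup("786.05"),
    # ... one bag per training patient
]

dictionary = corpora.Dictionary(patient_bags)               # item vocabulary
corpus = [dictionary.doc2bow(bag) for bag in patient_bags]  # sparse counts

# The topic count is an external parameter; the paper sweeps models from
# 2 to 2048 topics and later settles on ~32 by held-out prediction quality.
lda = LdaMulticore(corpus, id2word=dictionary, num_topics=32)

# Infer P(Topic_j | Patient_k) from a new patient's initial items.
new_bow = dictionary.doc2bow(["cbc_panel", "blood_culture"])
patient_topics = lda.get_document_topics(new_bow, minimum_probability=0.0)
```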
Figure 1. Topic modeling as factorization of a word-document matrix. Simulated data in the top-left reflects that the word "Heart" appears 12 times in the article "Evidence Underlying AHA." Factoring this full matrix into simpler matrices can discover a smaller number of latent dimensions that summarize the content. Topic modeling represents these latent dimensions as topics defining a categorical probability distribution of word occurrences in the topic-word matrix. This reveals the underlying statistical structure of the data, but an algorithmic process cannot itself provide meaning. By observing the most prevalent words in each topic axis, however, an underlying meaning is often interpretable (e.g., prevalence of the words "heart" and "aspirin" in the first topic axis implies a general topic of "Cardiology").

Running the model training process on a single Intel 2.4 GHz core for 10 655 patients and 256 topics requires ~1 GB of main memory and ~2 minutes of training time. Maximum memory usage and training time increase proportionally to the number of topics modeled, while the streaming learning algorithm requires more execution time but no additional main memory when processing additional training documents.

Evaluation
To evaluate the utility of the generated clinical topic models and determine an optimal topic count range, we assessed their ability to predict subsequent clinical orders. For a separate random selection of 4820 (~25%) validation patients, we isolated each use of a pre-existing human-authored order set within the first 24 hours of each hospitalization. We simulated production of an individually personalized, topic model-based "order set" at each such moment in time. To dynamically generate this content, the system evaluates the patient's available clinical data to infer the relative weight of relevance for each clinical topic, P(Topic_j | Patient_k). With this patient topic distribution defined, the system can then score-rank a list of suggested orders by the probability of each order occurring for the patient:

P(Item_i | Patient_k) = Σ_j P(Item_i | Topic_j) × P(Topic_j | Patient_k)

We compared these clinical order suggestions against the "correct" set of orders that actually occurred for the patient within a followup verification time of t. Sensitivity analyses with respect to this followup verification time varied t from 1 minute (essentially counting only orders drawn from the immediate real order set usage) up to 24 hours afterwards. Prediction of these subsequent orders is evaluated by the area under the receiver operating characteristic curve (c-statistic) when considering the full score-ranked list of all possible clinical orders. Existing order sets will have N suggested orders to choose from, so we evaluated those N items vs the top N score-ranked suggestions from the topic models toward predicting subsequent orders by precision (positive predictive value) at N and recall (sensitivity) at N. We executed paired, 2-tailed t-tests to compare results with SciPy. 48

RESULTS
Table 1 reports the names of the most commonly used human-authored inpatient order sets, while Table 2 reports summary usage statistics during the first 24 hours of hospitalization. Table 3 illustrates example clinical topics inferred from the structured clinical data. Figure 2 visualizes additional example topics and how patient-topic weights can be used to predict additional clinical orders.

DISCUSSION
Complex clinical data like clinical orders, lab results, and diagnoses extracted from EHRs can be automatically organized into thematic structures through probabilistic topic modeling. These thematic topics can be used to automatically generate natural "order sets" of commonly co-occurring clinical data items, as illustrated in the examples in Table 3. Figure 2 visually illustrates how these latent topics can separate clinical items that are specific or general across varying scenarios, and how they can be used to generate personalized clinical order suggestions for individual patients.
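Continuing the gensim sketch from the Methods above (names carried over from it; all values hypothetical), the score-ranking step reduces to a matrix-vector product between the learned topic-item distributions and the inferred patient-topic weights:

```python
import numpy as np

# P(Item_i | Topic_j) for all topics and items, from the trained model.
topic_item = lda.get_topics()                # shape: (n_topics, n_items)

# Densify the sparse P(Topic_j | Patient_k) output from gensim.
theta = np.zeros(lda.num_topics)
for j, p in lda.get_document_topics(new_bow, minimum_probability=0.0):
    theta[j] = p

# P(Item_i | Patient_k) = sum_j P(Item_i | Topic_j) * P(Topic_j | Patient_k)
item_scores = theta @ topic_item

# Rank items by score; in the paper, only the 1512 clinical orders of
# interest are suggested, and the top N are compared against an order set
# of size N by precision and recall at N.
ranked = sorted(((dictionary[i], s) for i, s in enumerate(item_scores)),
                key=lambda pair: pair[1], reverse=True)
top_n = ranked[:10]
```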
Suggestions have some interpretable rationale by indicating that the patient case in question appears to be "about" a given set of clinical topics (e.g., abdominal pain and involuntary psychiatric hold) and the suggested orders (e.g., serum acetaminophen level, electrocardiogram (EKG) 12-lead) are those that commonly occur for other patient cases involving those topics.

[Table 1 notes: Use rate reflects the percentage of validation patients for whom the order set was used within the first 24 hours of hospitalization. Size reflects the number of order suggestions available in each order set. Notably, these essentially all reflect nonspecific care processes, while scenario-specific order sets (e.g., management of asthma, heart attacks, pneumonia, sepsis, or gastrointestinal bleeds) are rarely used. Table 2 notes: Metrics count only orders used in the final set of 1512 preprocessed clinical orders after normalization of medication orders and exclusion of rare orders and common process orders.]

In the absence of a gold standard to define high-quality medical decision making, we must establish a benchmark to evaluate the quality of algorithmically generated decision support content. Human-authored order sets and alerts represent the current standard of care in clinical decision support. Figure 4 indicates that existing order sets are slightly better than topic model-generated order suggestions at anticipating physician orders within the immediate time period (< 2 h). This is, of course, biased in favor of the existing order sets, since the evaluation time points were specifically chosen where an existing order set was used.

[Table 3 notes: The most prominent clinical items (e.g., medications, imaging, laboratory orders, and results) are listed for each example topic, with corresponding P(Item_i | Topic_j) weights. The bottom rows reflect the percentage of validation patients with estimated P(Topic_j | Patient_k) > 1%, along with our manually ascribed labels that summarize the largely interpretable topic contents.]

This ignores other time points where the clinicians did not (or could not) find a relevant order set, but where an automated system could have generated personalized suggestions. Topic model-based methods consistently predict more future orders than the existing order sets when forecasting longer followup time periods beyond 2 hours. On an absolute scale, it is interesting that manually produced content like order sets continues to demonstrate improvements in care 21,23,24,49 despite what we have found to be a low "accuracy" of recommendations. Table 2 indicates that initial inpatient care on average involves a few order sets (3.0), with a preference for general order sets with a large number of suggested orders (> 100), resulting in higher recall (43%) but low precision (11%). This illustrates that such tools are decision aids that benefit clinicians who can interpret the relevance of any suggestions to their individual patient's context. Framed as an information retrieval problem in clinical decision support, retrieval accuracy may not even be as important as other aspects for real-world implementation (e.g., speed, simplicity, usability, maintainability). 26 Even if algorithmically generated suggestions were only as good as the existing order sets, the more compelling implication is how this can alter the production and usability of clinical decision support. Automated approaches can generate content spanning any previously encountered clinical scenario.
While this incurs the risk of finding "mundane" structure (e.g., the repeated sub-diagnosis codes for diabetes and pulmonary embolism in Table 3), it is a potentially powerful unsupervised approach to discovering latent structure that is not dependent on the preconceptions of content authors. The existing workflow for pre-authored order sets requires clinicians to already be aware of, or spend their time searching for, order sets relevant to their patient's care. Table 1 illustrates that clinicians favor a few general order sets focused on provider processes (e.g., admission, insulin, transfusion), while they rarely use order sets for patient-focused scenarios (e.g., stroke, sepsis). With the methods presented here, automated inference of patient context could overcome this usability barrier by inferring relevant clinical "topics" (if not specific clinical orders) based on information already collected in the EHR (e.g., initial orders, problem list, lab results).

[Figure 2 notes: The top left reflects clinical orders most associated with Topic_Y, with little association with Topic_X, suggestive of a workup for diarrhea and abdominal pain. The bottom right reflects clinical orders associated with Topic_X, suggestive of a workup for an intentional (medication) overdose and involuntary psychiatric hospitalization. The top right reflects common clinical orders that are associated with both topics. For legibility, items whose score is < 0.2% for both topics are omitted and only a subsample of the bottom-left items are labeled. The diagonal arrow represents a hypothetical patient inferred to have P(Topic_X | Patient_k) = 80% and P(Topic_Y | Patient_k) = 20%. The dashed lines reflect orthogonal P(Item_i | Patient_k) isolines to visually illustrate how clinical order suggestions can be made from such a topic inference. In this case, orders farthest along the projected patient vector (e.g., serum acetaminophen) are predicted to be most relevant for the patient.]

Such a system could present related order sets (human-authored or machine-learned) to the clinician without the clinician ever having to explicitly request or search for a named order set. The tradeoff for these potential benefits is that current physicians are likely more comfortable with the interpretability and human origin of manually produced content. Most of the initial applications of topic modeling have been for text document organization. [33][34][35] More recent work has applied topic modeling and similar low-dimensional representations to clinical data for the unsupervised determination of clinical phenotypes 38 and concept embeddings, 40 or as features for classification tasks such as high-cost prediction. 39 Other efforts to algorithmically predict clinical orders have mostly focused on problem spaces with dozens of possible candidate items. [50][51][52][53] In comparison, the problem space in this manuscript includes over 1000 clinical items. This results in substantially different expected retrieval rates, 54 even as the latent topics help address data interpretability, sparsity, and semantic similarity. While there is likely further room for improvement, perhaps with other graphical models specifically intended for recommender applications, 55 our determination of order set retrieval rates contributes to the literature by defining the state-of-the-art real-world reference benchmark for this and any future evaluations. Limitations of the LDA topic modeling approach include the external designation of the topic count parameter.
Similarly, while we used default model hyperparameters that assume a symmetric prior, this may affect the coherence of the model. 56 Hierarchical Dirichlet process 57 topic modeling is an alternative nonparametric approach that determines the topic count by optimizing observed data perplexity; 58 however, this may not align with the application of interest. Validating against a held-out set of patients allowed us to optimize the topic count against an outcome measure like order prediction. Precision and recall are optimized in this case with approximately 32 topics of inpatient admission data. Another key limitation is that the standard LDA model interprets data as an unordered "bag of words," which discards temporal data on the sequence of clinical items. Our prior work noted the value of temporal data toward improving predictions. 59 This could potentially be addressed with alternative topic model algorithms that account for such sequential data.

[Figure 4 notes: (A) For each real use of a preauthored order set, either that order set or a topic model (with 32 trained topics) was used to suggest clinical orders. For longer followup times, the number of subsequent possible items considered correct increases from an average of 5.4 to 20.6. The average number of correct predictions in the immediate timeframe is similar for topic models (3.2) and order sets (3.8), but increases more for topic models (9.3) vs order sets (6.7) when forecasting up to 24 hours. At the time of order set usage, physicians choose an average of 3.8 orders out of 54.8 order set suggestions, as well as 1.6 = (5.4 − 3.8) a la carte orders. (B) Topic models vs order sets by recall at N. For longer followup verification times, more possible subsequent items are considered correct (see 4A). This results in an expected decline in recall (sensitivity). Order sets, of course, predict their own immediate use better, but lag behind topic model-based approaches when anticipating orders beyond 2 hours (P < 10⁻²⁰ for all times). (C) Topic models vs order sets by precision at N. For longer followup verification times, more subsequent items are considered correct, resulting in an expected increase in precision (positive predictive value). Again, topic model-based approaches are better at anticipating clinical orders beyond the initial 2 hours after order set usage (P < 10⁻⁶ for all times). (D) Topic models vs order sets by ROC AUC (c-statistic), evaluating the full ranking of possible orders scored by topic models or included/excluded by order sets (P < 10⁻¹⁰⁰ for all times).]

Another limitation of any unsupervised learning process is that it can yield content with variable interpretability. For example, while we manually ascribe labels to the topics in Table 3, the contents are ultimately defined by the underlying structure of the data and need not map to preconceived medical categorizations. This is reflected in the presence of items such as admission diagnoses of thoracolumbar disc displacement, osteomyelitis, and tests for rapid Human Immunodeficiency Virus (HIV) antibodies that do not seem to fit our artificial labels. From an exploratory data analysis perspective, however, this may actually be useful in identifying latent concepts in the clinical data that could not be anticipated prospectively. When we discarded rare clinical items (< 256 instances), we may also have lost precision on the most important data elements.
As noted in our prior work, this design decision trades away the potential of identifying rare but "interesting" elements in favor of predictions that are more likely to be generally relevant and that avoid statistically spurious cases with insufficient power to make sensible predictions. 61

CONCLUSION
Organization of clinical data through probabilistic topic modeling provides an automated approach to detecting thematic trends in patient care. A potential use case illustrated here finds related clinical orders for decision support based on inferred underlying topics. This has general potential for clinical information summarization 13,62 that dynamically adapts to changing clinical practices, 63 which would otherwise be limited to preconceived concepts manually abstracted out of potentially lengthy and complex patient chart reviews. Such algorithmic approaches are critical to unlocking the potential of large-scale health care data sources to impact clinical practice.

CONTRIBUTORS
JHC conceived the study and design, implemented the algorithms, performed the analysis, and drafted the initial manuscript. MKG, SMA, LM, and RBA contributed to the analysis design and manuscript revisions, and supervised the study.