A literature review of open-ended concept maps as a research instrument to study knowledge and learning

In educational, social or organizational studies, open-ended concept maps are used as an instrument to collect data about and analyze individuals' conceptual knowledge. Open-ended concept map studies devoted to knowledge and learning apply a variety of methods of analysis. This literature review systematically summarizes the various ways in which open-ended concept maps have been applied in previous studies of knowledge and learning. This paper describes three major aspects of these studies: what methods of analysis were used, what concept map characteristics were considered, and what conclusions about individuals' knowledge or understanding were drawn. Twenty-five studies that used open-ended concept maps as a research instrument were found eligible for inclusion. In addition, the paper examines associations between the three aspects of the studies and provides guidelines for methodological coherence in the process of such analysis. This review underscores the importance of expatiating on the choices made concerning these aspects. The transparency provided by this method of working will contribute to the imitable application of open-ended concept maps as a research tool and foster more informed choices in future open-ended concept map studies.

Introduction

Concept maps were introduced by Novak and Gowin to activate or elaborate (prior) knowledge. "Concept maps are graphical tools for organizing and representing knowledge. They include concepts, usually enclosed in circles or boxes of some type, and relationships between concepts indicated by a connecting line linking two concepts" (Novak and Cañas 2008, p. 1). Concept maps have a wide variety of applications and are an increasingly popular learning and strategy tool (e.g. Novak and Cañas 2008; Stevenson, Hartmeyer and Bentsen 2017). Their use is on the rise as an instructional method (e.g. Nair and Narayanasamy 2017), as an aid in curriculum design (e.g. Buhmann and Kingsbury 2015) and as an assessment tool (e.g. Marriott and Torres 2016). In recent years, concept maps have been used more widely as a research instrument. This includes the application of concept maps to study knowledge, to explore mental models and misconceptions, or to describe people's opinions.

Figure 1 provides an example of a concept map. The focus question or prompt for the example in Fig. 1 could have been "What is a concept map?", or the topic provided could have been 'Concept maps'. The words enclosed in circles are concepts, also referred to as nodes. These concepts are connected with arrows, or links, that are indicated with linking words explaining the relation between the concepts. A proposition is a concept-link-concept combination, for instance 'concept maps-include-hierarchies'. In this example, there are four hierarchies, or strings of concepts, stemming from the root concept 'concept maps'. Links between concepts within the same hierarchy are simply called links; links between concepts from different hierarchies are cross-links. A concept map assignment involves concepts or nodes, links or linking lines, linking phrases, and the concept map structure, and each of these aspects can be either respondent-driven or provided by the instructor (Ruiz-Primo 2000). For instance, respondents may think of concepts themselves, or be provided with a list of concepts they can use for their concept map.
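Because the studies reviewed below repeatedly count nodes, links and propositions, it may help to see how a concept map can be held in code. The following minimal Python sketch is our illustration only: the triples are invented and are not taken verbatim from Fig. 1. It represents a map as a list of propositions and derives its nodes and links from them.

```python
# Minimal illustrative representation of a concept map: each proposition
# is a (concept, linking words, concept) triple. Data are invented.
propositions = [
    ("concept maps", "include", "concepts"),
    ("concept maps", "include", "hierarchies"),
    ("concepts", "are connected by", "links"),
    ("hierarchies", "can be joined by", "cross-links"),
]

# Nodes are the unique concepts; every proposition contributes one link.
nodes = {c for left, _, right in propositions for c in (left, right)}
print(f"{len(nodes)} nodes, {len(propositions)} links")  # 5 nodes, 4 links
```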
In most studies that apply concept maps, at least one of these four aspects of the task is instructor-directed; most commonly, the concepts to be used are provided (Ruiz-Primo et al. 2001). In open-ended concept maps, respondents choose their own terms for nodes, links or linking lines, linking phrases and structure; the maps therefore resemble the respondents' knowledge structures (Cañas, Novak and Reiska 2013; Ruiz-Primo et al. 2001). Open-ended concept maps are graphical tools in which respondents are invited to represent their personal construction of knowledge, without instructor-directed aspects. In an open-ended concept map assignment, only a topic or prompt is provided to the respondents. For instance, Ifenthaler, Masduki and Seel (2011) asked respondents to create a concept map to depict their understanding of research skills, Beyerbach (1988) asked students to draw a concept map for teacher planning, and Çakmak (2010) asked respondents to generate a concept map concerning teacher roles in the teaching process.

[Fig. 1: Example of a hierarchical concept map by Ed Dole (adapted from Dori 2004)]

Open-ended concept maps are commonly applied to explore student knowledge, to evaluate what or how students learn, or to explore misconceptions in student knowledge (Greene et al. 2013; Stevenson et al. 2017). The application of open-ended concept maps to study knowledge and learning faces new challenges concerning application and analysis, as this application transcends the traditional, strictly defined, quantitative use of concept maps in other domains, such as engineering, mathematics and psychology (Wheeldon and Faubert 2009). When exploring knowledge and learning, the strictly defined quantitative use of concept maps is believed not to do justice to the personal and idiosyncratic nature of people's understanding (Novak 1990). Open-ended concept maps are therefore used to study people's knowledge and understanding of complex phenomena, such as leadership or inclusive education. They are also used when respondents' knowledge is expected to be fragmented, or when the misconceptions and/or limited nature of people's understanding are part of the study (Greene et al. 2013; Kinchin, Hay and Adams 2000). The variation in outcomes when using open-ended concept maps as a research instrument reduces comparability and leads to difficulty in data analysis (Watson et al. 2016a; Yin et al. 2005). Previous studies describe limitations concerning methods of analysis. Several studies argue that quantitative analysis neglects the quality or meaning of maps and of learning (Bressington, Wells and Graham 2011; Buhmann and Kingsbury 2015; Kinchin et al. 2000). Others argue that it is an insufficient evaluation of the values or perceptions expressed by respondents (Jirásek et al. 2016). Some studies address contradictory outcomes when methods of analysis are compared, and they question the validity and reliability of concept map analysis and of concept maps as a research instrument (Ifenthaler et al. 2011; Kinchin 2016; Watson et al. 2016a; West et al. 2002). Cetin, Guler and Sarica (2016) claim that it is unclear how the reliability and validity of open-ended concept map analysis can be determined. Additionally, Ifenthaler et al. (2011) argue that some of the methods of analysis applied in previous studies have questionable reliability and validity. Tan, Erdimez and Zimmerman (2017) complement these statements by addressing the lack of clarity they perceived in the existing methods of analysis when choosing a method of analysis for their own study.
This literature review explores the analysis of open-ended concept maps in previous studies. Firstly, we consider which aspects of the process of analysis to examine in this review, based on the quality appraisal of the process of analysis in qualitative research. Creswell (2012) emphasizes the interrelation of the steps concerning data gathering, interpretation and analysis in qualitative research. Accordingly, Huberman and Miles (2002) proposed three central validities for assessing the quality of qualitative research: descriptive, interpretative and theoretical. These are concerned, respectively, with the data collected or the characteristics of the data that are considered, how data are interpreted or analyzed, and the conclusions that are drawn. These three aspects should be aligned, or coherent, to increase the quality of research, and are therefore explored in previous open-ended concept map studies: the method of analysis applied, the concept map characteristics measured, the conclusions drawn, and the associations between these aspects (Chenail, Duffy, George and Wulff 2011; Coombs 2017). The research question is: Which methods of analysis are applied to open-ended concept maps when studying knowledge and learning, and how are these associated with the concept map characteristics considered and the conclusions drawn?

Method

To answer the research question, we extract information concerning the method of analysis applied, the concept map characteristics measured, and the conclusions drawn from previous open-ended concept map studies. Following guidelines for critical interpretative synthesis reviews, this review combines an aggregative and an interpretative approach to critically understand the analysis in previous studies (Gough and Thomas 2012). The aggregation entails the representation of clusters for each aspect based on cross-case analysis, as presented in the results. Subsequently, patterns among these aspects are explored and interpreted, leading to considerations for future studies concerning these three aspects and their methodological coherence.

Selecting articles (inclusion)

This review applies a comprehensive search strategy to include all relevant studies (Verhage and Boels 2017). Scientific articles are selected from three databases: the Education Resources Information Center (ERIC), PsycINFO, and the Web of Science (WOS). The keywords for the search are 'concept map', combined with 'analysis', 'assessment' or 'scoring'. Studies are included from 1984 onward, when Novak and Gowin established the term 'concept map'. Due to the broad range of social science disciplines included in WOS, a further selection is made based on the predetermined WOS category 'Education educational research', to exclude, for instance, geographical studies that map cities and are concerned with spatial planning rather than knowledge or learning. The combined search yields 451 studies in ERIC, 198 studies in PsycINFO and 498 studies in WOS. Three reviews of concept map studies are used in a selective search strategy: the open-ended concept map studies described in these reviews are included in this study (Anohina and Grundspenkis 2009; Ruiz-Primo and Shavelson 1996; Strautmane 2012). Based on the snowballing technique with these reviews, 85 additional studies are included, for 1,316 in total, as depicted in Fig. 2. The first review, by Anohina and Grundspenkis (2009), presented manual methods of analysis and addressed the feasibility of automating these methods.
The reviews by Ruiz-Primo and Shavelson (1996) and Strautmane (2012) related methods of analysis to the openness of the concept mapping assignments. These reviews did not explore the associations between the methods of analysis and the conclusions drawn in these studies. Our review is one of the first to address the associations between the methods of analysis, the concept map characteristics considered and the conclusions drawn, instead of focusing on these aspects separately. This coherence is relevant to explore, as it enhances the rigor and quality of qualitative studies by ensuring an appropriate alignment of these aspects within studies (Davis 2012; Poucher et al. 2020).

Screening articles (exclusion)

The selected articles are screened. First, 254 duplicates are excluded. Next, titles and abstracts are screened. To increase the quality of this process, the inclusion and exclusion criteria are discussed with co-researchers until consensus is reached. 703 articles did not apply concept maps, or used concept maps as a learning tool or instructional tool, for curriculum design, or to analyze answers, texts or interviews; these are excluded. For the remaining 359 studies that apply concept maps as a research instrument, step 2 of the screening is based on the methods section. Studies are included based on the following inclusion criteria:

• Concept maps were used as a research instrument;
• The study was an empirical study;
• Respondents made their own concept map; and
• An open-ended concept map assignment was applied.

In seven studies, visual drawings other than concept maps were used. In 31 studies, concept map analysis was described based on theory instead of empirical data. In 71 studies, concept maps were constructed by the researcher based on interviews, together with the respondent during an interview, or at group level based on card-sorting techniques. In 154 studies, at least one of the aspects of the concept mapping task was instructor-directed. Studies are included if one or more focus questions or one central concept was provided.

[Fig. 2: Search and selection strategy. Step 1: excluded on title/abstract, N = 703. Step 2: excluded on methods/results, N = 329. Step 3: excluded on full text, N = 5. Reviews: N = 85.]

The critical appraisal of the methods sections leads to two additional exclusion criteria (Verhage and Boels 2017). Seventy studies applied another research instrument alongside concept maps. For these studies, the results sections are read to discover whether concept maps were evaluated separately. In 30 studies, concept map analysis was combined with interviews or reflective notes. 36 studies compared concept map scores with results from other research instruments, such as knowledge tests or interviews. These studies were excluded because the method of analysis or the conclusions for the concept maps were not described separately. This is problematic for our research purposes, as this study is concerned with concept map analysis and conclusions based on concept map analysis, not with analysis and conclusions based on other instruments. Four studies that applied two instruments, but reported on the analysis of concept maps separately, are included. For step 3 of the selection process, the full texts of the remaining 30 studies are read. Five studies are excluded: in two studies, different concept map characteristics are summed up and not described separately.
One study calculated correlations between different concept map characteristics, and two studies drew conclusions purely at group level. This results in 25 studies being included in this review, as depicted in Fig. 2.

Data selection from articles

Information on the following topics is extracted from the articles: the method of analysis, the concept map characteristics, the conclusions, the rationale behind the choices made, and general or descriptive information. A data selection scheme was developed which depicts the extracted information (Table 1). To increase the reliability of the data selection, this scheme was continuously discussed and adjusted by the authors over the entire selection process. Reliability was further ensured by using signal words, based on common terms used in the studies. For three of the included studies, the data selection was performed independently by two researchers. Both researchers selected statements from the articles concerning the items described in Table 1. One researcher included 79 statements and the other 94 statements. The overlapping statements were consistent; the 15 statements selected by only one researcher were discussed and added to the data. A total of 109 citations were extracted from these three studies.

Data analysis

Data analysis is performed by using the selected articles for within- and cross-case analysis (Miles and Huberman 1994). In this review, the cases are the included articles. The first step of the analysis is to order the extracted information in a meta-matrix (see "Appendix A") that presents all relevant condensed data for each case or article separately (Miles and Huberman 1994). If no explicit statements are found, information is added by using the within-case analytic strategy of 'overreading' (Ayres, Kavanaugh and Knafl 2003). For instance, for a study that counted specific concept map characteristics but did not describe the method of analysis any further, the method of analysis was recorded as quantitative analysis. To prepare the data for cross-case analysis, different labels for the same aspect are unified. For instance, 'counting nodes' is relabeled as 'number of nodes'. The second step entails coding the selected statements concerning the research object, research design, methods of analysis, concept map characteristics, and conclusions drawn in the articles, using a cross-case analysis approach. Preliminary coding of statements concerning methods of analysis is based on the way studies refer to their method of analysis. However, the same term was sometimes used to refer to more than one analysis method, while in other cases multiple terms were used to refer to a single method. Thus, the designation of clusters based on how the studies referred to their methods of analysis proved inconclusive and ambiguous. Different distinctions between methods of analysis are found in the literature. For example, in the review by Anohina and Grundspenkis (2009), the use of an expert's map is one way to distinguish between methods of analysis, as are the choices for quantitative or qualitative analysis and for structural or relational analysis. In the review by Strautmane (2012), criteria for similarity analysis, e.g. "proposition similarity to expert's CM" or "convergence with expert's CM", are described as separate criteria. Also, in the review by Ruiz-Primo and Shavelson (1996), comparison with a criterion map is described as a separate method of analysis.
In this review, the distinction between quantitative, qualitative, similarity and holistic analysis is chosen, as these methods of analysis consider the concept map characteristics distinctively and are based on different principles and theoretical assumptions. Holistic analysis is a separate method of analysis, as it is based on a rubric, and similarity analysis is a separate method, as it is based on a reference map. Moreover, these four methods of analysis lead to different types of conclusions, and are therefore considered distinctive ways to analyze and interpret data from concept maps. In conceptual terms, these four methods of analysis are mutually exclusive, as they estimate the concept map characteristics differently. However, when applied to analyze data, they can be combined: similarity analysis, for example, compares concept maps to a reference map either qualitatively or quantitatively. The statements concerning concept map characteristics are unified; for instance, 'breadth and depth' and 'hierarchical structure of the map' were both coded as structural complexity. All concept map characteristics related to the semantic content of the map, referred to as 'terms used', 'content comprehensiveness', 'correctness' or 'sophistication', are clustered as semantic sophistication. The conclusions are clustered in the same way as the methods of analysis. Conclusions about numbers of concept map characteristics are labelled as quantitative, conclusions about descriptions are labelled as qualitative, conclusions about overlap with a reference map are labelled as similarity, and conclusions about the quality of the map as a whole are labelled as holistic. The statements from the same three studies from which two researchers had independently selected statements were also coded by two researchers independently. A total of 109 statements were coded. Krippendorff's alpha for the inter-rater agreement was 0.91. The discrepancies in coding were all related to concept map characteristics. For instance, counting cross-links was coded as interlinkage by one researcher and as structural complexity by the other, while the validity of links was labelled as semantic sophistication by one researcher and as category representation by the other. The first author coded the data from the remaining 22 articles included in the review and consulted the co-authors when in doubt. The third step of the analysis was exploring the associations between the methods of analysis, the concept map characteristics and the conclusions drawn. Across all articles, the associations between the choices made in these three areas were explored. For instance, for all studies that applied quantitative analysis, the concept map characteristics considered were explored and listed. This showed that quantitative analysis was concerned with specific concept map characteristics, e.g. size, structural complexity, category representation, interlinkage or the complexity index. Subsequently, the conclusions drawn were explored for each combination of method of analysis and concept map characteristics considered. Based on the associations found across articles between the methods of analysis applied, the concept map characteristics considered and the conclusions drawn, considerations for methodological coherence between these aspects were formulated.

Results

Twenty-five empirical studies from 1988 through 2018 used open-ended concept maps. Twenty-one of these studies consisted of multiple measurements, most commonly a pre-test/post-test design.
Four of these studies compared two experimental groups, and two compared an experimental group and a control group. Two of the four studies that consisted of one measurement compared two experimental groups. In two studies, respondents received their previous concept map to adjust in the post-test, and in one study respondents could choose to adjust their previous map or to start a new one. Concept maps were made on paper with pen, on sticky notes, or on a computer, most commonly with CmapTools, developed by the Florida Institute for Human and Machine Cognition. "Appendix A" provides the meta-matrix of these studies, including all selected information. For each aspect (method of analysis, concept map characteristics, and conclusions), the categorization or clustering based on the cross-case data analysis is presented in a separate section. How concept map characteristics are associated with methods of analysis is described in the section concerning concept map characteristics. How the conclusions drawn are related to concept map characteristics and methods of analysis is described at the end of the section concerning conclusions drawn.

Methods of analysis

The different methods of analysis as described in the studies are extracted and presented in Table 2; an explanation of these methods of analysis follows the table. Based on these studies, it appears that the same term was sometimes used to refer to more than one analysis method, while in other cases multiple terms were used to refer to a single method. These varieties increase the ambiguity experienced with concept map analysis, as described by Watson et al. (2016b). Based on the ways in which concept map characteristics are estimated, we propose the following distinction: quantitative, similarity, holistic and qualitative analysis.

Quantitative analysis, or counting concept map characteristics, was performed in absolute or relative terms; for example, the number of links was counted separately or calculated in relation to the number of nodes. Category representation was likewise determined in absolute terms, as the number of nodes belonging to a category, or in relative terms, by dividing the number of nodes belonging to a category by the total number of nodes in a map. These different calculations can result in different conclusions. According to Besterfield-Sacre and colleagues (2004, p. 113), quantitative analysis "fails to capture the quality of that content. Further, these scoring methods can be time consuming, lack standards, and may introduce inappropriate bias."

Similarity analysis described or calculated the percentage of overlap and/or discrepancy compared to a reference map. To calculate the percentage of overlap, the terms used by respondents need to be aligned with the reference map. Similarity analysis provided insights into the (degree of) overlap and discrepancies with a reference map and was performed manually or in an automated manner (Ifenthaler et al. 2011).

Holistic analysis included scoring the structure or content of the concept map as a whole. Besterfield-Sacre and colleagues (2004) developed the scoring rubric that was commonly used for holistic analysis. They developed this rubric to score the overall comprehensiveness, organization and correctness of the map, based on the topics experts discussed while analyzing concept maps. Holistic analysis was determined on an inter-rater basis and is a cognitively complex task for which subject matter knowledge is necessary; this scoring is subjective (Borrego et al. 2009; Yaman and Ayas 2015).

Qualitative analysis of semantic content was performed in most studies, either inductively or deductively, to determine categories. Qualitative analysis was the only way to explore concept map content inductively.

Most studies applied more than one method of analysis. Quantitative analysis was applied in 19 studies, qualitative analysis also in 19 studies, holistic analysis in six studies and similarity analysis in five studies. Why methods of analysis were chosen is described in several studies. Ritchhart and colleagues (2009, p. 152) stressed that qualitative analysis "allowed us to best represent all of the data from the maps". Beyerbach (1988, p. 339) applied qualitative analysis, as it revealed "the nature of growth of student teachers' thinking [..] and conceptual development". Quantitative analysis was chosen as it "demonstrates the student learning gains" (Borrego et al. 2009, p. 14). Similarity analysis was performed as "Comparisons of the students' maps to an expert's map will provide information regarding how much is learned from the course and whether the concepts that are learned and included in the maps are done so "correctly" and as intended according to the expert, the faculty instructor" (Freeman and Urbaczewski 2002, p. 42). Besterfield-Sacre et al. (2004, p. 109) chose holistic scoring to explore students' conceptual understanding, as an increase in understanding results "in higher quality maps as reflected by the holistic score."
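To illustrate how such an overlap percentage can be computed, here is a minimal sketch. The normalization step stands in for the manual term alignment described above, and all data and function names are illustrative rather than taken from the reviewed studies.

```python
def normalize(proposition):
    # Stand-in for the manual term alignment described above:
    # case-fold and trim each term of the triple.
    return tuple(term.strip().lower() for term in proposition)

def overlap_percentage(respondent_map, reference_map):
    """Share of reference-map propositions also present in the respondent's map."""
    respondent = {normalize(p) for p in respondent_map}
    reference = {normalize(p) for p in reference_map}
    return 100 * len(respondent & reference) / len(reference)

reference_map = [("concept maps", "include", "concepts"),
                 ("concept maps", "include", "hierarchies")]
respondent_map = [("Concept maps", "include", "concepts"),
                  ("concepts", "are connected by", "links")]
print(overlap_percentage(respondent_map, reference_map))  # 50.0
```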
Concept map characteristics

This section describes what concept map characteristics were measured, how and why. Table 3 provides an overview of the results. The concept map characteristics described in this review concern characteristics as portrayed in the included studies. These include characteristics concerning the structure of concept maps, such as size, structural complexity and type of structure, and characteristics concerning the content of concept maps, for instance the terms used or the categories represented. The different concept map characteristics as portrayed in the included studies are presented below.

Table 3: Concept map characteristics, how they were analyzed, and how they were interpreted.
- Structural complexity: Interpreted as an indicator of respondents' understanding; the main focus of the first scoring system, as proposed by Novak and Gowin (1984). It is considered a relatively objective measurement, related to the complexity of respondents' knowledge structure or understanding.
- Type of structure: Qualitative: categorizing the type of structure of the map as a whole, based on common global morphologies. The type of structure provides a more holistic view of the structure and is easy to score.
- Semantic sophistication: Qualitative: explored by describing and interpreting the terms used. Holistic: terms used were scored based on a rubric. Similarity: terms used were compared to a reference map. Semantic sophistication shows which concepts are considered and how these are described, providing insights into the content of maps and making maps with different terms more comparable.
- Category representation: Based on the terms used, categories of nodes and/or links were distinguished inductively or deductively (qualitative). Qualitative: the categories are described. Quantitative: category representation is calculated as the number of nodes per category, or as a percentage of the total number of nodes in the concept map. Interpreted as an indicator of knowledge coverage or balance, considering the representation of relevant categories.
- Interlinkage: Quantitative: counting the number of links between categories. Qualitative: describing the type of links (e.g. validity of links) between categories. The number of links between categories provides insights into the interconnectedness or integration of categories in concept maps.
- Complexity index: Quantitative: the complexity index is a particularization of interlinkage, dividing the number of interlinks by the number of categories and multiplying by the number of nodes. Interpreted as the overall coverage and connectedness of concept maps, combining category representation and interlinkage.

Size

The number of nodes was referred to as the size or extent, and considered "a simple indicator for the size of the underlying cognitive structure" (Ifenthaler et al. 2011, p. 55). Size was established by counting the number of unique or correct nodes or propositions. Invalid nodes were included to study mental models or when misconceptions were important.

Structural complexity

Structural complexity is concerned with how complex the structure of a concept map is. The concept map nodes and links were used to study structural complexity quantitatively, holistically or through similarity analysis. The scoring system for structural complexity from Novak and Gowin (1984), based on the number of hierarchies, cross-links and examples, was applied in seven studies, and these measures were adjusted in four of them. Specific aspects of structural complexity were breadth, or the number of hierarchies, and depth, or hierarchy level (Beyerbach 1988; Read 2008). Other references to structural complexity, all calculated based on the number of links, are complexity, connectedness or dynamism; these measures are more commonly used for non-hierarchical concept maps (Ifenthaler et al. 2011; Tripto et al. 2018; Weiss et al. 2017). Freeman and Urbaczewski (2002, p. 45) computed structural complexity as the number of relationships depicted in the map beyond the minimal amount necessary to connect all of the concepts linearly. Ifenthaler and colleagues (2011) included computations from graph theory, such as unlinked nodes that are not connected to the other nodes, the cyclic nature of a map, i.e. whether all nodes can be reached easily, or the longest and/or shortest paths from the central node. Structural complexity was also scored based on a rubric, taking into account the overall organization of the map: for instance, a score of 1 if the concept map is connected only linearly, a score of 2 when there are some connections between hierarchies, or a score of 3 for a sophisticated structure (Besterfield-Sacre et al. 2004). Another way to score structural complexity is by comparing structural characteristics with a reference map (Ifenthaler et al. 2011). The analysis of structural complexity is more sensitive in measuring change than other analyses (West et al. 2000). However, the limited hierarchical interpretation of structural complexity based on quantitative analysis can lead to different scores than holistic scoring of structural complexity (Watson et al. 2016a). According to West and colleagues (2000, p. 821), scoring structural characteristics "[becomes] more difficult as maps grow more complex," and Blackwell and Williams (2007, p. 7) mentioned that scoring structural characteristics "can conceal the essentially subjective basis on which it rests."

Type of structure

Studies concerned with the type of structure, or shape, of the map categorized concept maps qualitatively based on global morphologies. Global morphologies are common typical structures found in concept maps, such as chain, spoke or net structures, as depicted in Fig. 3. This analysis provides a measure of the aptitude for learning and "avoids many pitfalls of quantitative analysis" (Hay et al. 2008, p. 224). Yaman and Ayas (2015, p. 853) categorized concept maps based on their type of structure, and stated that it was "very easy and informative."

Semantic sophistication

Semantic sophistication is concerned with the terms as used by the respondents, for concepts as well as for links. Semantic sophistication was explored by describing or clustering the terms used by the respondents qualitatively. Analysis of the semantic sophistication or content revealed the information in concept maps and what respondents think (Kostromina et al. 2017; Ward and Haigh 2017): "These qualitative analyses go beyond traditional assessment techniques in providing the instructor with a much clearer view of what his/her students know, think, and understand" (Freeman and Urbaczewski 2002, p. 51). Semantic sophistication was also scored based on a rubric, taking into account the comprehensiveness or correctness of content, and whether maps conformed to fact, logic or known truth (Besterfield-Sacre et al. 2004). Gregoriades and colleagues (2009) described how holistic scoring allowed them to assess overall understanding. Semantic sophistication was also measured in comparison to a reference map. Beyerbach (1988, p. 341) calculated "convergence towards a group consensus, and convergence toward an expert's map to indicate conceptual growth." Freeman and Urbaczewski (2002, p. 42) compared students' maps to an expert's map to assess how much was learned and whether the learned concepts were integrated correctly.

Category representation

Category representation is concerned with categories of nodes and/or links in concept maps. Different types of categories were established: for example, valid and invalid nodes or propositions, where invalid nodes are outside the scope of the prompt and invalid propositions are incorrectly linked. Another distinction was between old and new nodes in repeated measures, where old nodes were already present in the first map and new nodes were added in the second map. Content-related categories were also distinguished, for instance concepts at different levels. One study distinguished different system levels in order to reveal students' systems thinking abilities, or, more specifically, students' "ability to identify system components and processes at both micro and macro levels" (Tripto et al. 2018, p. 649). Category representation can only be calculated quantitatively after categories are determined in maps qualitatively. Category representation was calculated as the number of nodes per category and was also referred to as knowledge richness, frequencies of themes, presence of systems, representational level or category distribution (Çakmak 2010; Kostromina et al. 2017; Ritchhart et al. 2009; Tripto et al. 2018; Yaman and Ayas 2015). Ritchhart and colleagues (2009, p. 154) calculated the percentage of responses in each category.
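As a concrete illustration of the quantitative measures just described, consider the following sketch. The data and category labels are invented; the two measures follow the formulas attributed above to Freeman and Urbaczewski (2002) and Ritchhart et al. (2009).

```python
from collections import Counter

# Hypothetical data: links as pairs of concepts, and a (qualitatively
# determined) category for each concept.
links = [("concept maps", "concepts"), ("concept maps", "hierarchies"),
         ("concepts", "links"), ("hierarchies", "links")]
categories = {"concept maps": "tool", "concepts": "element",
              "hierarchies": "structure", "links": "element"}

nodes = {n for pair in links for n in pair}
size = len(nodes)  # number of nodes

# Links beyond the minimum needed to connect all concepts linearly
# (after Freeman and Urbaczewski 2002): links - (nodes - 1).
structural_complexity = len(links) - (size - 1)

# Relative category representation: share of nodes per category
# (cf. Ritchhart et al. 2009).
representation = {cat: count / size
                  for cat, count in Counter(categories[n] for n in nodes).items()}

print(size, structural_complexity)  # 4 1
print(representation)               # shares per category; key order may vary
```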
Interlinkage

Interlinkage concerns the links between categories, and can only be calculated after categories are established. Interlinkage was also referred to as 'complexity' or 'degree of interconnectedness'. Interlinkage was interpreted as "students' ability to identify relations between system components" (Tripto et al. 2018, p. 649). Specifically, the interlinkage between old and new nodes was used to study learning, or how new knowledge is connected to existing knowledge (Hay et al. 2008). Güccük and Köksal (2016) explored meaningful learning based on the number of interlinks and interpreted more interlinks as more meaningful learning. Ward and Haigh (2017, p. 1248) concluded that the analysis of interlinkage between old and new nodes allowed for a holistic examination of the quality of learning.

Complexity index

The complexity index is calculated based on the number of concepts, the number of categories, and the number of links between categories. It was calculated in two studies to "characterize the overall coverage of and connectedness between the categories" (Watson et al. 2016b, p. 549). The complexity index is a particularization of interlinkage, calculated by dividing the number of interlinks by the number of categories and then multiplying this number by the number of nodes (Segalàs et al. 2012, p. 296).

A variety of concept map characteristics was considered, leading to different insights. Size was measured in 18 studies. Structural complexity was also taken into account in 18 studies. Structural complexity considers nodes and links and seems relatively easy and objective to determine; however, it is more interpretative than it seems, especially as concept maps grow more complex. Type of structure was measured in three studies; it reveals the overall structure of the concept map, disregarding the content and allocating one score to the map as a whole. Although it is a time-consuming step to describe and interpret the terms used by respondents, this is the only way to gain insights into the semantic content of the maps. Semantic sophistication was taken into account in 13 studies. Terms used were sorted into themes or categories in eleven studies, either inductively or deductively, based on a theoretical framework or scoring rubric, or in comparison with a reference map. Evaluating the terms used and categorizing or unifying them was a necessary preliminary step for calculating other concept map characteristics, namely category representation, interlinkage and the complexity index. These characteristics were only considered when meaningful categories were present and the interconnection of these categories was of interest, for instance in the case of systems thinking. Interlinkage was determined in five studies and the complexity index in two studies. Explanations of the rationale for concept map characteristics and measures varied, from merely mentioning which characteristics are measured and how, to explaining their operationalization based on theoretical descriptions of the research object that are explicitly deduced into specific concept map characteristics and measures. Some studies explicitly explained the choice of concept map characteristics based on the conclusions to be drawn. For instance, Besterfield-Sacre et al. (2004, p. 106) explain counting cross-links as follows: "We propose that measuring inter-relatedness is a way to assess the extent of knowledge integration."
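The arithmetic of the complexity index is compact enough to show in full. The following sketch uses invented data and assumes, as in the definition above, that every node belongs to exactly one category.

```python
# Hypothetical worked example of interlinkage and the complexity index
# (Segalàs et al. 2012): (interlinks / categories) * nodes.
links = [("concept maps", "concepts"), ("hierarchies", "links")]
categories = {"concept maps": "tool", "concepts": "element",
              "hierarchies": "structure", "links": "element"}

# Interlinkage: links whose two concepts belong to different categories.
interlinks = sum(1 for a, b in links if categories[a] != categories[b])

n_categories = len(set(categories.values()))  # 3
n_nodes = len(categories)                     # 4
complexity_index = interlinks / n_categories * n_nodes
print(interlinks, complexity_index)  # 2, (2 / 3) * 4 = 2.67 approximately
```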
How concept map characteristics were related to methods of analysis is described in Table 3.

Conclusions drawn in the studies

The different conclusions drawn in the included studies about knowledge or learning are presented below. Conclusions about knowledge, for instance knowledge extent, were based on the number of nodes (Gregoriades et al. 2009). In repeated measures, an increase in the number of nodes was interpreted as "more detail" (Beyerbach 1988, p. 345) or "greater domain knowledge" (Freeman and Urbaczewski 2002, p. 45). Counting the nodes was performed to "quantify knowledge understanding" (Besterfield-Sacre et al. 2004, p. 105). Conclusions about an "increase in richness" (Van den Boogaart et al. 2018, p. 297) or "balanced understanding" (Watson et al. 2016b, p. 556) were based on counting the number of nodes per category. Conclusions about better coverage and interconnectedness, or systemic thinking, were based on the complexity index (Segalàs et al. 2012). Conclusions about what respondents knew, or in repeated measures about how their knowledge changed over time, were based on describing the content of concept maps (Freeman and Urbaczewski 2002, p. 42). Such analysis, for example, "revealed that teachers assign a significant role both to its own activity and activity of the University administration, as well as cooperation with students" (Kostromina et al. 2017, p. 320). Conclusions about the complexity of knowledge, or the complexity of the knowledge structure, were based on the number of links. For instance, a conclusion about "more complex constructions of their knowledge" was based on the number of links, or the structural complexity of concept maps (Read 2008, p. 127). In repeated measures, conclusions about "conceptual growth" were drawn based on structural complexity measures (Beyerbach 1988, p. 342). Conclusions about knowledge integration and learning gains (Borrego et al. 2009), and in repeated measures about additional conceptual understanding (Besterfield-Sacre et al. 2004; Watson et al. 2016b), were based on scoring the structural quality of concept maps with a rubric. Conclusions about the use of semantically correct concepts (Ifenthaler et al. 2011), or the correct integration of concepts (Freeman and Urbaczewski 2002), were based on a comparison with a reference map. Other conclusions drawn based on comparison with a reference map were that students have significant misconceptions (Gregoriades et al. 2009), or that respondents gained a better understanding (Freeman and Urbaczewski 2002; Ifenthaler et al. 2011). Conclusions about meaningful learning were based on counting the number of links between old and new concepts (Hay 2007). Conclusions about development were also based on the type of structure of concept maps in repeated measures: in one study, pre- and post-maps were both mainly non-hierarchical (Yaman and Ayas 2015), and in another, 16 of 18 respondents' pre- and post-maps showed "remarkable structural homology, even where the content and its internal organisation were different" (Hay et al. 2008, p. 233). Most studies applied more than one method of analysis and combined concept map characteristics when drawing conclusions. Two of the three studies applying a single method of analysis qualitatively described the terms used in the maps; the third explored the semantic sophistication and structural complexity holistically.
All other studies analyzed at least one concept map characteristic with two methods of analysis (for instance, combining quantitative and similarity analysis of structural complexity), but most commonly multiple concept map characteristics and multiple methods of analysis were applied. However, counting different concept map characteristics has shown opposing results within several studies: "While cross link scores were lower in some cases, hierarchy scores increased dramatically demonstrating that students were seeing each proposition in greater depth" (Blackwell and Williams 2007, p. 6). Similarly, in the study of Freeman and Urbaczewski (2002), structural complexity scores decreased while all other scores increased. In all studies, an increase in a measure or concept map characteristic was interpreted as conceptual development or growth, and in all but one of the repeated measures studies, development was found. This particular study described the main themes based on qualitative analysis of the terms used, without interpreting this as development. Two studies mentioned that the number of links or cross-links did not increase, and two studies found homogeneous types of structures, but these still concluded that understanding increased, mainly based on other measures. Although most studies draw conclusions about conceptual understanding, or in repeated measures about conceptual growth, the conclusions drawn are related to the methods of analysis applied and the concept map characteristics considered. Conclusions about an increase or growth are mainly based on counting structural concept map characteristics, or applying quantitative analysis. Conclusions about meaningful learning and more balanced understanding were also based on quantitative analysis. Conclusions about knowledge, learning or conceptual growth, or the integration of knowledge, were based on a comparison with a reference map. Conclusions concerning knowledge integration, learning and coverage were based on holistic analysis. Conclusions about the content of maps, and what respondents know or what themes they mention, were based on qualitative analysis of the terms used.

Conclusion

The central research question was: Which methods of analysis are applied to open-ended concept maps when studying knowledge and learning, and how are these associated with the concept map characteristics considered and the conclusions drawn? The conclusions are presented based on the three main aspects of the research question, namely methods of analysis, concept map characteristics and conclusions drawn, as well as their mutual associations.

Methods of analysis

This review explored the methods of analysis applied in open-ended concept map studies and provided a first step towards exploring which methods of analysis are applied and how. Four categories of methods of analysis were identified, namely: (1) quantitative analysis, based on counting concept map characteristics; (2) similarity analysis, based on a comparison with a reference map; (3) holistic analysis, which entails the scoring of maps as a whole based on a rubric; and (4) qualitative analysis, which involves describing characteristics, for instance the terms used. The 25 studies applied different methods of analysis. Due to the idiosyncratic nature of the data stemming from open-ended concept maps, they can be analyzed in different ways (Novak 1990).
Qualitative and quantitative analysis are most commonly applied to open-ended concept maps; to make concept maps more comparable, it is common to use both methods, in which case quantitative analysis is preceded by qualitative analysis. Quantitative analysis is performed to count differences between maps. Similarity analysis explores the overlap with a reference map based on the nodes or structural characteristics. Holistic analysis scores the structure and content of concept maps. Qualitative analysis is used to explore or describe concept map characteristics and to explore the uniqueness of each map. Each method of analysis deals with the idiosyncratic data differently. Qualitative analysis of semantic sophistication is the only way to explore the idiosyncratic nature of open-ended concept maps and the terms for concepts as the respondents use them. Quantitative analysis reduces the unique terms respondents use to numbers. Similarity and holistic analyses value the terms used based on an existing framework and provide a score for overlap or correctness, respectively. When an open-ended concept map is used to gather the unique terms respondents use and there is no correct map available, qualitative analysis is required to explore or describe this information or to make quantitative analysis more meaningful.

Concept map characteristics

Concept map characteristics are not always explicitly described, and many different descriptions are used for similar concept map characteristics. This study distinguished between seven concept map characteristics as described in the included articles. Concept map characteristics that were counted or evaluated quantitatively are size, structural complexity, category representation, interlinkage and the complexity index. These are all related to structure, except for category representation, which refers to the number of concepts per category. The type of structure, semantic sophistication, categories and interlinkage can be described or evaluated qualitatively. These are all related to the content of the map, except for the type of structure. Structural complexity and semantic sophistication can also be evaluated in relation to a reference map or based on a rubric. Similarity and holistic methods of analysis combine structural and content-related features of concept maps.

Conclusions drawn in the studies

Our review shows that although the methods of analysis vary, the conclusions drawn are quite similar. Regardless of whether concept map characteristics were counted, compared, scored or described, conclusions were drawn about understanding, or about conceptual growth in repeated measures. All studies with repeated measures applying quantitative analysis found an increase in a measure that was interpreted as conceptual growth. Similarity analysis in repeated measures revealed an increased overlap in specific measures, which was considered development of understanding. Holistic analysis in repeated measures revealed better understanding or knowledge integration. Three studies used qualitative analysis to identify conceptual growth; for instance, in a study of the concept of leadership, students considered leadership more as a process in the post-test. Twenty of the 21 studies with repeated measures found some type of development or learning gains, most commonly referred to as conceptual growth.
Associations across articles

Associations were explored between the coding of the methods of analysis, the concept map characteristics, and the conclusions drawn, in order to provide guidelines for methodological coherence between these aspects. Figure 4 provides an overview of the types of conclusions that can be drawn from open-ended concept maps, in accordance with the identified methods of analysis and concept map characteristics. Figure 4 can serve as a guide for future open-ended concept map studies, which can use the specific types of conclusions to be drawn as a means of deciding what method of analysis to apply and what concept map characteristics to consider. For instance, in the case of quantitative analysis, Fig. 4 suggests that no conclusions can be drawn about correctness or quality, only about the extent of domain knowledge, and about an increase or decrease in repeated measures. In similarity analysis, correct and incorrect nodes and links are determined based on a correct model; therefore, conclusions can be drawn concerning correctness. When applying a rubric for holistic scoring, one overall score is often given for the entire map, in which the overall quality is scored (Besterfield-Sacre et al. 2004). Quality of content includes correctness, but only for the map as a whole. The concept map characteristics of size, category representation and semantic sophistication consider the nodes and lead to conclusions about knowledge. The concept map characteristics of structural complexity, interlinkage, complexity index and type of structure consider both the nodes and the links and lead to conclusions about knowledge structure or integration. By explicitly using the conclusion to be elicited as a basis for choosing methods of analysis and concept map characteristics, the transparency of research increases, which enables better quality assessment (Coombs 2017; Verhage and Boels 2017).

[Fig. 4: Associations between methods of analysis, concept map characteristics and types of conclusions]

Discussion

When relating these conclusions to broader theory, the first point of discussion is that 20 out of 21 repeated measures studies identified learning or conceptual growth. This raises the question of whether every change is interpreted as development, and whether an increase in nodes and links represents better understanding or not. Rikers, Schmidt and Boshuizen (2000), who studied encapsulated knowledge and expertise in diagnosing clinical cases, found that the proportion of encapsulating concepts increases as an indicator of expertise development. This entails a decrease in the number of nodes and links as expertise develops, as experts are able to differentiate between concepts and relationships that are more and less relevant in a specific context. Accordingly, Mintzes and Quinn (2007) explore different phases in expertise development based on meaningful learning theory. Their distinction between phases of development is based on the number of expert concepts, which are relevant superordinate concepts that are absent from novices' concept maps. Similarly, Schwendimann (2019), who studied the process of development of concept maps, found differences between novices and experts mainly in the professional terminology experts use for their concepts and linking words. Therefore, the conclusion that an increase is always better is contradicted by many studies of expertise development in the field of cognitive science (Chi, Glaser and Farr 1988).
Our review did not aim to discuss the value of open-ended concept maps as an instrument to study knowledge or knowledge development. Nor did it aim to explore the validity of different methods of analysis, or to determine which method of analysis is most valid, as these methods of analysis can be related to different research purposes or research objects (Kim and Clariana 2015). Quantitative analysis is based on an evaluative approach which assesses knowledge or growth based on specific measures or characteristics, such as size or complexity. Similarity and holistic methods analyze concept maps against expected structures or content and have an evaluative or prescriptive purpose. Similarity analysis and holistic analysis take a more normative approach to the analysis of open-ended concept maps, by comparing them to a reference map or scoring the map based on a rubric, respectively. Qualitative analysis, on the other hand, has a more explorative purpose. Merriam and Grenier (2019) explain similar purposes of qualitative research methods and point out that more open or qualitative analysis is suited to more explorative purposes, while more restricted approaches are more appropriate for evaluative purposes.

This review has several limitations. Only studies applying open-ended concept maps to study knowledge and learning were included. The results of this study could be different when reviewing studies that apply more closed concept maps, or studies that combine the application of concept maps with other research instruments. Also, the research objects are not included in Fig. 4, as references to research objects were ambiguous in previous studies and it was unclear whether studies referred to the same objects differently or studied different objects. As a result, this review focused on the process of analysis, disregarding what aspects of knowledge or learning were studied. Moreover, Fig. 4 provides no guidelines for alignment with other aspects of coherence in qualitative studies, for instance the philosophical positioning, the fundamental beliefs or the theoretical perspective taken (Caelli et al. 2003; Davis 2012). The findings in this review could be substantiated by further research exploring implicit choices or implicit methodological coherence that could not be extracted from the articles. This could take the form of interviews with the authors of the studies about their methodological assumptions, approaches and chosen methods of analysis. They could then be asked why and how they made choices related to their research object, the concept map characteristics they considered and the conclusions they drew.

While previous reviews by Ruiz-Primo and Shavelson (1996), Anohina and Grundspenkis (2009) and Strautmane (2012) explored aspects of the process of analysis separately, or related the method of analysis applied to the level of openness of the concept map task, this review examined three aspects of the analysis process in coherence. By doing so, the present study aims to inform the ongoing discussion in the social sciences and beyond about the quality of analysis. Unfortunately, open-ended concept map studies often still feature ambiguous language use. This ambiguity decreases transparency about the method of analysis, the concept map characteristics, and, ultimately, the conclusions drawn and the methodological coherence of these aspects (Chenail et al. 2011; Seale 1999).
Clarifying which approach is chosen to make sense of the information in open-ended concept maps provides a method of dealing with the idiosyncratic information provided by the respondents and will support other researchers or policy makers in better interpreting and valuing the conclusions drawn. By describing studies on the basis of the proposed distinction between the methods of analysis applied and how they interpret or value information from concept maps, the constraints of each method can be discussed, and findings or conclusions can be understood with a degree of confidence (Chenail et al. 2011). This distinction between methods of analysis can enhance transparency about the conclusions to which a specific method of analysis can and cannot lead. Clarity about the choices within and across studies is stimulated by uniform referencing to these choices, which decreases the confusion caused by the variety of ways in which scholars refer to similar constructs. In future research, the guidelines provided in this study will assist scholars in making more informed choices for their analysis of the idiosyncratic data gathered with open-ended concept maps.

Appendix A

See Table 4. Surviving fragments of the meta-matrix:
- "... The possible solutions as presented in teachers' concept maps concern teacher activities, organizational resources and joint working with other teachers and with students."
- Pearsall, Skipper and Mintzes (1997), Novak and Gowin's (1984) analysis: A substantial amount of knowledge restructuring takes place; 75% is "accretion" or "tuning"; "radical" change is concentrated in the first few weeks; "active" and "deep" learning result in elaborate, well-differentiated knowledge structures.
- Read (2008)
Incidence and Outcomes of COVID-19 in People With CKD: A Systematic Review and Meta-analysis

ABSTRACT

Rationale & Objective: Coronavirus disease 2019 (COVID-19) disproportionately affects people with chronic diseases such as chronic kidney disease (CKD). We assessed the incidence and outcomes of COVID-19 in people with CKD.

Study Design: Systematic review and meta-analysis by searching MEDLINE, EMBASE, and PubMed through February 2021.

Setting & Study Populations: People with CKD with or without COVID-19.

Selection Criteria for Studies: Cohort and case-control studies.

Data Extraction: Incidences of COVID-19, death, respiratory failure, dyspnea, recovery, intensive care admission, hospital admission, need for supplemental oxygen, hospital discharge, sepsis, short-term dialysis, acute kidney injury, and fatigue.

Analytical Approach: Random-effects meta-analysis, with evidence certainty adjudicated using an adapted version of GRADE (Grading of Recommendations Assessment, Development and Evaluation).

Results: 348 studies (382,407 participants with COVID-19 and CKD; 1,139,979 total participants with CKD) were included. Based on low-certainty evidence, the incidence of COVID-19 was higher in people with CKD treated with dialysis (105 per 10,000 person-weeks; 95% CI, 91-120; 95% prediction interval [PrI], 25-235; 59 studies; 468,233 participants) than in those with CKD not requiring kidney replacement therapy (16 per 10,000 person-weeks; 95% CI, 4-33; 95% PrI, 0-92; 5 studies; 70,683 participants) or in kidney or pancreas/kidney transplant recipients (23 per 10,000 person-weeks; 95% CI, 18-30; 95% PrI, 2-67; 29 studies; 120,281 participants). Based on low-certainty evidence, the incidence of death in people with CKD and COVID-19 was 32 per 1,000 person-weeks (95% CI, 30-35; 95% PrI, 4-81; 229 studies; 70,922 participants), which may be higher than in people with CKD without COVID-19 (incidence rate ratio, 10.26; 95% CI, 6.78-15.53; 95% PrI, 2.62-40.15; 4 studies; 18,347 participants).

Limitations: Analyses were generally based on low-certainty evidence. Few studies reported outcomes in people with CKD without COVID-19 to calculate the excess risk attributable to COVID-19, and potential confounders were not adjusted for in most studies.

Conclusions: The incidence of COVID-19 may be higher in people receiving maintenance dialysis than in those with CKD not requiring kidney replacement therapy or those who are kidney or pancreas/kidney transplant recipients. People with CKD and COVID-19 may have a higher incidence of death than people with CKD without COVID-19.

Introduction

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in December 2019 as the cause of coronavirus disease 2019 (COVID-19), as designated by the World Health Organization (WHO). By May 2021, over 151 million people were confirmed to have been infected with SARS-CoV-2, with over 3.1 million deaths reported due to COVID-19 globally. 1 Initial evidence suggested a higher incidence of severe COVID-19 in people with chronic diseases including chronic kidney disease (CKD), 2 and an association with acute kidney injury (AKI) due to SARS-CoV-2 infection of the tubular epithelium and podocytes, 3 cytokine production, cardiac dysfunction, and hypoxemic tubular injury. 4 Initial evidence also suggested a poor prognosis of COVID-19 in kidney transplant recipients, 5 with a 25% mortality rate, and further evidence highlighting CKD as a risk factor for severe COVID-19 has since been published. 6
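The person-week rates quoted above are simply event counts divided by accumulated follow-up time, rescaled to a convenient denominator. A minimal sketch in Python, with hypothetical counts chosen only to reproduce the 105-per-10,000 figure reported for dialysis patients:

```python
def rate_per(events, person_weeks, per=10_000):
    """Incidence rate rescaled to `per` person-weeks of follow-up."""
    return events / person_weeks * per

# Hypothetical cohort: 210 infections over 20,000 person-weeks of
# follow-up, chosen only to reproduce the 105-per-10,000 figure above.
print(rate_per(210, 20_000))            # 105.0 per 10,000 person-weeks
print(rate_per(210, 20_000, 100_000))   # same rate per 100,000 person-weeks
```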
A systematic analysis of the incidence and prognosis of COVID-19 is necessary to understand the extent and severity of clinical outcomes associated with COVID-19 in people with CKD. This can inform patient and clinician knowledge, treatment and vaccination strategies, resource management, stratification according to risk of clinical outcomes in clinical guidelines, public health policy-making, and intervention trials. In this systematic review, we evaluated the incidence and outcomes of COVID-19 in adults and children with CKD, including those treated with kidney replacement therapy (KRT).

Data sources and searches

We searched MEDLINE, EMBASE, and PubMed (using LitCovid) through February 2021 using a highly sensitive search strategy designed by an information specialist (Item S1). We have reported this review in accordance with the PRISMA statement. 7

Study selection

We included retrospective and prospective cohort studies or case-control studies investigating the incidence of COVID-19 or outcomes in adults and children with any level of CKD (including CKD treated by dialysis [CKD G5D] and kidney or pancreas-kidney transplant recipients [KTRs]) with or without COVID-19. We defined CKD using the KDIGO CKD guideline, 8 and defined COVID-19 using the WHO criteria, 9 based on a positive reverse transcriptase-polymerase chain reaction (RT-PCR) assay for SARS-CoV-2. 10 We included grey literature, studies of any language, and studies including people with or without CKD from which we extracted data on people with CKD. We excluded case series, case reports, randomized and quasi-randomized trials, participants without pre-existing CKD, and diagnosis of COVID-19 infection by serological evaluation. We excluded studies that included only deceased patients, patients admitted into the intensive care unit (ICU), or hospitalized patients from the analyses of the incidence of death, ICU admission, and hospital admission, respectively.

Risk of bias assessment

Eleven review authors (E.Y.M.C., PN, AK, TEC, VMS, MR, EA, SJ, AL, DJJD, JCC) assessed the risk of bias of the included studies, which was checked by two authors (EYMC, VMS). We resolved disagreements through discussion with another author (EMH, GFMS). We used the Quality In Prognosis Studies (QUIPS) tool for risk of bias assessment (Item S2). 14,15

Data synthesis

We pooled the incidence of COVID-19 and outcomes in people with CKD using a random-effects model with the Freeman-Tukey double arcsine transformation for variance stabilization. 16 Meta-analysis of single proportions was performed using metaprop in the R meta package. 17 We used Wilson's approach for the calculation of confidence intervals (CIs), 19 and reported the range of the effects using prediction intervals (PrIs) for analyses including at least three studies, to improve clinical interpretation of the range of COVID-19 incidence and related outcomes. [20][21][22][23] Where data were available in people with COVID-19 and without COVID-19, we pooled incidence rates as an incidence rate ratio (IRR) using a random-effects model to evaluate the prognostic association between COVID-19 and outcomes in people with CKD. For meta-analysis of incidence rates, we used the metainc function in the R meta package. 17
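As a rough illustration of the pooling machinery named above (the review itself used metaprop and metainc from the R meta package), the sketch below applies the Freeman-Tukey double arcsine transform and a random-effects pool to invented study counts. The DerSimonian-Laird tau-squared estimator, the 1.96 quantile, and the sin-squared back-transform are simplifying assumptions of this sketch; the meta package uses t-based prediction intervals and a harmonic-mean-based inversion, and the review does not state which tau-squared estimator was used.

```python
import math

def ft_transform(events, n):
    """Freeman-Tukey double arcsine transform of a proportion.
    Returns the transformed value and its approximate variance."""
    t = math.asin(math.sqrt(events / (n + 1))) + math.asin(math.sqrt((events + 1) / (n + 1)))
    return t, 1.0 / (n + 0.5)

def dersimonian_laird(ys, vs):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator."""
    k = len(ys)
    w = [1 / v for v in vs]
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    wr = [1 / (v + tau2) for v in vs]
    pooled = sum(wi * yi for wi, yi in zip(wr, ys)) / sum(wr)
    return pooled, math.sqrt(1 / sum(wr)), tau2

# Invented study data: (events, sample size) per study.
studies = [(12, 300), (40, 950), (7, 120), (25, 640)]
ts, vs = zip(*(ft_transform(e, n) for e, n in studies))
pooled, se, tau2 = dersimonian_laird(list(ts), list(vs))

# Approximate 95% prediction interval on the transformed scale
# (a t quantile with k-2 df is replaced by 1.96 for brevity).
half_width = 1.96 * math.sqrt(tau2 + se ** 2)
print(pooled - half_width, pooled + half_width)

# Simple back-transform to a proportion (sin^2 of half the value);
# the meta package uses a harmonic-mean-based inversion instead.
print(math.sin(pooled / 2) ** 2)
```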
We adjudicated evidence certainty using an adapted GRADE framework for prognostic factor research, which involves evaluating six factors that can decrease evidence certainty (phase of investigation, study limitations, inconsistency, indirectness, imprecision, publication bias) and two factors that can increase it (moderate-to-large effect size, exposure-response gradient). 24

Sensitivity analysis

We conducted sensitivity analyses based on study sample size (≥1,000 versus <1,000 participants), studies with a high risk of bias in any methodological domain, and studies reporting both the incidence of COVID-19 and death. Where data were available, we performed subgroup analysis based on age, CKD stage, COVID-19 severity (defined by the study investigators), case definitions of COVID-19 (suspected, probable, and confirmed), WHO region (African, Americas, South-East Asia, European, Eastern Mediterranean, or Western Pacific), World Bank income group (low, low-middle, upper-middle, or high income), study location (hospital or community), diabetes, and obesity. Subgroup analysis was performed by testing the significance of the between-study variance with a chi-square Q-test using the R packages meta and metainc. 17 Studies reporting the incidence of COVID-19 or outcomes across CKD categories (CKD without KT, CKD G5D, and/or KTRs) but not in each subgroup were included in overall analyses but excluded from subgroup analyses. Most studies were performed in Europe, the Americas, and the Western Pacific, and in high- to upper-middle-income countries (Item S4).

Risk of bias assessment

Study participation was adequate in 178 studies (51%), inadequate in four studies, and unclear in the remaining studies (Figure 2). Risk of bias from study attrition was low in 27 studies (30% of potential prospective studies), high in five studies, and unclear in the remaining studies. [...] ([...] studies; 3,210 participants) (p=0.1 between CKD subgroups).

Other

The incidences of dyspnea, recovery from COVID-19, ICU admission, hospital admission, hospital discharge, need for supplemental oxygen, and sepsis are reported in Table 2 [...] ([...] studies; 5,271 participants) (p=0.9 between CKD subgroups). The incidences of AKI, death-censored kidney allograft loss, myocardial infarction, stroke, and fatigue are reported in Table 2 and Item S6. A single study reported on vascular access thrombosis, and none reported on kidney failure, life participation, or limb amputation.

Sensitivity and subgroup analyses

Sensitivity analysis including only studies reporting both the incidence of COVID-19 and death revealed a higher incidence of death in CKD G5D and KTRs compared to CKD without KRT. The incidences of COVID-19, death, and respiratory failure in people with CKD were higher in studies with a low or unclear risk of bias, a small sample size, or from the Americas or Europe compared to studies with a high risk of bias, a large sample size, or from other WHO regions. Studies from high-income countries reported a higher incidence of COVID-19 in people with CKD, respiratory failure, hospital admission, and short-term dialysis compared to upper- and lower-middle-income countries. High- and low-income countries reported a higher incidence of death compared to upper- and lower-middle-income countries. Children with CKD were reported to have a lower incidence of COVID-19 and associated outcomes compared to adults with CKD. There was no association between diabetes or obesity and death in people with CKD and COVID-19 (Item S7, Item S8).
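The between-subgroup comparison used in these analyses (a chi-square Q-test on pooled subgroup effects, per the methods above) can be sketched in one standard form: pool each subgroup, then compare the subgroup estimates against their overall weighted mean on chi-square with G-1 degrees of freedom. The subgroup values below are invented for illustration and are not the review's data.

```python
from scipy.stats import chi2

def subgroup_q_test(estimates, ses):
    """Chi-square Q-test for differences between subgroup pooled effects.

    estimates, ses: pooled effect and its standard error for each subgroup
    (e.g., log incidence rates for CKD without KRT, CKD G5D, and KTRs).
    """
    w = [1.0 / s ** 2 for s in ses]
    overall = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - overall) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    return q, df, chi2.sf(q, df)

# Invented subgroup estimates on the log-rate scale, for illustration only.
q, df, p = subgroup_q_test([-6.1, -4.6, -5.9], [0.40, 0.15, 0.22])
print(f"Q = {q:.2f}, df = {df}, p = {p:.3g}")
```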
Discussion

Three hundred and forty-eight studies reported the incidence or prognosis of COVID-19 in people with CKD. The certainty of the evidence was generally low due to study limitations, inconsistency in the findings between studies, and/or imprecision in the calculated estimates. Study participants were mostly hospitalized adults; from Europe, the USA, or China; and from high- or upper-middle-income countries, which may limit the generalizability of our findings. In low-certainty evidence, we found the COVID-19 incidence in people with CKD was 66 per 10,000 person-weeks, which is higher than the global COVID-19 incidence of 5 per 100,000 person-weeks. 27 This may be in part due to ascertainment bias, since people with CKD are more likely to receive close healthcare monitoring than the general population. In low-certainty evidence, we found a higher incidence of COVID-19 in people with CKD G5D compared to people with CKD without KRT or KTRs, which may be attributable to greater exposure to SARS-CoV-2 from greater use of health facilities among people on maintenance hemodialysis. 28 This hypothesis is supported by the single included study reporting a COVID-19 incidence in people on home-based dialysis of 6 per 10,000 person-weeks, 29 which is similar to the reported incidence in the general population in Italy and the USA of 2-6 per 10,000 person-weeks. 30,31

In low-certainty evidence, people with CKD and COVID-19 may have a ten-fold higher incidence of death compared to those without COVID-19. The incidence of death from COVID-19 of 16 per 1,000 person-weeks in the general population is lower than our findings in people with CKD, 1 which may be attributed to a dysfunctional immune system in CKD. [32][33][34] We found a higher incidence of COVID-19 and associated death in people with CKD from the Americas and Europe compared to other regions, and in adults compared to children, which is similar to the pattern in the general population. 1,35,36 Therefore, the heterogeneity observed in most of our analyses could partially be explained by similar variations in the general population based on geographical location and age. However, such significant heterogeneity lowers our confidence in the summary estimates, which need to be interpreted in the context of the 95% PrIs. Data were absent on outcomes other than death in people with CKD without COVID-19, and on COVID-19 severity stratified by CKD subgroup, preventing comprehensive evaluation of COVID-19 as a prognostic factor in people with CKD.

Whilst multiple systematic reviews have evaluated the prognostic impact of pre-existing CKD in people with COVID-19, 37-39 the incidence rates of COVID-19 and associated outcomes in people with CKD have not been comprehensively assessed. A systematic review of people receiving maintenance hemodialysis that included 29 studies reported an incidence of COVID-19 of 7.7%, death in 22.4%, ARDS in 18.5%, and ICU admission in 6.6%. 40 Another systematic review of KTRs with COVID-19 that included 15 studies reported a mortality rate of 24% and AKI in 50%. 41 Two systematic reviews also found CKD to be associated with severe COVID-19. 42,43 However, time periods were not reported in these reviews, preventing calculation of an incidence rate and comparison with our results.
These systematic reviews also included substantially fewer studies than our review, and none adjudicated evidence certainty using GRADE or reported other COVID-19-COS outcomes. A large systematic review found an increased risk of death in people with CKD G5D or organ transplant recipients with COVID-19 compared to people with CKD without KRT, 44 which is not consistent with our findings. These discrepant findings could be due to a falsely high incidence of death in people with non-KRT CKD in our study, because of an inaccurate denominator of all people with non-KRT CKD or the reporting of mostly hospitalized people. 44,45

There are several strengths and limitations to this review. We performed a systematic search designed by an information specialist for studies evaluating COVID-19 in people with any level of CKD. We evaluated both COVID-19 incidence and prognosis using the COVID-19-COS and SONG core outcomes. Limitations of our review included, first, the limited measurement of known confounding factors impacting the incidence and outcomes of COVID-19 in people with CKD, such as old age, male sex, Black or South-Asian ethnicity, lower socioeconomic status, obesity, diabetes, malignancy, or respiratory, cardiovascular, liver, neurological, or autoimmune diseases. 44 Second, prognostic outcomes in people with CKD but without COVID-19 were rarely reported, preventing calculation of the excess risk attributable to COVID-19. Third, estimating the incidence of COVID-19 and prognostic outcomes is especially difficult for people with CKD without KRT, which is often under-reported, 46 lowering the generalizability of our results for people with CKD without KRT. Fourth, detection of CKD in our review is limited by the lack of reporting of albuminuria and a high proportion of studies (49%) in which there was a high or unclear risk of inadequate participation by all eligible participants with CKD. Fifth, only a subset of the prespecified outcomes was reported in each study. Whilst this may be understandable for kidney-specific outcomes, selective reporting of COVID-19-COS core outcomes represents a significant risk of reporting bias. 47 This was highlighted by our finding of a higher incidence of COVID-19 in people with CKD G5D but without a difference in the incidence of death, due to different studies reporting each outcome. Indeed, sensitivity analysis including only studies reporting both the incidence of COVID-19 and death found that CKD G5D was also associated with a higher incidence of death. Sixth, the median study duration varied from 7-274 days, which was inadequate in a significant proportion of studies for the detection of patient-level outcomes. Seventh, variability in study definitions of COVID-19 and prognostic outcomes may have affected the accuracy of our results. The method of COVID-19 diagnosis was inadequate or unclear in 37% of included studies, the definition of recovery was not defined in most studies, and there was heterogeneity in the definition of AKI. Most studies were retrospective in nature, which may lead to higher risks of selection bias, misclassification bias, and confounding compared to prospective studies. Lastly, we reported the incidence rate of COVID-19 in people with CKD, which assumes a constant risk of COVID-19 regardless of the time interval, though this assumption is unlikely to be true since the incidence of COVID-19 in the general population varies month-to-month. 1
In particular, our search up to February 2021 does not account for the impact of COVID-19 vaccination implementation strategies in many countries or the impact of recent surges of COVID-19 in countries such as India.

Our systematic review found that people with CKD may be at a higher risk of COVID-19 than the general population and may be at a higher risk of COVID-19-related death compared to people with CKD without COVID-19. Decision-making by clinicians and policy-makers should focus on preventative measures for people with CKD, in particular people receiving maintenance dialysis. Future studies measuring and adjusting for confounders, and adequately powered to report the COVID-19-COS and SONG-CKD core outcomes in people with CKD with and without COVID-19, are needed.
Efficacy of rintatolimod in the treatment of chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME)

ABSTRACT

Chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) is a poorly understood, seriously debilitating disorder in which disabling fatigue is a universal symptom in combination with a variety of variable symptoms. The only drug in advanced clinical development is rintatolimod, a mismatched double-stranded polymer of RNA (dsRNA). Rintatolimod is a restricted Toll-Like Receptor 3 (TLR3) agonist lacking activation of other primary cellular inducers of innate immunity (e.g., cytosolic helicases). Rintatolimod also activates interferon-induced proteins that require dsRNA for activity (e.g., 2ʹ-5ʹ adenylate synthetase, protein kinase R). Rintatolimod has achieved statistically significant improvements in primary endpoints in Phase II and Phase III double-blind, randomized, placebo-controlled clinical trials, with a generally well tolerated safety profile, supported by open-label trials in the United States and Europe. The chemistry, mechanism of action, clinical trial data, and current regulatory status of rintatolimod for CFS/ME, including current evidence for the etiology of the syndrome, are reviewed.

Introduction

Chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME) is a seriously debilitating disorder in which disabling fatigue is a universal symptom in combination with a variety of variable symptoms that include sleep disorders, cognitive defects, joint/muscle pain, and headaches [1]. The fatigue is not improved by bed rest and may be worsened by physical activity. CFS/ME is an economically devastating illness whose societal costs to the U.S. economy are conservatively estimated at more than $20B yearly [2]. Heart failure, cancer, and suicide are the most common causes of death (59.6%) in patients with CFS/ME memorialized by the National CFIDS Foundation [3]. The mean age of death from cancer was 47.8 years and from suicide was 39.3 years, significantly younger than in the general population. Although limited by the informal collection of data, the analysis remains the most comprehensive to date and suggests a significantly increased risk of early death. The etiologic/pathogenic basis for CFS/ME is unknown and may be multifactorial, with a variety of microbes and hormonal and immunological abnormalities linked to its pathogenesis [4][5][6]. Moreover, CFS/ME may have a familial component [7] and be dependent on genetic signatures [8][9][10]. Recently discovered time-dependent plasma immune signatures indicate a dynamic and evolving pathogenesis [11]. With no approved drug therapy available, treatment is aimed at symptom relief and improved ambulatory function [12]. Treatments include over-the-counter and off-label prescription drugs, behavioral modifications, and graded exercise therapies.

The rationale for the initial open-label trials with rintatolimod in CFS/ME was based on its recognized broad antiviral and immunomodulatory properties as an inducer of interferon (IFN) [13]. These properties are now known to be mediated by its activity as a dsRNA Toll-like receptor 3 (TLR3) agonist [14] in the induction of innate immunity and the initial cellular orchestration of the progression to adaptive immune responses, which are mediated in part by inflammatory chemokines and cytokines [15]. TLR3 is abundant in functional dendritic cells (DCs), central to the host adaptive immune response system [16].
All of the TLRs use a MyD88-dependent signaling pathway with the exception of TLR3, which uses the MyD88-independent TRIF pathway [17]. Two other dsRNA-activated inducers of gene expression that initiate innate immune responses are the cytosolic helicases MDA5 and RIG-I. Rintatolimod, however, does not activate these helicases [18,19]. The selectivity of rintatolimod for TLR3 without helicase activation preserves the non-MyD88 TRIF pathway, with its reduction in inflammatory cytokine induction relative to the MyD88/MAV pathways, and is responsible, at least in part, for its improved safety record in clinical trials compared to other forms of dsRNA that cause helicase activation [20]. The initial success of open-label trials provided the basis for the double-blind, placebo-controlled Phase II and Phase III clinical trials as well as its FDA designation as an Orphan Drug and an FDA-authorized treatment protocol.

CFS/ME is a syndrome that may be severely debilitating. Clinical trials are difficult to conduct due to spontaneous remissions, although remissions become less frequent with increasing time from initial symptom onset. There have been several attempts by big pharma to introduce new pharmaceutical entities, although none have been successful to date. Despite efforts to stimulate drug discovery for CFS/ME by the FDA [22], there appear to be no pending programs. The majority of CFS/ME patients experience a sudden onset of symptoms, which many can identify as a specific date of acute onset [23,24]. The distinction of acute versus slow onset may represent a key element of differential pathogenesis, which has not been explored adequately. The signs and symptoms of acute onset are suggestive of an infectious basis in which some patients experience a prolonged extension of symptoms with profound fatigue as the unifying element.

Introduction to the drug

Rintatolimod is a dsRNA that functions as an activating ligand for TLR3 (Figure 1(a)). Unlike other dsRNAs, the activity of rintatolimod is limited to TLR3, with no induction of the cytosolic helicases (Figure 1(b)) [18,19]. The importance of this unique property of rintatolimod is a reduction of the inflammatory cytokines that have limited the clinical utility of other TLR-activating ligands that use the inflammatory cytokine-inducing MyD88-dependent pathway of intracellular signaling [20].

Chemistry

Rintatolimod is comprised of a single polypurine (inosine) strand hydrogen-bonded with a single polypyrimidine strand (cytosine) containing an inosine H-bond mismatched pyrimidine (uridine) for every 12 cytosines. These are assembled into a double-stranded RNA structure (Poly I: Poly C 12 U) that is maintained under physiological conditions by typical 'Watson-Crick' hydrogen bonding between purine and pyrimidine base pairs. The introduction of the pyrimidine base uridine at a 1:12 ratio (U:C) into the polypyrimidine strand maintains the overall double-stranded structure, but creates sites of thermodynamic instability (Figure 2) that allow rapid hydrolysis of Poly I: Poly C 12 U by serum nucleases to simple nucleosides compared to the parent Poly I: Poly C. Table 1 provides the specifications for clinical-grade rintatolimod, which include the elemental composition, CAS registration number, nucleotide composition, chemical abstract names, other identifying names, and formulation. Rintatolimod for clinical use is provided as a sterile liquid formulation of high-MW dsRNA polymer, freely soluble under physiological conditions, with a shelf life of over 7 years at 2-8°C.

Figure 1. MyD88-dependent and MyD88-independent signaling pathways for the TLRs and helicases. A. Intracellular pathways for MyD88-independent TLR3 nuclear signal transduction initiated by TRIF binding to the TIR of the TLR3 homodimer. TLR3 monomers dimerize with binding of the dsRNA ligand. Activated TRIF initiates two pathways.
The first results in the transitory induction of the IFNs. The second is a species-variable pathway (rodents ≫ primates) that operates through NFκB (dashed line), which transiently induces the production of inflammatory cytokines. The adapter protein cascade initiated by TRIF (TIR-domain-containing adapter-inducing interferon) includes TBK1 (TANK-binding kinase 1, binds to TRAF3), TRAF1/3 (TNF receptor associated factors), NAP1 (Nck-associated protein 1), IKK (IκB kinase), IKKε (inhibitor of IκB kinase), PI3K (phosphoinositide 3-kinase), IRF3/7 (interferon regulatory transcription factors), TAK1 (protein kinase of the MLK family), TAB1 (TGF-β activated kinase 1), RIP1 (receptor-interacting [TNFRSF] kinase 1), NFκB (nuclear factor kappa-light-chain-enhancer of activated B cells), and IκB (inhibitor of NFκB). The ectodomain of TLR3 consists of a horseshoe-shaped structure populated by 23 leucine-rich β-sheets (orange disks) connected by non-ordered chains containing RNA-binding residues. The transmembrane α-helices (solid orange) connect the ectodomain to the cytoplasmic TIR domain (dark green). The phosphorylated TIR binds TRIF to initiate the adapter protein cascade. B. Intracellular pathways, MyD88-dependent, for TLR 1/2 and 1/6 heterodimers and TLR 4-10 homodimers, with the diverse PAMP ligands represented by a green bar (placement not necessarily as accurate as that of dsRNA with TLR3 in A). TLR4 uses both the MyD88-dependent and -independent pathways. Reproduced from Mitchell

Pharmacokinetics

The pharmacokinetic analysis of rintatolimod was conducted early in its clinical development under the auspices of a National Institutes of Health grant (CA29545) awarded to Hahnemann University. Extensive studies were conducted on the pharmacokinetics of rintatolimod administered as an intravenous infusion to human subjects in Phase I/II clinical trials (CFS/ME and non-CFS/ME patients) [25,26]. Limited pharmacokinetic analysis was conducted in animals. Rintatolimod extracted from blood is in the dsRNA conformation and is amenable to quantitation by a solution hybridization technique using a radioactive probe under chaotropic salt conditions, which inhibit RNase degradation while allowing molecular-probe hybridization displacement of the homologous RNA strand (>50 bp limit of detection). Since a minimum of 40-50 nucleotides is required for binding to TLR3 [27,28], the half-life measured in this assay is also the half-life for the ability of rintatolimod to induce innate immune responses. The assay is linear over a range of 3-530 μg/ml and has been validated in a multi-day study shown to be reproducible within 15% and to be 94% accurate [25,26]. There were a total of 132 patient-visits for which one or more of the derived pharmacokinetic parameters were available for analysis from the intravenous administration of 200-700 mg of rintatolimod over an average 30-minute infusion time (Figure 3). Immediate post-infusion analysis of rintatolimod and its metabolites showed an average of 60 ± 27 percent of the theoretical maximum, indicating significant degradation during the infusion period.
First-order decay kinetics of rintatolimod from blood as a function of time is illustrated for infusion doses of 200 mg (Figure 3(a)), 400 mg (Figure 3(b)), and 700 mg (Figure 3(c)) and is consistent with an open one-compartment model. Since the poly(C 12 U)n strand is more rapidly hydrolyzed than the poly(I)n strand, the functional biological blood half-life (i.e. dsRNA 40-50 bp) is similar to the physical half-life measured by this probe hybridization method. The pharmacokinetic data show that drug accumulation arising from prolonged chronic treatment did not occur. Cmax and Cmax/Dose decreased with duration of treatment (non-CFS/ME) or remained constant (CFS/ME). Total exposure, AUC, was unaffected by duration of treatment for both disease groupings. Elimination half-life increased 60% with prolonged treatment time (>12 weeks); nevertheless, carryover is not observed since the longest half-life observed in any of the patients, 72 minutes, was still less than 1/50 of the shortest proposed dosing interval (72-96 hours, or twice/week). Gender as a factor in rintatolimod pharmacokinetics is unlikely to have any clinically significant impact on the selection of doses for the treatment of CFS/ME. Gender was detected as a statistically significant factor for clearance, Cmax and Cmax/Dose in selected populations; in none of the cases, however, were the differences (25-30%) likely to be clinically significant, given the wide safety margin associated with the recommended dosing in CFS/ME patients. Age was not found to be a factor in the pharmacokinetics of rintatolimod. There was insufficient diversity in race in the pharmacokinetically evaluated population to permit the examination of the impact of race as a factor.

Figure 2. Diagrammatic representation of rintatolimod. The dsRNA structure is maintained by hydrogen bonding. The introduction of a uracil into the poly C strand provides thermodynamic instability with an increased susceptibility to blood nuclease hydrolysis. The poly I strand is represented by blue (inosine). The poly C 12 U bases are represented by green (cytosine) and red (uracil).

Pharmacodynamics and mechanism of action

TLR3 is activated by dsRNA [14]. The Toll-Like Receptors (TLRs) are a family of Class I transmembrane receptors (n = 10 in humans) that bind to pathogen-associated molecular patterns (PAMPs), which function as a first line of defense against microbial pathogens by the induction of innate immunity [29]. The PAMP for TLR3 is dsRNA, detected in the endosomes of antigen-presenting cells and on the cell surface of selected cells including endothelial cells and airway epithelium [30][31][32]. This expression pattern is consistent with sentinel activity for the detection of replicating virus in the host organism. The cellular location of TLR3 is modulated by UNC93B1, whose transcription is up-regulated by dsRNA and which promotes trafficking of differentially glycosylated TLR3 to the plasma membrane [33]. Binding of dsRNA to TLR3 allows dimerization of TLR3 monomers and activation of a cytosolic phosphorylation that initiates a cascade of molecular events leading to transient activation of hundreds of genes [34]. Rintatolimod is restricted to TLR3 activation [18,19]. In contrast, other dsRNA configurations such as poly I: poly C activate the cytosolic helicases that utilize the pro-inflammatory MyD88 pathway, whereas TLR3 is the only TLR that exclusively uses a non-MyD88 pathway (TRIF), which minimizes the expression of systemic cytokines [20].
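The no-carryover argument made in the pharmacokinetics section above (longest observed half-life of 72 minutes versus a shortest proposed dosing interval of 72 hours) can be checked numerically. A minimal sketch in Python, assuming the open one-compartment model with first-order elimination that the text describes:

```python
import math

def accumulation_ratio(half_life_min, interval_min):
    """Steady-state accumulation ratio for repeated dosing in an open
    one-compartment model with first-order elimination:
    R = 1 / (1 - exp(-k * tau)), with k = ln(2) / half-life."""
    k = math.log(2) / half_life_min
    return 1.0 / (1.0 - math.exp(-k * interval_min))

# Longest observed half-life (72 min) against the shortest proposed
# dosing interval (72 h = 4,320 min), as stated in the text above.
print(accumulation_ratio(72, 72 * 60))   # ~1.0, i.e. no carryover

# Fraction of a dose still circulating when the next dose is given.
k = math.log(2) / 72
print(math.exp(-k * 72 * 60))            # effectively zero (~1e-18)
```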
Figure 4 is a molecular model that illustrates rintatolimod-driven TLR3 dimer formation by non-covalent bonding. Studies with the dsRNA homopolymer poly I:poly C have demonstrated a positive allosteric effect dependent on the size of the dsRNA [27] by lateral clustering [35]. Although TLR3 is activated with a minimum size of 40-50 bp [27,28], approximately 90 bp are required for the maturation of dendritic cells to mature antigen-presenting cells [36]. The release minimum-MW specifications for rintatolimod take advantage of this allosteric property of TLR3, which favors activity with higher-MW dsRNAs [27,33].

Clinical efficacy

Of the thirteen studies conducted with rintatolimod, nine were performed in severely debilitated CFS/ME patients (KPS ≤60). Three of these studies constitute the main studies of efficacy: double-blind, randomized, placebo-controlled Phase II/III clinical trials at multiple sites (AMP-502, AMP-502T, and AMP-516). AMP-502T was a limited extension, which provided further efficacy and safety information. AMP-516 had a crossover in which subjects were initially randomized to active treatment or placebo, followed by a blinded cross-over phase (AMP-516C) in which all subjects received active treatment while remaining blinded to the initial treatment assignment. Five additional open-label studies (AMP-501, AMP-504, AMP-509, AMP-511, and AMP-502E) measured safety and various efficacy parameters as well as long-term effects. Taken together, the entire database provides a consistent picture of durable activity in this population of severely disabled subjects, with an accumulation of over 90,000 doses. More than 1200 patients have been enrolled in various rintatolimod studies, in which over 830 unique CFS/ME patients received active drug.

In order to maintain direct comparability between rintatolimod clinical trials, only those subjects who met the inclusion criteria of the original 1988 criteria of the CDC [37] and who exceeded the required duration of the disease complex (1 year versus 6 months) were enrolled. Subjects enrolled in the Phase III clinical trial AMP-516 met both the CDC 1988 diagnostic criteria and the more relaxed 1994 CDC definition [38]. An international consortium proposed in 2011 that Myalgic Encephalomyelitis was a preferable term for the syndrome complex and that, in addition to the defining post-exertional exhaustion, diagnosis required additional evidence of neurological, immune, and energy impairments [39]. Although the Institute of Medicine has recently proposed Systemic Exertion Intolerance Disease (SEID) as a more descriptive name for CFS/ME [40], profound fatigue remains the core descriptor for all definitions. The rintatolimod clinical trials focused on alleviation of that core symptom and its effect on quality of life. The majority of patients had long disease durations of 6-9 years before enrollment, which illustrates the chronicity of severe illness in some patients, for whom rintatolimod was targeted. The patients randomized were representative of the severe disease state (KPS ≤60), and there were no important subsets of adult patients who were excluded. To date, no children have been studied.

Figure 3. Each curve is the product of 'n' infusions, using first-order decay kinetics. Revised from the doctoral thesis of Kenneth Strauss [25] with permission from the author.

Open-label trials [41,42]

There have been five open-label studies of rintatolimod in CFS/ME, one of which is an on-going cost-recovery study authorized by the FDA.
These studies are summarized in Table 2. They provided guidance for the Phase II/III double-blind, placebo-controlled trials. AMP-511 is a continuing open-label treatment protocol that provides additional current safety data for rintatolimod in humans. AMP-516C was a cross-over trial at the conclusion of the placebo-controlled AMP-516 Phase III clinical trial [43]. Although technically an open-label study, subjects remained blinded to their original treatment assignment.

Double-blind, placebo-controlled, randomized multisite Phase II and Phase III trials

There have been three placebo-controlled clinical trials of rintatolimod in CFS/ME. AMP-502 was a Phase II study and AMP-516 was a Phase III study, and in both the primary endpoint achieved statistical significance (p ≤ 0.05). AMP-502T was a small extension of AMP-502 under blinded conditions for dose-escalation evaluation purposes. The trials are summarized in Table 3.

Clinical trials

AMP-502 was a ninety-two patient, 24-week Phase II trial at four independent U.S. sites [24]. Study requirements were: age 18-60, KPS 20-60, restrictions on females of child-bearing age, and endurance levels verified by treadmill. The median KPS in both the placebo (n = 47) and rintatolimod (n = 45) treatment arms was 50 ('requiring considerable assistance for daily care'), and the average duration of CFS/ME symptoms was 6.1 (rintatolimod) and 4.4 (placebo) years. General patient demographics for AMP-502 are summarized in supplemental Table S1. Patients assigned to each treatment group were well matched demographically and with respect to the severity of their illness, with the exception of gender. There were no significant differences between treatment groups in the incidence of CDC CFS-defining symptoms, in the degree of patient debilitation as measured globally by means of KPS, or in their ability to perform routine activities of daily living as measured by the Activities of Daily Living (ADL) scale.

The primary endpoint was KPS. Secondary endpoints were exercise tolerance (ET), ADL, SCL-90-R neurocognitive functional status, signs and symptoms, medications for CFS/ME, and hospitalizations or emergency room services. A total of 84 of the 92 enrolled patients completed 24 weeks of treatment. Of the 8 dropouts, 3 of the 4 placebo patients discontinued the study because of CFS/ME symptom intensification; the remaining 5 dropped out for non-medical reasons. At 24 weeks, patients receiving rintatolimod had statistically significantly greater improvements from baseline compared to the placebo cohort for global performance and perceived cognition (Table 4). The mean of the primary endpoint, KPS, was significantly improved (p < 0.001), as was the median (p = 0.023). Statistically significant increases in the KPS score were observed in the rintatolimod-treated cohort compared to the placebo group at weeks 16, 20, and 24. At Week 24, the distribution of changes in KPS between the placebo and rintatolimod cohorts showed 50% more responders in the rintatolimod cohort. Disease progression as measured by KPS was apparent in 6 of 47 (12.7%) in the placebo arm, while 1 of 45 (2.2%) progressed in the treatment arm. Similar improvements were observed in the secondary endpoints associated with quality of life (cognition, ADL), exercise tolerance, and exercise work (O2 utilization). The objective quantitative improvement in the symptoms of CFS/ME observed with rintatolimod was also seen in the use of medications to alleviate CFS/ME symptoms.
At the beginning of the study, patients were instructed to minimize all medications but were then allowed the use of prescription and over-the-counter drugs as needed. During the last 4 weeks of the study, the rintatolimod cohort used significantly fewer drugs to alleviate CFS/ME and CNS symptoms as well as pain, compared to the initial 4 weeks of the study (Table 5). Forty-two percent (n = 39) of the patients had laboratory evidence of Herpes viral activation, as demonstrated by giant cells expressing viral antigens in peripheral blood mononuclear cell (PBMC) culture. Sixty-nine percent were antigen-positive for HHV-6, 8% for cytomegalovirus, 5% for herpes simplex, and 0% for Epstein-Barr virus. Expression of HHV-6 was associated with a poorer mean KPS score (50 v. 58, p < 0.02), although there were no differences in the improvements observed in the rintatolimod cohort. Hospital and emergency room admissions were dramatically reduced in the rintatolimod cohort [42]. Fourteen of the ninety-two enrolled patients were hospitalized or required emergency room services during the study. Placebo patients were hospitalized or admitted to an emergency room for a total of 114 days, compared to a total of 19 days for 7 rintatolimod patients, a significant difference (p < 0.005) (Table 6).

AMP-516 was a Phase III randomized, double-blind, placebo-controlled, multi-site (n = 12) clinical study [43,44]. There were a total of 234 well-matched (age, gender, CFS/ME duration, and body weight) patients equally divided between the two cohorts (n = 117) in a 40-week study, in which 79% (n = 93) of the rintatolimod cohort and 86% (n = 101) of the placebo cohort completed the arduous clinical trial (40 weeks with IV infusions of ~35-40 minutes, twice per week). ET utilizing a modified Bruce treadmill protocol was the primary endpoint. At Week 40, there was a net improvement of 21.3% (p = 0.047) from baseline in the ITT rintatolimod cohort compared to the ITT placebo cohort (Table 7A). A mean ITT intra-patient percent improvement of 36.5% (p < 0.001) versus 15.2% for placebo (p = 0.198) further supports a beneficial effect of rintatolimod on ET. A net ET improvement of 24.6% from baseline in the rintatolimod cohort was observed for the smaller (n = 93) 40-week rintatolimod trial-completion group (p = 0.019) (Table 7B), and in those ITT patients without significant rintatolimod dose reductions the net improvement was 28.0% at the end of the 40-week trial (p = 0.022) (Table 7C). In agreement with the ITT analysis, both the trial-completion patients and those patients without significant lapses in drug administration demonstrated intra-patient improvement (p < 0.001) versus placebo (p > 0.24). Table 7D illustrates the effect of baseline ET stratification (≤9 minutes vs. >9 minutes) on ET performance at 40 weeks in the ITT population. Those patients able to achieve a >9-minute duration on the CFS/ME-modified Bruce protocol at baseline and randomized to rintatolimod (n = 60) demonstrated a statistically significant advantage over placebo (n = 66) (p = 0.034) at 40 weeks, although the lack of statistical efficacy in the ≤9-minute cohort may reflect a lack of statistical power (n = 40 versus n = 60 in the placebo and rintatolimod sub-cohorts, respectively). The individual patient ET responses to rintatolimod compared to placebo for the ITT population are captured in Figure 5, in which individual patient change in ET from baseline at 40 weeks is plotted from lowest to highest ET performance [44].
At a minimum, three different ET response cohorts can be distinguished: a high-response cohort, a minimal-response cohort, and a negative-response cohort. In the high-response cohort (the upper 40%, on the right side of the plot), there is a clear improvement in ET in the rintatolimod cohort versus placebo. The middle cohort represents minimal change between rintatolimod and placebo. The negative-response cohort (on the left side of the plot) shows deterioration in ET performance in both rintatolimod and placebo patients. Nevertheless, rintatolimod appears to reduce deterioration in ET versus the placebo controls for the poorest modified-treadmill responders at baseline.

Additional post hoc evidence supporting the efficacy of rintatolimod in CFS/ME was provided by an analysis of the frequency distribution of percent improvement in ET from baseline to Week 40 in the rintatolimod versus placebo cohorts (Table 8A). The proportions of patients in the ITT population with changes in ET from baseline to Week 40 of at least 25% and of at least 50% were 1.7- and 1.9-fold greater for patients randomized to rintatolimod than placebo: 39% versus 23% (p = 0.013) and 26% versus 14% (p < 0.028), respectively. CFS/ME patients further segregated into two ET cohorts: dichotomization at ≥25% and ≥50% improvement from baseline within the sub-cohort with ET >9 minutes at baseline further identifies responders to rintatolimod versus placebo (Table 8B) (p ≤ 0.004).

Impaired oxygen consumption, characteristic of chronically debilitated patients with severe cardiac dysfunction, is also well documented in CFS/ME [45]. Decreased oxygen uptake is also an index of physical dysfunction in terms of reduced ambulatory skills, rapid onset of exertional dyspnea, and profoundly debilitating fatigue. The maximal oxygen utilization (VO2 max) in CFS/ME patients during exercise treadmill testing can also be used as a criterion for determining the seriousness of the disease and its evolution. The functional impairment of the AMP-516 CFS/ME subset population (~1/2 of the drug and placebo cohorts) was determined by measuring maximal oxygen consumption during ET testing (Table 9) [39]. Despite the reduced statistical power of a subset analysis, rintatolimod improved VO2 max by 5.5% (p = 0.05). The original placebo cohort in the blinded cross-over Stage 2 of AMP-516 achieved a mean intra-patient percent improvement in ET of 39% (p = 0.04) at 24 weeks, while the original rintatolimod cohort maintained their improvement in ET (Table 7E). Similar improvements were observed in the AMP-516 study secondary endpoints. A decrease in drug use was observed in both the ITT and study-completion patients taking CFS/ME palliative drugs (p = 0.015 and p = 0.01, respectively). KPS, ADL, and SF-36 vitality scores were significantly improved from baseline (p < 0.01) in the rintatolimod cohort.

Correlation of clinical trial efficacy data

A total of 331 patients were evaluated and randomly assigned to either the placebo (n = 164) or rintatolimod (n = 162) cohorts in the two primary efficacy trials. Both the Phase II and Phase III double-blind, placebo-controlled trials achieved statistical significance on their primary endpoints, both of which were based on physical performance: KPS and ET, respectively. Both studies showed significant improvement in the primary symptom of CFS/ME, fatigue. Alleviation of profound fatigue was achieved, as evidenced by significant improvement in ET or KPS in each controlled trial.
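The responder dichotomization in Table 8A can be illustrated with a simple two-proportion comparison. The counts below are back-calculated from the reported 39% versus 23% responder proportions (n = 117 per arm), and the pooled z-test is an assumption of this sketch; the trial's actual test is not stated here, so the computed p-value need not match the reported p = 0.013.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Counts back-calculated from the reported 39% vs 23% responder
# proportions with n = 117 per arm (46/117 = 0.39, 27/117 = 0.23).
z, p = two_proportion_z(46, 117, 27, 117)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```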
Supportive evidence for improvement in the quality of life of the rintatolimod cohorts included reduced use of medications taken by patients to alleviate the debilitating symptoms of CFS/ME, reduced hospital/emergency room admissions, and improvements in SF-36 vitality and ADL scores. Perceived cognition was a secondary endpoint in AMP-502, which demonstrated statistically significant improvement versus placebo. The improvements observed in the placebo-controlled, double-blind clinical trials were also observed in the open-label trials, detailed in the supplemental data, and provide further confidence that rintatolimod is clinically active in a substantial number of patients with CFS/ME.

Human trial safety

Chronic administration of rintatolimod in clinical trials has been generally well tolerated. To date, over 90,000 doses have been infused intravenously. The major toxicities, seen in a minority of patients during the first weeks of infusion, are mild flu-like symptoms, probably secondary to the induction of interferon. Table 10 provides all adverse events observed in the rintatolimod Phase II/III controlled clinical trials for which there is a >5% difference between the drug and placebo cohorts. There were a total of 44 serious adverse events (SAEs) observed, which were equally divided between the rintatolimod and placebo cohorts. In the opinion of the site principal investigators, none of the SAEs were definitely attributable to the study drugs (rintatolimod or placebo). One probable SAE occurred in the placebo cohort (Table 11). Summation of all SAEs (open-label plus controlled clinical trials) demonstrates that 7.7% of patients receiving rintatolimod experienced an SAE, compared to 8.6% of the placebo cohort [42].

Comparative animal toxicities

Rintatolimod demonstrates significantly reduced toxicity in primates versus other animal species [20]. Table 12 provides the relative species-dependent acute toxicity of rintatolimod. There is a two-order-of-magnitude difference in relative toxicity between rabbits and the cynomolgus monkey, with dog and rat showing intermediate toxicities between the extremes. This relative toxicity differential extends to sub-acute and chronic toxicity studies in rats versus primates. In the rat, hematological, hepatic, bone marrow, and thyroid pathologies have been observed [20]. In the cynomolgus monkey, the only significant observed anomalies are in the thyroid and bone marrow. At higher doses (greater than equivalent clinical dosing), thyroid toxicity is associated with follicular hyperplasia and increases in plasma TSH and T4; significantly, no elevations of thyroid hormones have been observed in humans. Increased myelopoiesis has been observed in monkeys. No bone marrow examinations have been conducted in humans.

There is no evidence that rintatolimod can be carcinogenic. RNAs are recognized as electrophilic targets for chemical carcinogens, but not as carcinogens themselves. Indeed, it is probably not possible for non-coding dsRNAs such as rintatolimod to be carcinogenic, and this is supported by a comprehensive literature review. An extensive review of the current literature, including standard toxicology/reference texts, found no evidence that dsRNA can act as a carcinogen [42]. In addition, the National Toxicology Program lists no RNA as a suspected or proven carcinogen. Similarly, there is no evidence that rintatolimod can serve as a source for the generation of siRNAs or microRNAs [42].
Mutagenesis testing of rintatolimod in four independent studies was negative (Table 13) [42]. Classical two-species, 2-year studies for carcinogenesis have not been done; however, a 6-month study in the rat and cynomolgus monkey provided no evidence of cancer or pre-cancerous dysplasias in the multiple organs examined [42]. In the rat, embryological studies indicated a potential for fetal death and miscarriage [42]. There has been no indication of teratological toxicity in the rat [42]. In rabbits, a decrease was seen in fetal number and weight, and toxicity in female rabbits may be associated with malformations or defects in fetal rabbits [42]. The differential toxicities between rats and humans are correlated with relative systemic cytokine responses as well as differential blood half-lives [20]. Statistically significant dose-dependent cytokine differences between rats and non-human primates have been observed during the first week of rintatolimod administration as well as at Week 8 [20]. Human plasma has greater nuclease activity than that of other species, which complements the estimates of relative species toxicities found in the comparative acute toxicity analysis (Table 12; the MTD is defined as the highest dose with no observed mortality or moribund toxicity).

Regulatory affairs

The first CFS/ME patient was treated with rintatolimod at the request of the FDA. The dramatic response of this index patient resulted in an open-label clinical trial (AMP-501) [41] that yielded results that mimicked those of the index patient. The success of that trial resulted in a major effort by Hemispherx Biopharma to bring the drug to market. During a multi-decade dialog with the FDA, rintatolimod has been sequestered with five separate review groups. Currently there is an open NDA with continuing dialog with the FDA. As detailed earlier, there have been two double-blind, placebo-controlled trials, both of which achieved statistically significant improvements in their primary endpoints with minimal toxicities. The drug has not received marketing approval despite the lack of proven efficacious agents in the treatment of this disease, which can be severely debilitating and is estimated to affect over one million persons in the U.S. Rintatolimod has FDA orphan drug status, with individual patient access to the drug under a treatment protocol for CFS/ME (AMP-511) in existence for nearly two decades. Recent post hoc studies have demonstrated that about 40% of CFS/ME patients can be expected to respond to rintatolimod, while about 10% of the poorest responders appear to be retarded in the deterioration of symptoms with time [44]. A CFS/ME-modified treadmill test provides an objective basis for selecting those patients most likely to have a vigorous response to rintatolimod.

Discussion

Rintatolimod has been generally well tolerated and significantly improved physical-performance primary endpoints in two double-blind, placebo-controlled clinical trials, supported by five open-label trials, one of which was a cross-over of a Phase III trial that remained double-blinded as to the original assignment group (active drug versus placebo). Rintatolimod is not active in all patients. Indeed, the multiplicity of patient primary and secondary responses to rintatolimod was not unexpected. CFS/ME is a syndrome with the common unifying symptom of profound fatigue. Approximately 30-40% of severe CFS/ME patients can be expected to achieve some clinical benefit, using the original CDC definition with severity defined as KPS ≤ 60.
There is substantial evidence to support the hypothesis that the persistence of debilitating symptoms is associated with a variety of inappropriate immune responses initiated by an intracellular pathogen and driven by dysfunctional gene responses that fail to clear or suppress the initiating agent. The differential responses observed with rintatolimod administration support this hypothesis. It is thus of interest to consider the potential of possible pathogens and the modifying influences of dysfunctional gene expression and immune responses observed in CFS/ME.

Acute infection and fatigue

The fatigue associated with acute infection is usually self-limited, although some cases may have a prolonged course. Examples of the latter include Epstein-Barr virus (HHV-4), the causative agent of acute infectious mononucleosis, and Borrelia burgdorferi (post-treatment chronic Lyme disease). Recent proteomic analysis of cerebrospinal fluid (CSF) in persistent infection with these two agents, however, distinguishes patients with chronic Lyme disease from those with CFS/ME [46]. Studies linking contemporaneous laboratory diagnosis of acute infectious agents with the subsequent development of CFS/ME are rare, although non-contemporaneous reports of a variety of infectious agents associated with CFS are common. An example of a prospective infectious diagnosis with the subsequent development of CFS/ME was established in a small community in Australia [47] using IgM-to-IgG seroconversion or rising IgG titers to Epstein-Barr virus (a DNA virus), Ross River virus (an alpha RNA virus), and Coxiella burnetii (an obligate intracellular bacterium and the causative agent of Q fever). Of the 253 patients with antibody evidence of acute infection with these 3 intracellular pathogens, 12% subsequently developed CFS/ME, demonstrating that no single infectious agent is responsible for the syndrome and that induction of CFS/ME is dependent on factors other than primary infection. Similarly, in a small study, CFS/ME patients with diagnostically high IgG titers to Chlamydophila pneumoniae, an obligate intracellular bacterium associated with atypical pneumonia, responded to anti-Chlamydial antibiotics with symptom resolution and declining antibody titers, suggesting a cause-and-effect relationship [48]. There are a small number of case reports of the association of parvovirus B19 (a small ssDNA virus causing erythema infectiosum in children) with CFS/ME, with resolution in three cases treated with pooled human immunoglobulin [49]. In a study involving 200 patients with CFS/ME, no differences between patients and controls were found in IgM or IgG titers against structural proteins of parvovirus B19, although 42% of patients versus 7% of controls had antibodies against the viral regulatory NS1 protein [50], suggesting a non-productive persistence of parvovirus B19. The affinity of the enteroviruses, including the polioviruses, for the CNS and GI tract has generated a number of studies linking CFS/ME with the enteroviruses. A large study of 165 CFS/ME patients exhibited the enteroviral VP1 structural antigen in 82% of gastric biopsies versus 20% of controls [51]. Mycoplasma have been demonstrated by PCR in PBMCs in several studies [52,53] of patients with CFS/ME. Although effective antibiotic treatment is available, no antibiotic studies have been reported in CFS/ME associated with mycoplasma. Two studies linking retroviruses with CFS/ME have proven to be non-reproducible in the scientific community.
The first described a retrovirus with sequence homologies to HTLV-II [54]. More recently, XMRV [55] and the closely related polytropic virus [56] have proven to be laboratory artifacts [57], although their reports generated premature attempts at therapy with anti-retroviral agents used for control of HIV. Similar to Epstein-Barr virus, most adults have antibodies to HHV-6 with no evidence of persistence-related disease. Unlike Epstein-Barr virus, however, numerous studies using culture and PCR have been reported linking HHV-6 with CFS/ME. In a study reported before a formal definition of CFS/ME had been developed, 70% of 259 patients with a CFS-like illness exhibited active HHV-6 replication in lymphocytes in primary culture [58]. The first patient treated with rintatolimod at the request of the FDA more than 2 decades ago had primary culture evidence of an active HHV-6 infection [59]. On the basis of that patient's rapid response to rintatolimod, the first open-label trial was initiated in CFS/ME patients with evidence of a HHV-6 infection, which similarly showed significant activity [41]. The available HHV-6 data are consistent with the CFS/ME cohort responding to rintatolimod treatment. Conversely, those patients with evidence of Chlamydial infection responding to antibiotics may populate a TLR3 agonist (rintatolimod) non-responder cohort.

Multiple co-infections

Nicolson et al. [60] reported on the incidence of co-infections in 200 CFS patients and 100 controls by PCR analysis of whole blood. A total of 52% of CFS patients versus 6% of controls had evidence for the presence of at least one Mycoplasma species. A total of 7% were PCR positive for C. pneumoniae versus 1% of controls, and 30.5% were positive for HHV-6 versus 9% of controls. Co-infections in Mycoplasma-positive blood had similar incidences, although there was no evidence of co-infections with these pathogens in the control cohort. One might logically expect a mixed, non-cleared infection to respond variably to rintatolimod, similar to the current mid-third of minimal responders. Despite two decades of attempts to identify specific infectious agents as the initiator of the signs and symptoms of CFS/ME, it is apparent that a multiplicity of obligate intracellular pathogens are capable of disease initiation. Although the pathogenesis of persistence in a minority of affected individuals has remained unclear, dysfunctional genetic responses involving the immune system and energy metabolism have been linked to the CFS/ME phenotype.

Immune markers

Consistent with the evidence of infection with multiple obligate intracellular pathogens are the increased numbers of activated cytotoxic CD8+ cells observed in CFS/ME [61]. Although NK cell numbers are generally in a normal range, NK cell function is decreased [62]. The IFN-inducible 2ʹ-5ʹ adenylate synthetase/RNase L pathway is dysfunctional in CFS/ME. Constitutive RNase L is activated by 2ʹ-5ʹ oligomers of adenine, resulting in cellular mRNA degradation. 2ʹ-5ʹ adenylate synthetase (2ʹ-5ʹA), transiently induced by Type 1 IFNs, requires dsRNA for activation. In CFS/ME there is as much as a log increase in bioactive 2ʹ-5ʹA with a concomitant reduction in latent 2ʹ-5ʹA and an increase in the bioactivity of RNase L in PBMCs [63,64]. A clinical evaluation demonstrated that 46 of 73 patients with CFS/ME had an elevation of RNase L activity that was associated with a significant (p < 0.001) decrease in exercise tolerance (ET) [65].
The dramatic increase in bioactive 2ʹ-5ʹA has been substantially linked to a 37 kDa proteolytic fragment of the expressed 83 kDa enzyme [66,67]. The up-regulation of the 2ʹ-5ʹA synthetase/RNase L pathway is consistent with an antiviral response pathway that has been rendered dysfunctional by aberrant PBMC intracellular proteolysis. The latter is the result of 2ʹ-5ʹA synthetase dysfunction in some patients with CFS/ME, with the production of dimers rather than higher oligomers of 2ʹ-5ʹA that inhibit proteolysis of the native RNase L [68]. Although the existing methodology is research lab-based and not suitable for high-volume reference or hospital-based laboratories, the abundance of the 37 kDa RNase L fragment relative to the intact 83 kDa enzyme has been suggested as a clinical lab assay for CFS/ME [69,70]. The observed normalization of this dysfunctional immune response to viral activation by IFN further suggests correction by rintatolimod of a viral initiation phenomenon in CFS/ME.

Genetic markers

Differential gene expression in peripheral blood of CFS/ME patients has been reported by a number of investigators using DNA microchip analysis [71–81]. Recently, rigorous patient selection and a microchip that surveys the entire human genome, coupled with qPCR gene validation, have provided a more complete appreciation of the gene expression profiles that occur in CFS/ME [9,82]. A complex array of differential gene expression can be categorized into functional subsets relating to responses to infection, immunity, inflammation, apoptosis, neurological function, and cancer [79]. Many of these differential gene responses are consistent with the large number of infectious agents linked with CFS/ME as well as the altered immune responses and variety of signs and symptoms observed with the disease. For example, eIF4G1 is a eukaryotic translation initiation factor utilized in replication by a variety of viruses, including the enteroviruses implicated in the pathogenesis of CFS/ME [83]. EIF4G1 variant 5 (GenBank: NM_004953) is up-regulated in CFS/ME, suggesting both a physiological response to viral replication and a gene variant favoring pathogen persistence. Similarly, genes associated with Epstein-Barr virus infection have recently been demonstrated to be up-regulated in most Kerr CFS/ME subtypes [84]. Patients with fatigue associated with Q fever have gene expression profiles similar to those of CFS/ME [84]. Differential gene responses have also been analyzed as a function of exercise tolerance in CFS/ME. Whistler et al. [73,74] associated exercise-dependent gene expression with the specific Gene Ontology categories of chromatin and nucleosome assembly, cytoplasmic vesicles, membrane transport, and G-protein-coupled receptors. Distinct differences as a function of exercise have been demonstrated between CFS/ME, fibromyalgia, and multiple sclerosis [85,86].

Mitochondrial function markers

Abnormal mitochondrial function [87] and abnormal structure by light and electron microscopy [88] have been observed in CFS/ME, consistent with reported muscle oxidative damage [89] and serum deficiencies of acetylcarnitine [90] and carnitine [91]. A novel deletion in mitochondrial genes associated with energy production has been reported in a CFS/ME patient, distinct from the common 4977 bp deletion observed in overt mitochondrial diseases [92]. The common 4977 bp deletion in mitochondrial DNA has been reported in CFS/ME at 150–3000 times the level of normal controls [93].
Rapid muscle intracellular acidosis in a CFS/ME patient, detected by 31P nuclear magnetic resonance (31P-NMR) spectroscopy, was suggestive of impaired oxidative metabolism [94]. Subsequent 31P-NMR studies demonstrated reduced biosynthesis of phosphocreatine [95] and lower levels of intracellular ATP [96] post-exercise. An ATP profile test consisting of ATP cytosolic transfer, oxidative phosphorylation efficiency, and the efficiency of ADP transfer into the mitochondria from the cytoplasm in neutrophils has been reported [87]. Delivery of mitochondrial ATP to the cytoplasm as a function of CFS/ME severity, as compared to normal controls, demonstrated a clear discrimination of CFS/ME severity (p < 0.001) with no overlap with controls. The ATP assay suggests that the mitochondrial dysfunction in CFS/ME is multi-factorial and provides a rational basis for the exercise intolerance observed in CFS/ME and the differential response in exercise tolerance to rintatolimod [44].

Rintatolimod and CFS/ME markers of disease

Rintatolimod is clearly active in improving ET and quality of life in a subset of patients with CFS/ME. Moreover, the clinical evidence suggests that rintatolimod inhibits disease deterioration in a minority of patients who fail to show improvement from activation of TLR3. The basis for this differential response to rintatolimod is unknown but may be secondary to a variety of microbes coupled with a combination of genetic polymorphisms resulting in an immune response unable to clear either a productive microbial infection or an activated non-replicative intracellular microbe. The diagnosis of CFS/ME remains a diagnosis of exclusion. To date, there have been no laboratory-based markers for CFS/ME diagnosis, although there have been potential candidates that unfortunately have not been adapted for clinical reference laboratories. Prime examples include the dysfunctional 2ʹ-5ʹ adenylate synthetase (2ʹ-5ʹA) pathway and the functionally inactive NK cells reported in CFS/ME by multiple investigators. RNase L has been observed by De Meirleir to be corrected by rintatolimod in patients (AMP-509 trial) [42]. Functional impairment of NK cells has been studied by multiple investigators, and NK function has been found to be up-regulated by rintatolimod in vitro [62], although the necessary validation in patient trials has not been accomplished to date. Moreover, the functional research assay used (51Cr release from NK target cells) needs to be coordinated with the large number of NK cell markers that are available for flow cytometry on fixed cells. Despite the obvious need for inexpensive laboratory markers to aid clinicians in the diagnosis of CFS/ME, as well as indicators of rintatolimod efficacy, no national priority has been established for their development. Response to rintatolimod is apparently related to a multi-factorial pathogenic basis, with a variety of reported intracellular pathogens and hormonal and immunological abnormalities linked with a variety of genetic signatures. Those patients able to reach 9 minutes on a CFS/ME-modified Bruce protocol are evidence-based responders to the drug. About 10% of CFS/ME patients with the poorest exercise tolerance may experience less disease progression. No drug is approved for the CFS/ME indication, and I know of no other pharmaceutical company with drugs in advanced development despite FDA attempts to promote development by the pharmaceutical industry.

Expert commentary

TLR3 is unique in its induction of innate immune responses.
The exclusive use of a non-MyD88 pathway limits the expression of inflammatory cytokines, especially in humans as compared to non-primate animal models. Rintatolimod further extends this unique property by its lack of stimulation of the cytosolic helicases, which use a non-TRIF signaling pathway and respond to non-mismatched dsRNA polymers. Although significant improvements in physical performance primary endpoints have been seen in the Phase II/III trials and are supported by open-label trials, final regulatory approval may require objective laboratory diagnostics that identify patients most likely to respond to rintatolimod. Fatigue is the universal symptom of this disabling condition and is a common symptom in a multiplicity of human diseases (examples include multiple sclerosis, cancer, severe anemia, and hypothyroidism). Table 14 lists the most likely diagnostic markers to identify responders to rintatolimod and their current developmental status. Rintatolimod has an extensive safety record in humans; primates demonstrate significantly fewer toxic manifestations of a restricted TLR3 activation than non-primates. CFS/ME is a complex syndrome with a variety of diverse factors, including gene expression related to disease induction. The identification of markers that can be adapted to clinical reference laboratories to identify rintatolimod responders would provide objective criteria for non-CFS/ME physicians and should provide a regulatory pathway to approval in CFS/ME.

Five-year view

Rintatolimod has been demonstrated to reach statistical significance in two randomized, double-blind, placebo-controlled clinical trials in patients with well-defined disease. No other pharmaceutical is in apparent development for this woefully neglected disease. Orphan drug status and the recent award of a new form-and-substance patent [97] for rintatolimod provide extended commercial viability. Rintatolimod, a restricted TLR3 agonist, is clearly active in a subset of CFS/ME patients and appears to reduce further disease deterioration in those patients who fail to improve physically. The genetic and immune markers identified (Table 14) may provide insights into other drugs for combinatorial CFS/ME therapy where rintatolimod alone is non-efficacious. The use of Next-Generation Sequencing of plasma and PBMCs is needed to identify unknown systemic infectious agents, such as those from the gut microbiome, that may be part of the etiological process in some patients with CFS/ME.
Key issues

• Rintatolimod is a high-MW synthetic mismatched double-stranded (ds) RNA (Poly I:Poly C12U) polymer
• Rintatolimod activates TLR3, which induces innate immune responses and initiates adaptive immunity
• All Toll-like receptors, with the exception of TLR3, activate the pro-inflammatory cytosolic MyD88 pathway
• TLR3 uniquely activates the cytosolic TRIF pathway, with limited pro-inflammatory responses, especially in primates
• Rintatolimod is a restricted TLR3 agonist with no activation of the cytosolic helicases that use a non-TRIF pathway
• The restriction of rintatolimod provides an improved safety profile compared to other dsRNA agonists
• Rintatolimod has achieved statistical significance in randomized Phase II and Phase III double-blind, placebo-controlled, multi-site clinical trials in which the drug was administered IV twice weekly for up to forty weeks in patients with severe CFS/ME
• Rintatolimod was generally well tolerated, with adverse events randomly distributed between drug and placebo cohorts, with the exception of initial mild flu-like symptoms in some patients
• Approximately 30–40% of CFS/ME patients can be expected to respond beneficially to rintatolimod

Acknowledgements

Senior management at Hemispherx Biopharma provided access to all FDA files, which allowed disclosure of unpublished data. Critique of the manuscript by Charles Stratton and David Strayer is gratefully acknowledged. C Stratton is the Director of the Clinical Microbiology Laboratories at Vanderbilt University Medical Center and an expert on Chlamydia pneumoniae in CFS/ME. D Strayer is the Medical Director at Hemispherx Biopharma and has been involved in all clinical development of rintatolimod. Kenneth Strauss reviewed the pharmacokinetic data and approved the reformatting of several figures from his doctoral thesis into Figure 3. A posthumous thanks of appreciation is given to David Gillespie of Hahnemann Medical College for his seminal early work in the clinical development of rintatolimod. The rintatolimod clinical trials have been arduous and particularly burdensome to those patients randomly assigned to a placebo arm. Their dedication in helping to find a solution for the many severely affected by CFS/ME is appreciated.

Declaration of interests

WM Mitchell is an independent member of the Board of Directors of the public company Hemispherx Biopharma, with stock and option ownership. The author has no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
Optical Microlensing by Primordial Black Holes with IACTs

Primordial black holes (PBHs), hypothesized to be the result of density fluctuations during the early universe, are candidates for dark matter. When microlensing background stars, they cause a transient apparent enhancement of the flux. Measuring these signals with optical telescopes is a powerful method to constrain the PBH abundance in the range of $10^{-10}\,M_{\odot}$ to $10^{1}\,M_{\odot}$. Especially for galactic stars, the finiteness of the sources needs to be taken into account. For low PBH masses (in this work $\lesssim 10^{-8}\,M_{\odot}$) the average duration of the detectable event decreases with the mass, $\langle t_e\rangle \propto M_{\mathrm{PBH}}$. For $M_{\mathrm{PBH}}\approx 10^{-11}\,M_{\odot}$ we find $\langle t_e\rangle \lesssim 1\,\mathrm{s}$. For this reason, fast sampling detectors may be required, as they could enable the detection of low-mass PBHs. Current limits are set with sampling speeds of 2 minutes to 24 hours in the optical regime. Ground-based Imaging Atmospheric Cherenkov telescopes (IACTs) are optimized to detect the $\sim$ns long optical Cherenkov signals induced by atmospheric air showers. As shown recently, the very large mirror area of these instruments provides very high signal to noise ratio for fast optical transients ($\ll 1\,$s) such as asteroid occultations. We investigate whether optical observations by IACTs can contribute to extending microlensing limits to the unconstrained mass range $M_{\mathrm{PBH}}<10^{-10}\,M_\odot$. We discuss the limiting factors to perform these searches for each telescope type. We calculate the rate of expected detectable microlensing events in the relevant mass range for the current and next-generation IACTs considering realistic source parameters.
37th International Cosmic Ray Conference (ICRC 2021), July 12th–23rd, 2021, Online – Berlin, Germany

Introduction

Dark matter (DM) is among the most fundamental ingredients for structure formation in the Universe [1]. Although it is widely observed through its gravitational interaction, a direct detection remains elusive. The nature of DM remains among the most important problems in physics. Primordial black holes (PBHs) were first proposed in [2] as a possible DM candidate. The various possible PBH formation mechanisms result in either a broad or a narrow spectrum of PBH masses spanning almost twenty orders of magnitude. In attempts to constrain the PBH abundance over the full mass range, different experimental concepts are deployed. An overview of current limits on the PBH abundance is shown in Figure 1. Gravitational microlensing is a powerful method, currently constraining over eleven orders of magnitude in PBH mass. These limits are shown by the blue regions in Figure 1. During a microlensing event, a time-varying magnification of a background star can be observed when a compact object crosses the line of sight. By monitoring stars in the Large Magellanic Cloud with roughly 24-hour cadence, the MACHO and EROS experiments have constrained the PBH abundance in a mass range of $[10^{-7}, 10]\,M_\odot$ [3,4]. Observation of stars in the Galactic bulge by OGLE with 20 to 60 minute sampling revealed a population of six short events, which could be well explained by either free-floating planets or PBHs [5]. A microlensing study on Galactic stars was performed with a sampling speed of 30 minutes using 2 years of Kepler data, constraining PBH masses down to $10^{-8}\,M_\odot$ [6]. Recently, Andromeda observations with the Subaru Hyper Suprime-Cam (HSC) were carried out to search for microlensing with a dense cadence of 2 minutes [7]. The sensitivity to short events made it possible to extend microlensing limits on the PBH abundance to $10^{-11}\,M_\odot$, which is shown by the dashed blue line in Figure 1. However, Ref. [8] pointed out that with an updated source size distribution the PBH constraints are limited to larger masses. At low and high masses, the limits are enclosed by constraints from black hole evaporation and from accretion effects on the CMB [9,10]. IACT arrays are designed to detect very-high-energy (VHE, > 100 GeV) gamma-ray photons. As a gamma ray interacts with the atmosphere, it triggers an atmospheric air shower. IACTs are optimized to detect the nanosecond-timescale Cherenkov flashes produced in the ultraviolet and blue bands within these showers. For this, they are equipped with photodetectors sampling at up to GHz frequencies. Beyond their abilities in gamma-ray astronomy, recent works have demonstrated that IACTs may also operate as competitive telescopes to detect very fast transient optical signals. In [11], the high signal to noise achieved over millisecond timescales allowed the detection of the diffraction pattern produced during asteroid occultations. Via a diffraction-fitting technique, direct measurements of two stellar diameters were performed with unprecedented resolution [12]. In the following, we investigate the possibility of using IACTs to search for PBH-induced microlensing signals in the optical range. As an example of the current generation of IACTs we refer to VERITAS, located at the Fred Lawrence Whipple Observatory (FLWO) in southern Arizona (31°40′N, 110°57′W, 1.3 km a.s.l.).

Gravitational microlensing

Microlensing events are observed by monitoring the apparent brightness of stars.
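As a concrete reference for the light curves being discussed, the classic point-source, point-lens magnification can be sketched in a few lines. This is the standard Paczyński form, not the finite-source amplification of equations (9)–(11) of [13] used below; the impact parameter and crossing time in the sketch are illustrative placeholders.

```python
import numpy as np

def magnification_point_lens(u):
    """Point-source, point-lens magnification for impact parameter u,
    expressed in units of the Einstein radius."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

# Illustrative light curve: u0 and tE are hypothetical values.
u0, tE = 0.3, 1.0                      # minimum impact parameter; crossing time [s]
t = np.linspace(-3 * tE, 3 * tE, 7)    # time relative to the peak [s]
u = np.sqrt(u0**2 + (t / tE)**2)
print(magnification_point_lens(u))     # peaks at ~3.4 for u0 = 0.3
```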
A PBH located close to the line of sight (LoS) between a background star and Earth acts as a gravitational lens. In the case of microlensing, the multiple images that are created in this process are not individually resolvable, and an apparent brightening of the star is observed. This leads to a time-dependent apparent amplification of the star magnitude. Especially for nearby stars, the extension of the source has to be considered. The amplification in the finite-source case $A_{\rm FS}(\rho_*)$ is described by equations (9)–(11) of [13]. The extension is encoded in the projected star radius $\rho_*$, which depends on the radius of the star $R_*$, the ratio $x$ between the distance of the lens and the distance of the star, and the Einstein radius $R_E$. The Einstein radius is the characteristic scale for gravitational lensing and depends on the distances in the system as well as the PBH mass. The projected radius increases as $\rho_* \propto M_{\rm PBH}^{-0.5}$, making it more crucial for low $M_{\rm PBH}$. An event is detectable during the time in which the amplification exceeds the detection threshold $A_{\rm thresh}$ of the instrument. Thus, the detectable event duration depends on the instrument and increases as the sensitivity of the instrument to flux changes improves. In this work, we study the potential for IACTs to detect PBHs in the unconstrained mass range $M_{\rm PBH} < 10^{-10}\,M_\odot$. The finite-source amplification is limited to a maximum $A_{\rm max,FS}$. For PBHs that are closer to the source star, $x \to 1$, the maximum amplification decreases. This results in a point $x_{\rm max}$ that is the limit on the PBH distance at which the amplification could still be detected by the instrument. For heavier PBHs, the projected radius is small and $x_{\rm max} \approx 1$. However, for lighter PBHs it decreases proportionally to the PBH mass, $x_{\rm max} \propto M_{\rm PBH}$. The exact point of transition depends on the star parameters, such as radius $R_*$, magnitude $m$, and distance $d_S$, as well as on the sensitivity of the instrument to detect the transient signal, $A_{\rm thresh}$. For the current generation of IACTs and the sampling of 50 Hz selected in this study, this transition is around $M_{\rm PBH} \approx 10^{-8}\,M_\odot$ for the best target. Following [14], the optical depth in the finite-source limit can be calculated using

$\tau_{\rm FS} = \dfrac{\rho_0}{M_{\rm PBH}} \, d_S \displaystyle\int_0^{x_{\rm max}} \pi R_E^2(x) \, dx, \qquad (1)$

where $\rho_0 \approx 7.9 \times 10^{-3}\,M_\odot/{\rm pc}^3$ is the local dark matter density. As will be discussed in section 3, close-by, bright stars are the best targets for IACTs. Thus, we can assume a constant local DM density along the LoS. We use the approximate formula for the total event rate from [14],

$\Gamma_{\rm FS} \approx \dfrac{2}{\pi} \dfrac{v_c \, \tau_{\rm FS}}{R_E(x_{\rm max})}, \qquad (2)$

where $v_c \approx 220$ km/s is the halo circular velocity. For larger $M_{\rm PBH}$, where $x_{\rm max} \approx 1$, the lower total number of PBHs in the DM halo leads to $\Gamma_{\rm FS} \propto M_{\rm PBH}^{-1}$. However, due to the correlation of $x_{\rm max}$ with $M_{\rm PBH}$ below this transition, we find $\Gamma_{\rm FS} \propto M_{\rm PBH}$. The duration for which the event might be detectable is given by the average event duration

$\langle t_e \rangle = \dfrac{\tau_{\rm FS}}{\Gamma_{\rm FS}}. \qquad (3)$

This duration is directly correlated with $x_{\rm max}$ and is thus also a function of $M_{\rm PBH}$. In the range where $x_{\rm max} < 1$, we find $\langle t_e \rangle \propto M_{\rm PBH}$. Thus, sensitivity to fast optical transients could allow the detection of microlensing events with small $M_{\rm PBH}$.

PBH Microlensing Observations with IACTs

The existing PBH studies use traditional optical instruments. These provide a good optical precision combined with the simultaneous monitoring of up to $10^8$ stars [7]. This is possible due to the use of CCD cameras with a high pixel density. On the other hand, these instruments are limited to a minimum sampling speed of 2 minutes.
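A minimal numeric sketch of Equations (1)–(3), assuming a constant local DM density made entirely of PBHs of a single mass. The star parameters are those of the target selected below; the value of $x_{\rm max}$ is a hypothetical placeholder (in the full analysis it follows from the finite-source amplification and $A_{\rm thresh}$), and the equation prefactors are as reconstructed above.

```python
import numpy as np

G     = 4.30091e-3    # gravitational constant [pc (km/s)^2 / M_sun]
C     = 2.99792458e5  # speed of light [km/s]
PC_KM = 3.0857e13     # kilometers per parsec
RHO0  = 7.9e-3        # local DM density [M_sun / pc^3]
V_C   = 220.0         # halo circular velocity [km/s]

def einstein_radius(x, m_pbh, d_s):
    """Einstein radius [pc] for a lens at fractional distance x = d_L/d_S."""
    return np.sqrt(4 * G * m_pbh / C**2 * d_s * x * (1.0 - x))

def rate_and_duration(m_pbh, d_s, x_max):
    """Total event rate [1/s] and mean event duration [s] from Eqs. (1)-(3)."""
    x = np.linspace(1e-6, x_max, 10_000)
    tau = RHO0 / m_pbh * d_s * np.trapz(np.pi * einstein_radius(x, m_pbh, d_s)**2, x)
    rate = 2 / np.pi * V_C * tau / (einstein_radius(x_max, m_pbh, d_s) * PC_KM)
    return rate, tau / rate

# M_PBH = 1e-11 M_sun, d_S = 692 pc (PG 0240+046); x_max = 0.01 is a placeholder.
rate, t_e = rate_and_duration(1e-11, 692.0, x_max=0.01)
print(f"{rate * 3.15e7:.1e} events/yr, <t_e> = {t_e:.2f} s")  # duration below ~1 s
```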
Compared to these, IACTs are optical telescopes constructed to detect the ∼nanosecond flashes of Cherenkov light from atmospheric air showers. For an overview of IACTs as optical instruments, see [15]. Their large optical reflectors make them powerful detectors for high-time-resolution photometry, as they minimize atmospheric scintillation noise. Their overall optical precision, however, is modest compared to traditional optical instruments. VERITAS is sensitive enough to measure the flux of objects of ∼10.2 magnitude at 2400 Hz with uncertainties of 10% [11]. Each of the VERITAS cameras consists of 499 pixels made of photomultiplier tubes (PMTs) that cover a field of view of 3.5 degrees [16]. As each pixel integrates the background light of a large part of the sky, only a bright target in the foreground would be a feasible candidate for PBH searches. In this study, we conservatively assume that only one bright foreground object could possibly be detected at once.

Target Selection

In the following, we investigate the optimal target star to constrain the abundance of PBHs below $10^{-10}\,M_\odot$ with VERITAS. We only consider shot noise, for which the relative uncertainty decreases as $N^{-0.5}$ with the total number of photons $N$. The total photon counts consist of the source contribution $N_{\rm src}$ and the night sky background (NSB) contribution $N_{\rm bck}$. We assume a constant NSB level of magnitude 9 and scale the relative uncertainties on the measured star fluxes to arbitrary magnitudes. The sensitivity to detect flux changes, $A_{\rm thresh}$, is embedded in this uncertainty. For bright stars this value is roughly constant, $A_{\rm thresh} \sim 1$. At higher magnitudes, $A_{\rm thresh}$ increases exponentially with $m$. For the sensitivity of VERITAS with the 50 Hz sampling speed chosen in this study, we find the transition to be around $m \approx 13$. In the mass range we consider in this work ($M_{\rm PBH} < 10^{-10}\,M_\odot$), this influences the maximum distance ratio $x_{\rm max}$. For $m \lesssim 13$ we find $x_{\rm max} \propto 10^{-0.2m}$, and above it $x_{\rm max} \propto 10^{-0.8m}$. To describe the smooth transition we use a linear interpolation in $m$. With this, $x_{\rm max}$ can be described by the stellar parameters and the PBH mass (Equation 4). By inserting Equation 4 and Equation 1 into Equation 2, the dependence of the total event counts on the stellar parameters in this regime can be written down (Equation 5). The ideal target star is selected to optimize the expected event rate. It is a trade-off between the distance of the star (corresponding to a large amount of DM along the LoS) and the radius and magnitude of the star (accounting for a large amplification in a microlensing event). We investigate objects in the JSDC catalog [17], which contains the V magnitude as well as the expected angular diameter of 482,723 stars. Furthermore, we query the SIMBAD database [18] to obtain the distances for these objects, resulting in 433,378 candidates. The distribution of the relevant star parameters is shown in Figure 2. The colors and sizes of the markers represent how well suited the stars are as potential targets for a microlensing PBH search. We find that among the best target stars the majority are B-type stars. Due to their very small radii, hot subdwarf stars (sdO/Bs) in particular, which are situated on the extreme horizontal branch [19,20], are among the best targets. The optimal target is the sdO/B star PG 0240+046 with V magnitude 11.98, distance $d_S = 692$ pc, and radius $R_* = 0.174\,R_\odot$. We note that further effects should be considered in the selection of the optimal target. Among these are the possible saturation of the pixels of the instrument for bright stars and the possible variability of the star.
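The scaling of the detection threshold with star magnitude can be illustrated with a shot-noise-only toy model. The absolute photon rate below is a placeholder for the telescope throughput, and the NSB is treated as an equivalent magnitude-9 source within the pixel aperture, so only the trend with magnitude is meaningful.

```python
import numpy as np

def relative_error(m_star, m_nsb=9.0, f_sample=50.0, rate_m0=1e9):
    """Shot-noise-only relative error on the measured flux per sample.
    rate_m0: assumed photon detection rate [1/s] for an m = 0 star
    (a placeholder for mirror area and quantum efficiency)."""
    n_src = rate_m0 * 10**(-0.4 * m_star) / f_sample   # source photons per sample
    n_bck = rate_m0 * 10**(-0.4 * m_nsb) / f_sample    # NSB photons per sample
    return np.sqrt(n_src + n_bck) / n_src

# Threshold amplification from requiring, e.g., a 3-sigma flux enhancement.
for m in (10, 12, 14):
    print(m, 1 + 3 * relative_error(m))  # A_thresh grows steeply past the NSB level
```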
The consideration of these is beyond the scope of this work. Furthermore, the JSDC catalog contains only a small fraction of O-type stars, which might also be good targets due to their large ratio of brightness to radius.

Event Rates

As discussed above, IACTs benefit from a very fast sampling speed compared to previous investigations. While in those cases the low-mass limits are often limited by the minimum event duration that can be detected, IACTs are less constrained here. Therefore, we do not integrate the differential event rate over the relevant range of event durations (e.g., see [14,21,22]). Instead, the main limiting factors for IACTs are the low number of stars and the limited sensitivity to flux changes. These are already considered in the total event rate $\Gamma_{\rm FS}$ given by Equation 2. We follow the typical assumption that the whole local dark matter consists of PBHs with a delta mass profile. For simplicity, we assume that during the observations one star with parameters like PG 0240+046 can always be monitored, which is an optimistic assumption. Thus, the results should be considered as upper limits. The counts grow linearly with the sampling duration, and the relative error due to shot noise decreases as $t_s^{-0.5}$. In this work, we assume that 50 Hz sampling is used, which results in a factor ∼7 lower relative uncertainty compared to the 2400 Hz sampling of [11]. We require 4 consecutive samples to be enhanced by more than 3 sigma for the detection of a PBH event, which corresponds to less than one false positive per year. The event rate with VERITAS as a function of the PBH mass is presented by the blue solid line in the left panel of Figure 3. The transition in the $x_{\rm max}$ dependency is around $10^{-8}\,M_\odot$. In the right panel, we show the average duration of these events. As expected, at the masses of interest it decreases with $M_{\rm PBH}$, which is a consequence of the change in $x_{\rm max}$. The green area shows the range in which the average event duration would be detectable in 4 consecutive samples at 50 Hz. With the given sampling speed, VERITAS could detect events down to $M_{\rm PBH} \approx 10^{-12}\,M_\odot$. However, in the interesting mass range $M_{\rm PBH} < 10^{-10}\,M_\odot$, such an event can only be expected every $\sim 10^6$ years of observation. We also investigate the possible performance of a next-generation IACT. The black dashed and dotted lines in Figure 3 show the results assuming an improvement in the uncertainty of the flux measurement (and thus in $A_{\rm thresh} - 1$) by a factor of 10 and 100, respectively (accounting for, e.g., larger mirror reflecting areas or photodetectors with better quantum efficiencies). The event rates increase by a factor $(A_{\rm thresh} - 1)^{-2}$ and the event duration is larger by $(A_{\rm thresh} - 1)^{-1}$. Nevertheless, even with the 100 times improved sensitivity, an event in the relevant mass range can only be expected after ∼150 years of observation time with the given sampling speed. These results strongly depend on the sampling duration $t_s$. As described above, it influences the sensitivity, $(A_{\rm thresh} - 1) \propto 1/\sqrt{t_s}$, and thus at low PBH masses directly changes the event rate, $\Gamma_{\rm FS} \propto t_s$, and the duration, $\langle t_e \rangle \propto \sqrt{t_s}$. For example, observing with 1 Hz sampling, a next-generation instrument with 100 times improved sensitivity could expect ∼0.05 events per year of observation time at the peak, $M_{\rm PBH} \approx 10^{-11}\,M_\odot$, and the average duration would still be detectable. Even with this tuning of the search, no large event rate is expected.
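The quoted trigger condition (4 consecutive samples each more than 3σ above baseline at 50 Hz) can be cross-checked with a simple back-of-the-envelope rate estimate, assuming independent Gaussian noise samples:

```python
from scipy.stats import norm

p_single = norm.sf(3.0)              # one-sided tail probability, ~1.35e-3
p_trigger = p_single**4              # 4 independent consecutive samples
samples_per_year = 50.0 * 86400 * 365
print(samples_per_year * p_trigger)  # ~5e-3 fakes/yr, i.e. well below one per year
```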
Additionally, IACTs are limited to observing during the night, and a promising target star is not always expected in the FoV.

Conclusions

Imaging Atmospheric Cherenkov telescopes have proven in the past that they can be powerful instruments for fast optical astronomy. In this work, we investigate the possibility for IACTs to detect microlensing by primordial black holes in the currently unconstrained mass range of $M_{\rm PBH} < 10^{-10}\,M_\odot$. At low PBH masses, the event duration correlates linearly with the PBH mass, making a fast sampling speed with high signal to noise crucial. We find that the limiting factors for the expected event rate are the modest optical accuracy and the small number of stars that IACTs can simultaneously observe. No detectable PBH-induced microlensing events are expected over the whole VERITAS Observatory lifetime. These searches are still not competitive even assuming an increase in the sensitivity by a factor of 100 for a next-generation instrument. Besides a fast sampling speed, a good sensitivity to flux changes as well as the ability to monitor a large number of stars is also required to constrain the PBH abundance at masses $M_{\rm PBH} < 10^{-10}\,M_\odot$.
Focusing unpolarized light with a single nematic liquid crystal layer

Abstract. We describe a simple but very efficient optical device that allows the dynamic focusing of unpolarized light using a single nematic liquid crystal layer. The operation principle of the proposed device is based on the combination of an electrically variable "half-lens" with two fixed optical elements for light reflection and a 90-deg polarization flip. Such an approach is made possible thanks to the close integration of the thin film wave plate and mirror. Preliminary experimental studies of the obtained electrically variable mirror show very promising results.

Introduction

Mobile miniature cameras are undergoing continuous growth in complexity and, at the same time, are under very strong cost reduction pressure. 1 One of the highly desired features of such cameras is the autofocus function required for bar code or text scanning. Today, the widely accepted approach enabling autofocus is based on the axial mechanical movement of the camera's base lens with the help of voice coil motors (VCMs). Despite its dominant position, this technology has several drawbacks, such as tilt and gravity sensitivities, as well as limited possibilities for further miniaturization and cost reduction. This explains the emergence of alternative approaches, such as liquid 2 or elastomeric 3 lenses, based, respectively, on the deformation of an interface between two immiscible liquids and on the deformation of an elastomer. While those approaches are very elegant and polarization insensitive, they still suffer from several drawbacks: the necessity of hermetic packaging and the associated high manufacturing cost are probably the most important aspects here. At the same time, most of the thermotropic liquid crystalline (LC) materials 4,5 are hydrophobic and do not require such precautions. Their technical performance and excellent environmental robustness were already demonstrated by the LC display (LCD) application. 6 Many approaches have been demonstrated to also build nematic LC (NLC) components for optical imaging. 1,7-17 However, those materials have a fundamental limitation: they are polarization sensitive, since the refractive index modulation here is achieved by the electric field-induced reorientation of their local anisotropy axis (commonly called "director," representing the local average direction of the long molecular axes of the NLC 4). Figure 1(a) schematically demonstrates an electrically variable NLC lens using a single layer of NLC that is in the Y, Z plane (with its ground-state director being parallel with the Y-axis). The application of an external nonuniform excitation (e.g., electric field; see various methods of generating such nonuniform excitation in Refs. 1 and 7-17) may generate a lens-like orientational distribution of the director across the Y, Z plane, while the director [shown as short bars in Fig. 1(a), confined in the NLC layer] remains mainly in the X, Y plane. Such a reorientation may generate, for example, a spherical phase delay to create a lens, similar to a gradient index (GRIN) lens, the refractive index profile of which may be expressed as (see, e.g., Ref. 1)

n_GRIN(r) = n_c − r² / (2fL),  (1)

where r, f, L, and n_c are the radial distance (from the center of the lens), the focal length of the lens, the LC layer's thickness, and the refractive index at the center of the lens, respectively.
The local value of n_GRIN is defined by the local birefringence Δn of the NLC used as well as by the angle of the director with respect to the light wavevector k_i (see, e.g., Ref. 18). Such a lens may provide an electrically variable optical power (OP) 1 that may be approximately expressed as

OP = 2 δn L / r_m²,  (2)

where the OP is expressed in diopters (D = m⁻¹), δn is the effective (see hereafter) birefringence (always ≤ Δn ≡ n∥ − n⊥, with n∥ and n⊥ being, respectively, the refractive index values of the NLC for extraordinary and ordinary polarization modes of light), r_m is half of the clear aperture diameter of the lens, and L is the thickness of the NLC layer. However, such a lens may focus only the E_y polarization (extraordinary) component of the normally incident light (with the wavevector k_i) since, during the operation of the lens, the director of the NLC layer remains in the X, Y plane. Thus, the orthogonal (ordinary) polarization component E_z of light will exit from the NLC layer with a uniform phase delay and thus will not be focused. This is the reason why this single-layer lens may be called a "half-lens," since it can focus only half of the unpolarized light. Indeed, almost all practical solutions for lenses (intended to work with unpolarized light, e.g., sunlight) would require the fabrication of "full" lenses with two NLC layers having perpendicular orientations of their directors (e.g., the director of the first layer being in the X, Y plane, while the director of the second layer is in the X, Z plane) to focus the unpolarized light propagating in the X direction [Fig. 1(a)]. Those NLC layers must be positioned very closely and must perform in a similar way, focusing the two polarization components (E_y and E_z) of the incident light (with wavevector k_i) at close focal points on the X-axis. Otherwise, additional image quality degradation or "polarizational aberrations" would be generated. To the best of our knowledge, the only single-layer LC-based alternative approach, in an effort to simplify the manufacturing of such lenses, was the use of LC materials in the so-called blue phase. 19 This is a phase that may be macroscopically considered as isotropic in the ground state, which, however, undergoes anisotropic refractive index changes when subjected to an external electric field. In this case, the refractive index modifications may be similar for two orthogonal polarization components of light at normal incidence on the LC cell. While the use of a single LC layer (along with a very fast response time and the absence of alignment layers) is a very attractive feature, the voltages required to operate the lens are much higher (≈70 V), and the obtained lens still demonstrates polarization dependence at larger incidence angles.

Proposed Solution

At the same time, to increase the viewing-angle performance of LCDs, transparent solid broadband anisotropic layers have recently been introduced using liquid reactive mesogen (RM) materials, which are aligned as NLC materials and then polymerized to obtain thin polymerized RM (PRM) solid films. In the present work, we propose a simple but very efficient solution to the above-mentioned polarization dependence problem by using the well-known "quasi-isolation" scheme with a PRM quarter wave plate.
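As a quick consistency check, Eq. (2) can be evaluated with the cell parameters quoted later in the experimental section (Δn = 0.2, L = 50 μm, clear aperture ∅ ≈ 2.5 mm); assuming the effective δn reaches the full Δn, this reproduces the 12.8 D theoretical limit cited in the Discussion. A minimal sketch:

```python
# Hedged numeric check of Eq. (2): OP = 2 * delta_n * L / r_m**2.
delta_n = 0.2      # full birefringence Delta n (upper bound for delta_n)
L = 50e-6          # NLC layer thickness [m]
r_m = 2.5e-3 / 2   # half of the clear aperture diameter [m]

OP = 2 * delta_n * L / r_m**2  # optical power [diopters]
print(OP)                      # -> 12.8 D
```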
Namely, we propose to combine a single NLC layer with a very thin anisotropic layer of PRM to build a light-focusing element [an electrically variable liquid crystal mirror (EVLCM)], which may be polarization insensitive. The operation of the proposed EVLCM is based on the broadband polarization flip (rotation by 90 deg) of the reflected light. Before describing the basic concept, let us emphasize that the thin and integrated character of the PRM is key in this application (see hereafter). The basic geometry of this component is schematically demonstrated in Fig. 1(b). The ground-state director of the NLC is parallel to the Y-axis. Here also, during the operation of the EVLCM, the director of the NLC layer remains in the X, Y plane. The anisotropic PRM layer is built to perform as a broadband quarter wave plate (λ/4). Its anisotropy axis C is aligned at 45 deg with respect to the Y-axis. For simplicity, only one original (initial) polarization component (ordinary, E_z) of the incident light is shown. This component (light is incident from the left) will exit from the NLC layer with a transversally uniform phase front (and thus without focusing). However, it will further traverse the quarter wave plate (λ/4), will be reflected from the mirror M, and will traverse the same quarter wave plate (λ/4) a second time, but in the opposite direction. This double passage will rotate its polarization by 90 deg. Thus, the obtained beam will enter (from the right) into the NLC layer as extraordinary polarized light (along the Y-axis). In this case, a lens-like phase front modulation (and focusing) will be generated by the NLC layer. Obviously, the perpendicular initial polarization component E_y of the incident light (from the left) will be focused on its first passage through the NLC layer and will not undergo further focusing during its second passage (from the right), since it will then have ordinary polarization. While the combination of a quarter wave plate with a reflecting element has already been used, the current solution uses the very thin and closely integrated characters of the PRM and thin-film mirror technologies to ensure that the two effective focusing NLC layers are virtually very close to each other. Otherwise, the proposed tunable mirror would have unacceptable aberrations (see below).

Experimental Conditions and Results

To validate the proposed scheme, we have used a half-lens provided by LensVector (see, e.g., Ref. 1). Its clear aperture was ∅ ≈ 2.5 mm, the optical birefringence of the NLC used (a homemade mixture) was Δn = 0.2, and the thickness L of the planar-oriented [thanks to polyimide (PI) layers] NLC was 50 μm. The lens used the well-known "modal control" scheme, 1,8 where the bottom and top substrates of the cell are coated, respectively, by uniform and hole-patterned (HP) indium tin oxide (ITO) electrodes (Fig. 2). The uniform electrode is grounded. The PI alignment layers are rubbed in antiparallel (180 deg) directions, thus generating an approximate pretilt angle of 3 deg. This pretilt is responsible for the preferable "asymmetric" molecular reorientation when a centrosymmetric electric field is applied to the cell via the HP ITO (see, e.g., Ref. 20). The smooth lens-like profile of the electric field (inside the NLC layer) is obtained thanks to a very thin weakly conductive layer (WCL) made of ZnO (coated on top of the HP ITO) providing several tens of MΩ/□ sheet resistance.
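The 90-deg flip at the heart of the scheme can be verified with standard Jones calculus: a quarter wave plate at 45 deg traversed twice (with a mirror in between) acts as a half-wave plate at 45 deg, which exchanges the two linear polarization components. A minimal numpy sketch, with the mirror idealized as an identity on the Jones vector (handedness and sign conventions ignored):

```python
import numpy as np

def waveplate(retardance, theta):
    """Jones matrix of a linear retarder with its fast axis at angle theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.diag([1.0, np.exp(1j * retardance)]) @ R.T

qwp = waveplate(np.pi / 2, np.pi / 4)   # lambda/4 plate, axis C at 45 deg
mirror = np.eye(2)                      # idealized mirror M
double_pass = qwp @ mirror @ qwp        # equivalent to a half-wave plate at 45 deg

E_in = np.array([0.0, 1.0])             # ordinary (E_z) polarization in
E_out = double_pass @ E_in
print(np.abs(E_out)**2)                 # -> [1, 0]: polarization rotated by 90 deg
```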
The WCL helps to propagate the electric potential from the periphery of the HP ITO toward the center of the cell. This softens the electric field's profile in the NLC layer and minimizes the optical aberrations. The frequency of the electrical signal applied to the HP ITO is changed from 1 kHz (when the electric field is uniform in the NLC layer and thus the OP of the cell is zero; there is no light focusing) to 5.5 kHz (when the electric field is noticeably reduced in the center of the cell compared with the periphery and thus the NLC's reorientation is lens-like and the cell focuses light). It may be useful to mention that, at 1 kHz, the driving electric field is not only uniform (thanks to the WCL) but also perpendicular to the cell substrates. In this case, thanks to the above-mentioned pretilt, the angle between that electric field and the ground-state molecular orientation is the same everywhere in the cell. Consequently, the electric field-induced molecular reorientation is performed in one direction everywhere in the cell (from the Y-axis, Fig. 2, toward the X-axis). When the frequency of the electric field is continuously increased up to 5.5 kHz, the field becomes nonuniform and curved. The molecular reorientation adapts correspondingly, while still keeping the asymmetric (Fig. 2) reorientation and avoiding orientational defects (this is the so-called "frequency tuning" concept of LensVector Inc. 21). The experimental setup used for the characterization of the electrically variable mirror is described in Fig. 3. An unpolarized CW He-Ne laser beam (operating at 632.8 nm) was used as the probe. A telescopic system was used to adjust the diameter of the beam (>3 mm). A polarizer (mounted in a rotating frame) was used to choose the desired initial polarization of the probe. A polarization-insensitive beam splitter (BS) was used to forward the light reflected from the EVLCM toward the Shack-Hartmann wave front sensor (SH), purchased from Thorlabs. An additional fixed lens (with F = 15 cm focal length, positioned at a 2F distance from the sample and the SH) was used to image the output (the same as the input) plane of the EVLCM onto the input of the SH. The incident planar and reflected spherical wave fronts are shown as Incid. and Refl., respectively. The experimental procedure was as follows. The wave front of the light reflected from the EVLCM for a given initial input polarization (ordinary or extraordinary) was studied at different electrical excitation values, generating various wave front curvatures and corresponding OPs. Simultaneously, the root mean square (RMS) wave front errors (in micrometers) were also measured for each OP. Then, the direction of the initial polarization was rotated by 90 deg (by the polarizer) and the same experiment was conducted again. The obtained preliminary (non-optimized) results are presented in Fig. 4. As we can see, the variation of the electrical excitation frequency (the typical control mode for LensVector lenses; using an alternating current sinusoidal-shaped electrical signal with an RMS amplitude of 6.4 V) allows the gradual change of the OP (in diopters) of the EVLCM [squares; on the left vertical axis of Fig. 4(a)]. At the same time, we can observe a corresponding change of RMS aberrations (circles, right vertical axis). We can also notice some undesired differences of OPs for the cases of initial polarizations being parallel (filled squares) or perpendicular (open squares) to the ground-state director of the NLC.
There is also a relatively small difference in the RMS aberrations for the corresponding polarization components (parallel, filled circles; perpendicular, open circles) of the probe. Figure 4(b) shows the three-dimensional picture of a typical wave front detected by the SH sensor.

Discussions

We can first use the lens parameters and Eq. (2) to determine the theoretical limit of the possible maximum achievable OP. It appears to be OP = 12.8 D. The maximum OP obtained in our experiments was approximately 30% less, which is not surprising since the effective optical path difference (OPD) in an NLC layer is typically given by the equation

OPD = ∫₀ᴸ δn(x) dx,  (3)

where δn(x) = n_e(x) − n⊥ is the local value of the effective birefringence (the difference between the refractive indices of extraordinary and ordinary polarization modes), which changes along the propagation of light on the X-axis. As we have already mentioned, this value is always ≤ Δn, since the molecular reorientation is nonuniform in the NLC cell (more in the center of the cell than close to its substrates). A more important aspect of the experimentally obtained results is the mismatch of OPs for the two perpendicularly polarized modes [Fig. 4(a)]. As can be seen, the OP difference is less than ±1 D at 8.5 D of OP (at 5.25 kHz driving frequency). This is not so bad given the fact that, to form an image, such a mirror must be combined with a fixed-focus lens of 250 D to 300 D of OP. However, we have performed some additional studies (not reported here) that have shown that the undesired residual polarization dependence was caused by our homemade PRM film. Indeed, it appeared that the film was not performing exactly as a quarter wave plate, and the generated relative phase delay Δφ = 2π δn_PRM L_PRM / λ (where δn_PRM, λ, and L_PRM are, respectively, the PRM's anisotropy, the light wavelength in vacuum, and the thickness of the PRM) was slightly different from π/2. Thus, the desired 90-deg rotation was not performed accurately and some ellipticity of the back-propagating light polarization was generated. Thus, a part of the light (initially of ordinary polarization) was not focused, and the SH sensor interpreted this as a lower OP (open squares) and higher aberrations (open circles) [Fig. 4(a)]. For the same reasons (the PRM film is not yet optimized), we did not study the angular properties of our homemade quarter wave plate. Such a study would be necessary to evaluate the possible angular dependences of the value of Δφ and its influence on the angular performance of the proposed mirror when integrated in various optical imaging systems. However, for practical reasons, we prefer to use commercial polymeric quarter wave plates, which are presently fabricated in large volumes for the roll-to-roll fabrication of displays by many manufacturers (e.g., JX Nippon Oil & Energy Corporation). At the same time, thin-film reflection or antireflection coatings are already used in large window applications. Both of those components are thus well mastered and have low cost. We think that the proposed mirrors may be used in both optical autofocus and zoom in architectures applying light reflection (at least one of the mirrors being the EVLCM proposed here). Indeed, today's cameras (sometimes 6 units per mobile phone!) are continuously shrinking in size, having lateral sizes of 5 mm × 5 mm, which enables their use with the camera aligned with its optical axis parallel to the main screen of the mobile device (the typical thickness of which is about 10 mm).
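The impact of such a retardance error can be estimated with the same Jones formalism as above: for a per-pass retardance of π/2 + δ at 45 deg, the double pass leaves a power fraction sin²δ in the original (unfocused) polarization. A minimal sketch with hypothetical error values δ:

```python
import numpy as np

def unflipped_fraction(delta):
    """Power fraction left in the original polarization after a double pass
    through a retarder at 45 deg with per-pass retardance pi/2 + delta
    (standard Jones-calculus result: sin(delta)**2)."""
    return np.sin(delta)**2

for err_deg in (1.0, 5.0, 10.0):    # hypothetical retardance errors [deg]
    print(err_deg, unflipped_fraction(np.radians(err_deg)))
# e.g. a 5-deg error leaves ~0.8% of the power unfocused
```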
Referring to Fig. 1(b), the EVLCM may be designed to operate at 45 deg, so that the incident light (along the X-axis) may be reflected, e.g., first in the −Y direction and then in the −Z direction, to form the image. It might be important to notice that such sizes almost exclude the use of VCM technology to implement advanced functionalities (it is too difficult to further miniaturize), while the LC lenses and mirrors may be used in even smaller architectures.

Conclusion

Thus, we can already say that the proposed solution may be used for many telescopic optical schemes. So, despite the above-mentioned missing elements, we think that the proposed element is very promising from multiple points of view. First of all, it uses a single NLC layer to focus unpolarized light, which greatly simplifies the manufacturing process and thus reduces the cost of the element. Second, the PRM layer and the mirror may be built (coated) directly on the internal surface of the NLC cell's back substrate, providing a minimal distance D between the two "effective" NLC layers that may be reduced to approximately D ≈ 50 μm. Let us note that in "traditional" lenses (using two crossed half-lenses) this distance is defined at least by twice the cell substrate thickness (ranging from several hundred micrometers to a millimeter). Finally, the required voltages here are an order of magnitude lower compared with blue-phase LCs, and the NLCs used here are better mastered at the industrial level.
Are Traditional NPD Processes Relevant to IoT Product and Service Development Activities? A Critical Examination

Increasingly, physical products are being equipped with sensors, which connect them to the Internet; the network of these 'smart products' is acknowledged as the Internet of Things. These digitalized artefacts have a wide variety of material properties that could include a range of outcomes, such as new products, platforms, services, and other value pathways that differentiate them from their non-digital counterparts. Practitioners and researchers acknowledge that these differences tremendously influence IoT product and service development processes. These differences are significant for IoT firms that occupy the market; due to a paucity of established literature on this theme, it is difficult to find studies on NPD processes that reflect this digitization. This is an exploratory paper that examines current literature prior to further empirical data collection. Through a critical examination of the literature, this paper examines how smart product development processes differ from traditional product development processes. Thus, this paper offers critical insights and observations to enable both practitioners and academics to ascertain a detailed understanding of diverse approaches to NPD process activities for the IoT.

Introduction

By 2020, it is estimated that 50 billion devices around the world will be connected to the Internet (Cisco, 2011). This seemingly recent trend has been decades in the making, but is at a critical tipping point now (Burkitt, 2014). The IoT era represents a transformative shift for the economy in which other major technology industry trends (e.g., cloud computing, data analytics, and mobile communications) can be incorporated (ibid, 2014). At present, the Internet of Things remains a fertile field for enterprises, such that one in every six businesses is planning to roll out an IoT-based product (ibid, 2014). It is anticipated that the number of IoT products will soon overtake the number of connected individuals: Gartner (2014) forecasts that the IoT will reach 26 billion units by 2020, up from 0.9 billion in 2009, and will affect how organisations develop new products and services. Consequently, the 'Internet of Things' (IoT) is becoming a popular theme of exploration amongst academics and industry practitioners, i.e. a new technologically oriented paradigm regarded as a vision of connectivity, for anything, at anytime and anywhere, with an impact on everyday life more dramatic than the Internet had in the past twenty years (ITU, 2005). IoT is also defined as "Interconnected objects having an active role in what might be called the Future Internet" (European Commission's Information Society, 2008). Such interconnected objects, so-called 'smart products', have the capability to retrieve, store and share massive amounts of data, which is also transforming business (Pisano, Pironti, & Rieple, 2015) and NPD processes (Johnson, Friend & Lee, 2017). Moreover, this pervasive adoption of, and innovation with, digital technologies is radically changing the nature of products and services (Yoo, Boland, Lyytinen & Majchrzak, 2012) and is therefore influencing NPD processes for smart products and services.
Study Rationale

With the emergence of IoT as a new source of huge volumes of data, businesses face new opportunities as well as new challenges (Porter & Heppelmann, 2014). Not only manufacturers but also many different service industries are in the process of adopting the IoT to increase revenue through enhanced services and to lead the market (Lee & Lee, 2015). The adoption of this technology is rapidly gaining momentum, since technological, societal, and competitive pressures push companies to innovate and transform themselves (ibid, 2015). Although researchers and practitioners often critically debate the opportunities and challenges of adopting the IoT, little attention has been focused on the New Product Development process for IoT, arguably one of the most critical marketing planning and implementation process activities. It is difficult to identify a generic NPD process that reflects the rapidly growing digitization; as such, there is a paucity of studies on the differences between traditional NPD models and emergent approaches towards IoT product and service development models.

Research Goal, Questions and Methods

This paper forms part of a wider study of IoT product development practices, and is focused on a critical examination of established NPD and NSD processes as they relate to the development of IoT products and services. Its primary aim is to contribute to a deeper understanding of IoT product and service development processes. The paper provides insights with regard to establishing new approaches to the IoT product development process, which could then enable academics and industry practitioners to better understand the process of developing IoT products and services. The following research questions will be both offered and critically debated:
• What are the characteristics of existing NPD and NSD processes and their relevance to IoT product and service development activities?
• What are the key factors affecting the development of IoT NPD processes, and how do they differentiate these processes from their non-digital counterparts?
• What are the new attributes required by IoT product and service development activities?
In order to answer these primary questions, this paper presents a common understanding of established NPD and NSD processes, then offers a summary of relevant IoT trends, and closes with implications for emergent approaches towards the IoT. The first stage focuses on issues surrounding traditional NPD (Cooper, 1990; Booz, Allen & Hamilton, 1982; Trott, 2012; Takeuchi & Nonaka, 1986; Pennell, Winner, Bertrand, & Slusarczuk, 1989; Crawford, 1997; Baker & Hart, 1999; Cooper, 1994; Smith, 2007), the NSD process (Johnson, Menor, Chase, & Roth, 2000) and the innovation process, to identify the common characteristics of existing development processes for products and services. Secondly, it examines the key factors in digital innovation that affect approaches towards the development of hybrid products and services, including the six dimensions of digital innovation (Yoo, Lyytinen, Boland & Berente, 2010); the three dimensions of big data (McAfee & Brynjolfsson, 2012; Meta Group, 2001); new opportunities in the digital age (Henfridsson, Mathiassen & Svahn, 2014); and three traits of innovations (Yoo et al, 2012). Finally, based on the study of NPD for IoT products and services, guidelines for developing smart products and services are offered.
The Internet of Things (IoT)

The 'Internet of Things' is the combination of physical components (hardware), smart components (sensors, software and data analytics) and connectivity (wired or wireless connections), which together enable improved value creation. The smart components elevate the capabilities of the physical product, whilst the connectivity components enhance the capability of the smart components. Connectivity gives IoT products both the capability to exchange information between the product and its environment, whether that is its user, the manufacturer or other smart products, and the capability to offer functions that exist outside the physical device (Porter & Heppelmann, 2014). IoT products provide geometrically expanding opportunities for new functionality, greater reliability, higher product utilization, and capabilities that cut across and exceed traditional product boundaries (Porter & Heppelmann, 2014). IoT is penetrating a wide range of industries including retailing, manufacturing, healthcare, insurance, home appliances, heavy equipment, airlines and logistics (Lee & Lee, 2015). These new types of products not only externally alter industry structures and boundaries, but also internally re-shape the value chain by changing product design, marketing and manufacturing, and by demanding product data analytics (Porter & Heppelmann, 2014). Giudice (2015) argues that the IoT is even reshaping innovation processes connected to products and services, as well as business process management in many sectors. Whether companies pursue a get-ahead strategy or a catch-up strategy (firms implementing a get-ahead strategy use their innovation reputation to differentiate themselves from competitors; firms implementing a catch-up strategy remain efficient in order to survive, grow, and even overtake the leader's position), all companies expect to have appropriate NPD processes for developing their own IoT products or services, in order to take advantage of IoT innovations in the future. A new product development process for IoT is therefore emerging, in which products consisting of electrical and mechanical parts become intelligent systems that combine hardware, software, control sensors, data storage and connectivity in countless ways. With so much potential value in the investment in IoT technology, organisations need an appropriate and efficient NPD process to minimize the risk of failure. As such, this paper will now review how traditional NPD and NSD have evolved and their characteristics, before exploring their relevance to IoT and new approaches towards IoT product and service development.

New Product Development and New Service Development

As contemporary competitive pressure and the pace of technological change increase, corporations face the challenges of increasing efficiency, creating breakthroughs, and pre-empting competitors (Meyer & Utterback, 1995; Kessler & Bierly, 2002). In this context, the continual development of new products is generally acknowledged as a requirement for companies' growth and long-term prosperity. Consequently, the subject of New Product Development (NPD) has gained a considerable amount of attention from product development professionals and researchers over recent decades (Durisin, Calabretta, & Parmeggiani, 2010; Machado, 2013).
A large number of academics have defined new product development: for example, as the transformation of a market opportunity and a set of assumptions about product technology into a product available for sale (Krishnan & Ulrich, 2001). Bruce and Cooper (2000) argue it is a term used to capture a range of different types of innovative activities leading to the production of a new service or product, from radical innovations to simple modifications and adaptations of existing products. Awwad and Akroush (2016) identify NPD efficiency as the most significant element in determining a company's competitiveness and survival, because it distinctively affects a firm's financial performance. Thus, NPD is a fundamental business activity: "the development of new and improved products for the survival and prosperity of modern corporations" (Cooper, 2005). However, managing new product development is a challenging, complex and risky process (Bruce & Cooper, 2000; Goffin & Koners, 2011): the failure rate of NPD is estimated at about 40% of new products at launch, and only 13% of companies report that their total NPD efforts hit their annual profit objectives (Cooper, 2017). Hauser and Dahan (2007) argue that a good NPD process is critical because it allows firms to efficiently manage the inherent risk of new product development. Numerous NPD process models are characterized as step-wise approaches, such as the stage-gate system (Cooper, 1990) or the Booz, Allen and Hamilton model (Booz, Allen & Hamilton, 1982), which managers are recommended to use (Harmancioglu, McNally, Calantone, & Curmusoglu, 2007); these are clear and useful in terms of management (Eveleens, 2010), and effectively act as a blueprint for organizations to follow and adapt as required (Oorschot, Sengupta, Akkermans, & van Wassenhove, 2010). However, these sequential models are regarded as relatively simple standard processes for NPD (Tidd & Bessant, 2005; Bruce & Cooper, p.11, 2000) and as too prescriptive and mechanistic, failing to take into account the overlaps of activities that occur naturally in the workplace (Bruce & Cooper, p.11, 2000). Moreover, these models can increase cycle time (Schilling & Hill, 1998), so that, despite their strengths, the weaknesses apparent in the sequential models led to the development of more complex models.

Figure: The Stage-Gate model. Source: Cooper, R. G., 1990

Several researchers have applied sequential models to service development activities (Stevens & Dimitriadis, 2005). Johnson, Menor, Chase, and Roth (2000) developed a model describing the NSD sequence, which identifies 4 broad stages and 13 tasks that must be conducted to launch a new service, along with the components of the organisation that are involved in the process. Although sequential NSD models provide a descriptive view of ongoing processes, they suffer from three major weaknesses, as linear NPD models do: 1) time-consuming and overly bureaucratic processes slow projects down (Cooper, 1994); 2) the stages do not describe how firms organise and integrate development teams (Stevens & Dimitriadis, 2005); 3) linear models do not help to define what must be produced during each stage (ibid, 2005).
Moving from this idea to focus attention on the project as a whole rather than on individual stages (Trott, 2012), radically different, simultaneous approaches have emerged, such as parallel processing models (Takeuchi & Nonaka, 1986), Concurrent Engineering (Pennell, Winner, Bertrand, & Slusarczuk, 1989), Activity-stage models (Crawford, 1997), the multiple convergent model (Baker & Hart, 1999) and the Third-generation model (Cooper, 1994). The key benefit of parallel processing NPD models is that they decrease time to market by reducing cycle time (Anderson, 1993), and they emphasise the need for a cross-functional approach (Trott, 2012). However, adopting parallel processing requires more effort from all departmental functions, more effective management, and large-scale changes in organisational routine (Bruce & Cooper, 2000), so that most organisations are not willing to deal with the changes involved in altering their traditional method of NPD (Anderson, 1993).

Figure: Left: Parallel processing model. Source: Takeuchi & Nonaka, 1986. Right: Activity-Stage Model. Source: Crawford, 1997

Chesbrough's (2004) open innovation concept is not presented as a model of NPD as such; however, it has been highly influential in the areas of R&D management, innovation, and NPD. It emphasises the significance of external network interactions in relation to NPD activities, a phenomenon that has grown due to a number of factors, such as the shortening of the product life cycle, the aggregation of global competition and the rising costs of research and development (Caputo, Lamberti, Cammarano, & Michelino, 2016). More recently, flexible product development, the ability to make changes to the product being developed or in how it is developed (Smith, 2007), has meant that agile methodologies have begun to attract interest from developers of physical products (Cooper, 2014; Ovesen & Sommer, 2012) who have experienced the limitations and challenges of traditional waterfall product development approaches. The agile development method, devised for software development, is based on an iterative and incremental process consisting of a number of short development cycles, known as sprints (Beck, Beedle, van Bennekum, Cockburn, Cunningham, & Fowler, 2001). It is argued that these 'sprints' improve communication and coordination activities, speed to market, and the speed of response to changed customer requirements or technical challenges (Begel & Nagappan, 2007). However, since agile methodology was devised for software development, some challenges for manufacturers adopting agile practices have been identified, i.e. a lack of scalability, a proliferation of meetings, and a lack of effective management (Cooper, 2017). The development of NPD models (Rothwell, 1994; Ortt & Duin, 2008; Cooper, 2016) is marked by several trends of particular and increasing significance. Firstly, an organisation's capability to develop new products quickly has become an increasingly important determinant of competitiveness in recent years, specifically in industries where product cycles are short and rates of technological change are high (Rothwell, 1994; Goktan & Miles, 2011; Cooper, 2014; Chang & Taylor, 2016). Secondly, simultaneous processing is regarded as another key factor for successful NPD management, with cross-functional teams working independently, which improves the speed, efficiency and flexibility of the NPD process (Williamson & Yin, 2014). Thirdly, unlike NPD approaches in the industrial era, which chiefly relied on information from internal research, recent approaches (e.g.
open innovation) are more likely to look outside the company, to customers, competitors, and suppliers, in order to find new strategic partners and build comprehensive networks that deliver more value and competitive advantage (Chesbrough, 2006). Although a comprehensive body of literature surrounding NPD approaches has been discussed, and indeed successfully applied (Cooper, 1994; Chesbrough, 2003; Hansen & Birkinshaw, 2007; Sheu & Lee, 2011; Williamson & Yin, 2014), it is clear that established NPD processes are still too time-consuming, with many stages that are either simply a waste of time or cost-ineffective (Cooper, 2016; Ortt & Duin, 2008; Sheu & Lee, 2011; Cooper, 2014). More importantly, NPD processes reflecting the nature of IoT product and service development are limited in number. Emerging NPD approaches for IoT are now required, since the field of innovation management must adapt to a changing economic, societal and technological context in this digitized era. Therefore, the attention of this discussion will focus upon the key factors that influence the development process for IoT products and services, and how these differ from existing NPD processes. Yoo et al (2010) argue that to understand the nature of digital innovation, one must understand how digital technology differs from earlier technologies; in other words, the characteristics of digital technologies: reprogrammability, the homogenisation of data, and the self-referential nature of digital technology. Reprogrammability refers to a digital device being reprogrammable, allowing it to perform a wide array of functions (Yoo et al., 2010). The homogenisation of data means that any digital content that can be stored, transmitted, processed, and displayed can easily be combined with other digital data to deliver diverse services, which blurs the boundaries of products and industries (ibid, 2010). Finally, self-referentiality means that digital innovation requires the use of digital technology, which fosters further digital innovation through a virtuous cycle of lowered entry barriers, reduced costs, and accelerated diffusion rates (ibid, 2010).

What factors differentiate traditional NPD from NPD for IoT?

The six dimensions of digital innovation identified by Yoo et al (2010) are associated with both innovation outcomes (convergence and digital materiality) and innovation processes (generativity, heterogeneity, locus of innovation, and pace). The first dimension of digital innovation is digital convergence. Since digitized technologies share the same infrastructural capabilities, which open novel opportunities for products and services (Tilson et al, 2010), convergence refers to the continuous integration of diverse and heterogeneous technologies through the homogenization of digital data (Yoo et al, 2010). This changes the nature of products, turning them into digital platforms: e.g., the automobile has become a mobile computing platform (ibid, 2010). More artefacts interacting with other digital devices provide novel user experiences: e.g., GPS (Global Positioning System) services in mobile phones, when combined with cars or clothing, deliver an array of services and innovations that connect previously disconnected customer experiences and create a new kind of virtual-physical world (ibid, 2010).
Consequently, digital convergence will affect the process of developing IoT products and services: firms need to differentiate their user-experience offerings, but must also consider the combination of devices, services and content, as well as the interactions with other competing digital devices and the environment in which the IoT products operate. Secondly, digital materiality differentiates NPD processes significantly from their counterparts grounded in physical materiality. Physical materiality refers to artefacts that can be seen and touched, and that are generally hard to change, whereas digital materiality refers to what the software incorporated into an artefact can do by manipulating digital representations (Yoo et al, 2012); this allows designers to expand existing physical materiality by 'entangling' it with software-based digital capabilities (Yoo et al, 2010; Zammuto, Griffith, Majchrzak, Dougherty, & Faraj, 2007) when they develop IoT products and services. IoT products are defined not only by their physical materiality but also by the fundamental functionality enabled by digital materiality. Trainers or toothbrushes are examples of physical materiality; however, when a microchip in the trainers can be programmed to record the user's amount of physical activity, or one in the toothbrush to record the health of the user's gums, the product presents new experiences through digital materiality. Generativity refers to the way actors who were not directly involved in the original creation of a technology begin to create devices, services, and content that may not be consistent with the original purpose of the artefacts (Zittrain, 2006). An illustrative example of generativity is the smartphone with apps: due to its reprogrammable nature, novel functions or capabilities can be added after a device has been produced and launched (Yoo et al, 2012). Higher levels of generativity allow higher numbers of novel ideas, which result in faster innovation cycles with increased iteration, and in innovation processes that are more dynamic and agile than linear approach models (Yoo et al, 2010). Heterogeneity refers to the integration of diverse forms of data, information, knowledge, and tools, while locus of innovation refers to the dramatic geographical and social dispersion of innovation sites and processes due to low communication and storage costs (ibid, 2010). New forms of innovation, such as crowdsourcing and open source, enable the locus of innovation to move from inside an organisation to its periphery and edges (ibid, 2010). Both affect IoT product and service development processes by enabling independent innovation at different layers of the digital service architecture, and by shifting innovation activities towards the periphery of the innovation network (ibid, 2010), both physically and geographically. As a number of the innovations spurred by Apple's iPhone came from app developers rather than Apple itself, the de-centering of innovation activities pushes intelligence toward the edge of the organisation's enlarging network (ibid, 2010). The last dimension of digital innovation is pace. Pace refers to how frequently firms need to innovate, the speed of innovation, and the required speed of diffusion (ibid, 2010). Increased pace affects IoT product development processes, in which innovation needs to be continuous, incessant, and fast, and allows an industry to increase the role of digital artefacts (ibid, 2010).
Unlike traditional products, which have a fixed, discrete set of boundaries and features, the distinctive characteristics of IoT products are that they are malleable, editable, open, transferable, etc. (Yoo et al., 2010; Zittrain, 2008; Henfridsson et al., 2014), delineated as "ambivalent ontologies" (Kallinikos, Aaltonen, & Marton, 2013). The scope, features and value of digital offerings can continue to evolve even after the innovative product has been launched or implemented; thus, a new approach towards IoT product and service development should be identified. Moreover, most IoT designs are launched incomplete and in a state of flux, in which both the scale and the scope of the innovation can be expanded by various participating actors (Hanseth & Lyytinen, 2010). This conveys an unprecedented level of unpredictability and dynamism with respect to the assumed structural or organisational boundaries of digital innovation, be it a product, platform, or indeed a service.

Emergent approaches towards developing IoT-based products and services

Although studies on emergent approaches towards developing IoT-based products and services are few in number, Figure 6 (below) presents a new approach, developed as a process for designing digital public space by Jacobs & Cooper (2018). This model was developed by combining existing NPD models, focusing on the underlying principles and related tools that must be taken into consideration when designing Digital Public Space (Jacobs & Cooper, 2018).

Figure 6. A new process for designing IoT products and services. Source: Jacobs & Cooper, 2018

One of the most distinctive attributes, compared with existing NPD and NSD processes, is that the new approach is not linear but a continuous and emergent process, whereby the Discovery phase enables co-design and collaboration to uncover the requirements and attributes crucial for the space; the Define phase uses narratives, scenarios and fictions to visualise and test the design idea before the Development phase, through which the products and services are created with users and lead adopters and implemented, with in-use insight revealing emergent and new qualities that feed another cycle of discover, define and develop (Jacobs & Cooper, 2018). This is because, unlike tangible components, which acquire their functionality at the time of production, digital components in IoT are able to modify subsidiary functionality, add supplemental functionality, or introduce entirely new functionality over the product lifecycle (Henfridsson et al., 2014). Not only reprogrammability, one of the characteristics of digital technology, but also digital materiality and pace, two of the six dimensions of digital innovation, mean that the scope, features and value of IoT products and services can continue to evolve even after the innovation has been launched. Consequently, NPD processes for IoT have a continuous, never-ending process cycle, which means that IoT products and services are able to keep evolving for enhanced customer experiences. Secondly, the process should contain a short cycle of discover, define and develop phases, which is comparable to the 'agile' approach, an existing software development approach with shorter, faster iterations within the process. This approach is feasible in IoT product and service development processes due to pace and generativity, and to the dimensions of big data that are commonly referred to as the 3Vs: volume, velocity, and variety (McAfee & Brynjolfsson, 2012; Meta Group, 2001).
As big data helps companies acquire massive volumes of diverse market information promptly, so that they can more easily meet customer needs (Slater & Narver, 1995; Zhang, Wu, & Cui, 2015), it underpins the new approach towards the development process for IoT. Companies that use big data and analytics in their innovation processes have been found to be 36% more likely to beat their competitors in revenue growth and operating efficiency (Marshall, Lievens, & Blazevic, 2015). Another attribute can be explained by one of the traits of innovation associated with pervasive digital technology: the emergence of distributed innovation (Yoo et al., 2012). Although it is not clearly shown in the model above (Figure 6), during the process of developing IoT products and services, control over innovation activities is distributed across organisations (Chesbrough, Vanhaverbeke, & West, 2006; von Hippel, 2005), because the use of IT reduces communication costs and thereby democratises the innovation process, involving more distributed actors; this reflects the self-referential nature of digital technology, the locus of innovation, and generativity (Yoo et al., 2010). Value in the IoT will be created through the transformation of customer experiences; companies need strong capabilities in experience design (Burkitt, 2014), as offerings become more entwined in a collaborative network of technology, people, and other offerings (Jacobs & Cooper, 2018). In essence, designing for the Internet of Things requires the design of: the physical object; its software interface; its hardware interface; how it interacts with other devices over the network; and how it is represented on a network to people and to other devices (Jacobs & Cooper, 2018). This indicates that design for IoT can encompass and influence a wide range of design disciplines.

Conclusion

As novel and challenging as today's IoT is, it offers fertile opportunities for long-term sustainable growth for the organisation. Due to its nascent status, there is still a paucity of academic studies on the development process for IoT products and services, which is one of the most critical marketing planning and implementation process activities. Although the demand for a new approach towards developing new IoT products and services has received widespread attention, there are limited studies that focus upon this emergent topic. Connected devices offer new possibilities for everything from pre-emptive maintenance to new services and business models. In order to prepare for what is coming, business managers need to consider the new aspects of the IoT development process in relation to their own business and ecosystem of partners, as well as emerging technology. The main purpose of this study is to examine traditional NPD and NSD processes, considering the factors that differentiate IoT products from traditional products, in order to investigate whether they are relevant to NPD activities for IoT. In exploring this theme, the paper draws attention to the primary research questions at large: What are the characteristics of existing NPD and NSD processes and their relevance to IoT product and service development activities? What are the key factors affecting the development of IoT NPD processes, and how do they differentiate these processes from their non-digital counterparts? What are the new attributes required for IoT product and service development activities?
The authors have argued that the characteristics of existing NPD and NSD processes can be summarised as follows: established NPD processes are too time-consuming, with many stages that are either simply redundant or cost-ineffective; a firm's capability to develop new products efficiently has become an increasingly important factor; and simultaneous processing is regarded as another key factor for successful NPD management. Development processes are also more likely to involve customers, competitors, and suppliers. Although NPD and NSD processes are evolving and overcoming their shortcomings, traditional NPD approaches do not reflect the nature of IoT product and service development. Yoo et al (2010) identified six dimensions of digital innovation, which are associated with innovation outcomes (convergence and digital materiality) and innovation processes (generativity, heterogeneity, locus of innovation and pace). Three characteristics of digital technology, reprogrammability, the homogenisation of data, and the self-referential nature of digital technology, were also uncovered by Yoo et al (2010). The dimensions of big data, referred to as the 3Vs (volume, velocity, and variety), are relevant to the process of developing IoT products and services, as identified by McAfee and Brynjolfsson (2012) and Meta Group (2001). Henfridsson et al (2014) identify influences of digital technology that affect design flexibility and design scalability. These key factors are identified as the main reasons why the IoT development process departs from existing NPD processes. However, not all of the factors deeply influence the IoT design and development process. Factors such as reprogrammability, digital materiality, pace, generativity, the self-referential nature of digital technology, the dimensions of big data, and the locus of innovation are the ones most closely related to the differentiation of IoT NPD processes from their non-digital counterparts. Finally, this paper referred to a new process for designing Digital Public Space (Jacobs & Cooper, 2018) in order to explain the three attributes required for IoT product and service development activities: the new approach should be a continuous and emergent process; the development process should contain a short cycle of discovery, definition and development phases; and the activities during the process of developing IoT products and services should involve input from more distributed actors and stakeholders. Although this paper has explored issues related to the NPD/S process for the IoT, there are some limitations that need to be addressed by further research. Firstly, further key factors need to be considered, such as the size of the IoT ecosystem in which new products and services are developed, alongside the dimensions of digital innovation, artefacts and technology, and the IoT firm's business strategy in comparison to a traditional company strategy, although it is fair to say that many businesses will be engaged in both traditional NPD and IoT NPD. Secondly, this paper relies solely on a limited literature review in order to identify new approaches towards developing new products and services for IoT, an area where only a limited number of studies have been published. Consequently, this paper has identified related and practical questions for further research: What are the key factors that differentiate traditional NPD from emerging NPD for IoT in terms of business strategy and process changes? Is there a generic IoT product and service development process in the IoT industry?
And finally, what is the NPD process for IoT firms that creates meaningful value and increased turnover for all of their primary stakeholders?
Sequence-Specific Targeting of Bacterial Resistance Genes Increases Antibiotic Efficacy

The lack of effective and well-tolerated therapies against antibiotic-resistant bacteria is a global public health problem leading to prolonged treatment and increased mortality. To improve the efficacy of existing antibiotic compounds, we introduce a new method for strategically inducing antibiotic hypersensitivity in pathogenic bacteria. Following the systematic verification that the AcrAB-TolC efflux system is one of the major determinants of the intrinsic antibiotic resistance levels in Escherichia coli, we have developed a short antisense oligomer designed to inhibit the expression of acrA and increase antibiotic susceptibility in E. coli. By employing this strategy, we can inhibit E. coli growth using 2- to 40-fold lower antibiotic doses, depending on the antibiotic compound utilized. The sensitizing effect of the antisense oligomer is highly specific to the targeted gene's sequence, which is conserved in several bacterial genera, and the oligomer does not have any detectable toxicity against human cells. Finally, we demonstrate that antisense oligomers improve the efficacy of antibiotic combinations, allowing the combined use of even antagonistic antibiotic pairs that are typically not favored due to their reduced activities.

Introduction

Antibiotic resistance is an important public health problem that emerged shortly after the discovery of antibiotics [1,2]. Pathogenic bacteria are either intrinsically resistant to some antibiotics or they acquire resistance via spontaneous mutations or horizontal gene transfer. These resistance mechanisms include deactivation or modification of antibiotics, pumping out antibiotics via efflux pumps, protection of antibiotic targets, and mutations in the target enzymes that decrease antibiotic affinity [3]. Even though the majority of these resistance mechanisms are well characterized at the molecular level, there has been limited success at avoiding the evolution of resistance in the clinic. There is a growing need for entirely new tools and strategies in order to stop or slow the evolution of antibiotic resistance in clinical settings [4]. Recent advances in biology, particularly whole genome sequencing technologies and gene-editing tools, have enabled us to identify resistance-conferring genetic changes and perform genetic manipulations that can reverse evolved antibiotic resistance [5-7]. By using novel gene-editing tools such as CRISPR-Cas9 or engineered bacteriophages, it is now possible to edit bacterial genomes to modulate the antibiotic sensitivity of bacteria and also to design sequence-specific antimicrobials [8-10]. However, gene-editing tools are currently difficult to implement, given the practical and ethical problems with mutating bacterial genomes within an infected human patient. Instead, we designed antisense oligomers, which target the mRNA of bacterial resistance genes, preventing translation in a sequence-specific manner [11]. Briefly, phosphorodiamidate morpholino oligomers (PMOs) are synthetic nucleotide oligomers made from six-membered morpholine rings joined together by phosphorodiamidate linkages. Each morpholine ring has a natural nucleobase attached (see [12] for the PMO structure), and the oligomers are designed to bind complementary sequences in targeted mRNAs.
PMOs are thought to exert their effects through translation inhibition as a result of steric hindrance when targeting bacterial mRNA in close proximity to ribosome binding sequences [13]. Cell-penetrating peptides are conjugated to the phosphorodiamidate morpholino oligomers (PPMOs), which enhances uptake of the oligomer into the bacterial cell [14]. Unlike short RNA molecules, which are also being considered as therapeutic agents, the synthetic PMO backbone renders them resistant to hydrolysis by nucleases [12,15]. Previous reports have shown PPMOs to be bactericidal in vitro and in vivo in a number of gram-negative pathogens when targeting essential genes [11,16]. Here, we demonstrate that PPMOs inhibit several resistance-conferring genes, improving the efficacy of several distinct antibiotic classes.

Results

Active excretion of antibiotic molecules via efflux proteins, and reduced uptake of drug molecules through mutated or down-regulated membrane proteins (i.e., porins), are two of the common strategies that multidrug-resistant bacteria use to render antibiotics ineffective [3]. We, and others, have previously shown that several genes that encode membrane proteins in multidrug-resistant E. coli strains either accumulated point mutations or had changes in their regulation [5,17-20]. Thus, we hypothesized that deletion of such genes has the potential to increase antibiotic efficacy (Fig 1A). To test this idea, we selected five genes (acrB, emrB, marB, ompF, cmr) that encode membrane proteins in E. coli and deleted them in all 32 possible combinations in order to find the best target genes and quantify epistatic interactions between these gene deletions (S1 Fig) [3,21-24]. We then measured the minimum inhibitory concentrations (MICs) of these mutants against 27 different antibiotics (Fig 1B and 1C and S2 Fig). Deletion of acrB, alone or in combination with other genes, significantly increased the susceptibility of E. coli to several antibiotics, up to ~100-fold (Fig 1B and 1C and S2 Fig).

Fig 1. Systematic deletions of E. coli genes that encode membrane proteins demonstrate that the AcrAB-TolC efflux system is the major machinery responsible for intrinsic antibiotic resistance. (A) Physical deletion of a resistance gene in a bacterium may render the bacterium antibiotic sensitive. (B) Representative MIC determination using final optical density at 600 nm (OD600) values at 22 h of incubation with the wild type (WT) E. coli and gene deletion mutants in increasing doses of clindamycin. The left vertical dashed line represents the MIC for the acrB deletion mutant (magenta), while the right vertical dashed line represents the MIC for the remaining strains (WT and the cmr, emrB, marB, ompF deletion mutants). (C) Heat map showing the normalized mean MIC values for every strain, measured as in (B). MIC values were normalized using the wild type strain as the reference. All MIC measurements were run at least in duplicate and were found to be highly reproducible (S2B Fig).

However, deletions of the other genes (emrB, marB, ompF, cmr) did not significantly change antibiotic susceptibility (Fig 1B and 1C and S2 Fig). This suggests that these genes might be involved in acquired resistance when they are mutated or their regulation is altered, but they are not involved in the intrinsic antibiotic resistance of E. coli against the 27 compounds we tested. Also, based on these measurements, there were no epistatic interactions between these gene deletions.
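As a rough illustration of the MIC normalization used in the Fig 1C heat map (each strain's MIC divided by the wild-type MIC for the same antibiotic), the short Python sketch below computes fold-change values for a small strain-by-antibiotic matrix. All MIC numbers here are invented for illustration and are not the measured values.

```python
import numpy as np

# Hypothetical MIC values (ug/mL); rows are strains, columns are antibiotics.
strains = ["WT", "acrB_del", "emrB_del"]
antibiotics = ["clindamycin", "fusidic acid"]
mic = np.array([
    [128.0, 256.0],   # wild type reference
    [  4.0,  16.0],   # acrB deletion: strongly sensitized
    [128.0, 256.0],   # emrB deletion: susceptibility unchanged
])

# Normalize every antibiotic column to the wild-type MIC, as in Fig 1C.
normalized = mic / mic[strains.index("WT")]

for strain, row in zip(strains, normalized):
    print(strain, {ab: f"{v:g}x WT" for ab, v in zip(antibiotics, row)})
```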
The AcrAB-TolC efflux pump complex is among the best-characterized efflux pumps in E. coli and is composed of AcrB, the inner membrane antiporter, AcrA, the periplasmic adaptor protein, and TolC, the outer membrane channel (Fig 2A) [20,22,25-27]. Deleting acrB led to increased susceptibility (Fig 1B and 1C), so we deleted the two other genes (acrA and tolC) that together form the AcrAB-TolC efflux pump complex (Fig 2A) in E. coli to identify their contribution to the intrinsic antibiotic resistance of E. coli. Indeed, deletion of any of these three genes increased the antibiotic sensitivity of E. coli, and the loss of intrinsic antibiotic resistance due to gene deletions was reversed by plasmid complementation (S3 Fig). We designed three PPMOs to target the acrA, acrB, and tolC genes, respectively (Fig 2A-2C). PPMOs were designed as 11-mers targeting gene regions near the translation start site, with high sequence specificity and low homology to regions around other translation start sites in the E. coli genome (Fig 2C). We first tested the efficacy of these PPMOs by quantifying inhibitory antibiotic concentrations (S2 Table). All three PPMOs designed to target acrA, acrB, and tolC (hereafter called acrA-PPMO, acrB-PPMO, and tolC-PPMO) induced sensitivity to multiple antibiotics, while a control PPMO (control-PPMO), which has a base sequence with low homology to E. coli translation start sites, had no effect on antibiotic sensitivity (Fig 2E and S4 Fig). Fig 2E demonstrates an example of this sensitizing effect with clindamycin, a protein synthesis inhibitor that is not commonly used against E. coli infections because of its high MIC. Enhancing the efficacy of clindamycin against E. coli is a significant finding that could make this drug potentially effective against gram-negative bacteria. Strikingly, use of acrA-PPMO (Fig 2E, blue line) produced a ~16-fold increase in clindamycin sensitivity, comparable to the ~32-fold increase from deletion of the acrA gene (Fig 2E, cyan line). In almost every PPMO and antibiotic combination, the effects of acrB-PPMO and tolC-PPMO were less potent than the effect of acrA-PPMO (S4 Fig). Hence, we used acrA-PPMO for the rest of our experiments. The sensitization effect of acrA-PPMO varied between a 2- and 40-fold reduction in MIC, depending on the antibiotic compound (S4 Fig, S2 Table). It was surprising to find that, for certain antibiotic compounds, acrA-PPMO treatment did not adequately recapitulate the effect seen with the acrA deletion mutant (e.g., compare chloramphenicol and oxacillin, S4 Fig). In order to further compare the phenotypic effects of the acrA deletion mutant to acrA-PPMO silencing, we tested the sensitization effect of acrA-PPMO with ten antibiotic compounds (Fig 3B). Five of these compounds were selected because they were more potent against E. coli strains with the acrA gene deletion, and the remaining five antibiotic compounds were selected because their efficacies were not expected to change based on the acrA gene deletion data (S2 Fig). Fig 3A presents two example antibiotic dose-response curves demonstrating the effect of acrA-PPMO when used together with cefotaxime or meropenem. Susceptibility of E. coli to cefotaxime increased when acrA was deleted or silenced (Fig 3A, left), whereas susceptibility to meropenem remained the same as the wild type when acrA was deleted or silenced (Fig 3A, right). This pattern was consistent with the previous susceptibility data in all antibiotics tested (Fig 3B).
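The design constraints just described (11-mer PPMOs overlapping the translation start site, with low homology to regions around other start sites) can be illustrated with a toy screening sketch in Python. The example transcript, its coordinates, and the "other start regions" below are invented; a real design would scan the annotated E. coli genome.

```python
# Toy sketch of 11-mer antisense design near a translation start site.
# All sequences and coordinates are hypothetical.

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA-alphabet sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def candidate_11mers(mrna: str, start_idx: int, flank: int = 8):
    """Yield antisense 11-mers whose binding window overlaps the start codon."""
    for i in range(max(0, start_idx - flank), start_idx + 3):
        window = mrna[i:i + 11]
        if len(window) == 11:
            yield revcomp(window)  # the PMO base-pairs with the mRNA

def off_target_hits(oligo: str, other_start_regions) -> int:
    """Count exact matches of the oligo's target site near other start sites."""
    target = revcomp(oligo)
    return sum(target in region for region in other_start_regions)

# Invented example transcript (U written as T) with the start codon at index 10.
mrna = "GGAGGAATAAATGAACAAAAACAGAGGG"
others = ["AGGAGGATGAAGAAATTGCTC", "TTTAGGAGGATGCCTAATTTC"]

for oligo in candidate_11mers(mrna, start_idx=10):
    print(oligo, "off-target hits:", off_target_hits(oligo, others))
```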
In other words, the phenotypic effect of silencing the acrA gene with acrA-PPMO was indistinguishable from the phenotypic effect of acrA deletion (r = 0.94, p < 0.001, Pearson correlation test). The nucleotide sequence near the translational start site of acrA is conserved between several bacterial genera (Fig 2C). We therefore hypothesized that acrA-PPMO would sensitize organisms with high sequence homology and would have no effect on organisms with low sequence homology. To demonstrate this, we compared the efficacy of acrA-PPMO against E. coli, Klebsiella pneumoniae, Salmonella enterica, Acinetobacter baumannii, Pseudomonas aeruginosa, and Burkholderia cenocepacia, which share between 36% and 100% acrA sequence homology with the E. coli target sequence (Fig 2C). In E. coli, time-kill assays after 18 h of exposure to subinhibitory concentrations (1/4 MIC) of piperacillin-tazobactam and 10 μM acrA-PPMO resulted in a three-order-of-magnitude reduction in colony forming units (CFUs) from the starting inoculum, compared to a three-order-of-magnitude increase at the same concentration of antibiotic alone (Fig 3C). We demonstrated a similar sensitization effect of acrA-PPMO against K. pneumoniae and S. enterica, which share 100% sequence homology with E. coli (Fig 3C, top). Conversely, the acrA-PPMO had no activity against A. baumannii, P. aeruginosa, or B. cenocepacia, consistent with their lower (36%-45%) sequence homology (Fig 3C, bottom). Importantly, this demonstrated that the sensitization effect of acrA-PPMO was sequence-specific, and that our strategy has the potential to be used against other pathogens. We quantified AcrA protein expression in E. coli at increasing concentrations of acrA-PPMO (Fig 4A, top panel). AcrA protein levels decreased nearly 30-fold at acrA-PPMO concentrations greater than 3 μM (Fig 4A, middle panel). Control-PPMO had no effect on AcrA protein levels at 2 and 10 μM (S5 Fig). Residual expression of AcrA (~2% compared to untreated cells) was still detected even at the highest acrA-PPMO dose. We also verified this effect by measuring growth rates of E. coli at different subinhibitory clindamycin concentrations using increasing concentrations of acrA-PPMO. Growth of E. coli, incubated at constant clindamycin concentrations, gradually decreased as the acrA-PPMO concentration was increased (Fig 4A, bottom panel). This indicated that clindamycin susceptibility was correlated with AcrA expression in E. coli (Fig 4A). Clindamycin sensitivity of E. coli, even at the highest concentrations of acrA-PPMO, was still lower than the sensitivity of the E. coli mutant with the acrA deletion (Fig 4A, bottom panel), which is consistent with the residual AcrA expression even at the highest concentrations of acrA-PPMO (Fig 4A, top panel). To directly demonstrate that acrA-PPMO's inhibition of AcrA translation leads to reduced antibiotic efflux, we measured efflux of a DNA-binding dye, Hoechst 33342, which is also a substrate of the AcrAB-TolC complex [28]. The rate of fluorescence accumulation inside the bacterial cytoplasm reflects the difference between the concentration-dependent influx of the Hoechst dye and the AcrAB-TolC-related efflux of the dye. We found that E. coli cells treated with 2 and 10 μM of acrA-PPMO had significant increases in final fluorescence levels and fluorescence accumulation rates, comparable to the acrA deletion mutant (S6 Fig).
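Returning to the homology argument above (Fig 2C): a minimal sketch of the comparison is simply the percent identity between the 11-nt region bound by acrA-PPMO in E. coli and the corresponding region in another genus. The "diverged" sequence below is a placeholder standing in for a low-homology ortholog, not the real P. aeruginosa sequence.

```python
# Minimal percent-identity comparison; placeholder ortholog sequences.

def percent_identity(a: str, b: str) -> float:
    """Ungapped identity between two equal-length sequences."""
    assert len(a) == len(b)
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

ecoli = "GTTCATATGTA"  # acrA-PPMO sequence (see Methods)
orthologs = {
    "K. pneumoniae (identical)": "GTTCATATGTA",  # 100% -> sensitized
    "low-homology placeholder":  "GAACCGAAATC",  # ~36% -> no PPMO activity
}
for name, seq in orthologs.items():
    print(f"{name}: {percent_identity(ecoli, seq):.0f}% identity")
```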
This surrogate measure suggests that the efflux of antibiotic compounds is qualitatively similar to the efflux of the Hoechst dye; however, the magnitude of efflux will be specific to the chemical structure of each antibiotic. Finally, we tested the toxicity of acrA-PPMO against human lung epithelial cells using a cell viability assay (Fig 4B). Even at 19.2 μM, acrA-PPMO had no significant toxicity at the end of 4 d. These observations provide clear evidence that acrA-PPMO is a promising agent that works as an efficient antibiotic adjuvant by preventing AcrA translation and therefore preventing efflux in a sequence-specific way. One strategy often employed for the treatment of severe bacterial infections is the combined use of two or more antibiotics with different mechanisms of action [29]. In particular, the use of antibiotic pairs that display synergy is considered advantageous in clinical practice [30,31]. One risk of this approach is that several synergistic antibiotic pairs may promote the evolution of multidrug resistance if they have overlapping resistance mechanisms [5,32,33]. Conversely, several antibiotic pairs that are less likely to promote resistance cannot be used in combination due to antagonistic drug-drug interactions [5,30]. Therefore, strategies that could rescue the use of antagonistic drug combinations could be of significant benefit [34]. Successfully enhancing antibiotic susceptibility by blocking efflux activity has three potential outcomes in antibiotic combination therapies (Fig 5A). It could increase susceptibility to either antibiotic independently (Fig 5A, left and middle), or it could increase susceptibility to both drugs simultaneously (Fig 5A, right). We tested acrA-PPMO together with antibiotic pairs to see if we could improve their antimicrobial efficacy. Here, we demonstrate that sensitizing bacteria against antibiotics by targeting acrA with acrA-PPMO can increase sensitivity to both synergistic and antagonistic pairs. We quantified pairwise interactions between trimethoprim and sulfamethoxazole, and between trimethoprim and piperacillin-tazobactam, in the presence and absence of acrA-PPMO (Fig 5B). Trimethoprim and sulfamethoxazole are antifolate antibiotics that block the activity of dihydrofolate reductase and dihydropteroate synthase, respectively. Trimethoprim is often used together with sulfamethoxazole due to their synergistic interaction [30]. Conversely, using trimethoprim with piperacillin-tazobactam could be problematic, since these two drugs were previously reported to antagonize each other's activities [30]. We created two-dimensional gradients of trimethoprim-sulfamethoxazole (Fig 5B, left) or trimethoprim-piperacillin/tazobactam (Fig 5B, right) and determined MIC values for the wild-type E. coli in the presence (Fig 5B, blue lines) and absence of acrA-PPMO (Fig 5B, black lines) or with the acrA deletion (Fig 5B, cyan lines). We compared the enhancement of drug combinations by acrA-PPMO by integrating the area under the curve (AUC) of the resulting MIC curves (Fig 5C). We found that acrA-PPMO increases the efficacy of both synergistic and antagonistic pairs, by nearly 5-fold and 15-fold for the trimethoprim-sulfamethoxazole (Fig 5C, left) and trimethoprim-piperacillin/tazobactam combinations (Fig 5C, right), respectively. This observation clearly indicates that even though trimethoprim and piperacillin-tazobactam have antagonistic interactions, acrA-PPMO significantly (p < 0.001) increases the efficacy of the trimethoprim-piperacillin-tazobactam combination in E. coli.
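To illustrate the AUC comparison used for Fig 5C, the sketch below integrates two hypothetical MIC boundary curves (trimethoprim MIC as a function of the piperacillin-tazobactam dose) with the trapezoid rule and reports the fold-enhancement. All dose and MIC numbers are invented; only the method (comparing integrated MIC curves with and without acrA-PPMO) follows the text.

```python
# Compare areas under two MIC boundary curves; smaller area = more potent pair.
# All numbers below are invented for illustration.

def trapezoid(y, x):
    """Trapezoid-rule integral of y over x."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2
               for i in range(len(x) - 1))

pip_tazo = [0.0, 0.5, 1.0, 2.0, 4.0]            # ug/mL (hypothetical doses)
mic_tmp_no_ppmo = [0.5, 0.6, 0.8, 1.2, 2.0]     # antagonism: MIC creeps up
mic_tmp_ppmo    = [0.05, 0.06, 0.08, 0.12, 0.2] # same shape, rescaled to origin

fold = trapezoid(mic_tmp_no_ppmo, pip_tazo) / trapezoid(mic_tmp_ppmo, pip_tazo)
print(f"fold enhancement of the combination by acrA-PPMO: {fold:.1f}x")
```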
This finding has the potential to make the trimethoprim-piperacillin-tazobactam combination a promising candidate for treating infections, since trimethoprim and piperacillin-tazobactam have independent resistance mechanisms that make the emergence of cross-resistance less likely [5,7]. PPMO treatment did not change the shape of the MIC curves for either the trimethoprim-sulfamethoxazole (Fig 5B, left) or the trimethoprim-piperacillin/tazobactam (Fig 5B, right) combination, but rather rescaled the MIC curves towards the origin compared to the wild type (Fig 5B), as was previously demonstrated for other antibiotics by Chait et al. [34]. We conclude that the acrA-PPMO did not affect the drug interaction mechanisms; rather, the decreased efflux of both antibiotic compounds resulted in increased effective antibiotic concentrations inside bacterial cells.

Discussion

In this study, we demonstrate that we can strategically induce antibiotic hypersensitivity in pathogenic bacteria by targeting the genes that encode the AcrAB-TolC efflux system with PPMOs. Antibiotic molecules that traverse the bacterial membrane can therefore remain intracellular for longer periods, leading to increased antibiotic activity. This could also have been achieved by using efflux pump inhibitor molecules, such as phenyl-arginine-β-naphthylamide, that block AcrAB-TolC activity [35-39]. However, efflux pump inhibitors are known to have significant toxicities and currently have limited use as therapeutic agents [35,36]. The method we introduce for inducing antibiotic hypersensitivity with PPMOs does not exhibit cytotoxicity in the human cell line we tested and does not require editing the genome of the targeted bacteria in the human host. There are several advantages associated with increasing the antibiotic susceptibility of pathogens by using PPMOs. First, though this is an in vitro demonstration, it suggests the potential of using lower doses of existing antibiotics, which may lead to fewer adverse effects of those agents. Second, being able to sensitize a specific bacterial pathogen and inhibit its growth with lower antibiotic doses could have the potential to minimally perturb beneficial members of the healthy human microbiota. Third, by using PPMOs, we may have the opportunity to use several drugs for treating infections against which they are normally not effective, such as oxacillin against gram-negative bacteria (Fig 3B and S4 Fig). Finally, increasing antibiotic efficacy with PPMOs may change the way we typically design combinatorial antibiotic therapies: by using PPMOs to minimize the antibiotic doses necessary in antibiotic combinations, we may be able to use antibiotic pairs even if the two drugs somewhat dampen each other's inhibitory effects. However, the pharmacokinetics of PPMOs and potential antibiotic combinations should be considered when designing combination therapies for maximum antimicrobial activity [40]. In addition to the benefits described above, the sequence-specificity of PPMOs allows for the ability to target a single genus, or multiple genera, if the PMO target sequence is conserved (Fig 3C). As we have previously reported in Burkholderia, significant reductions in efficacy (> 8-fold) can occur with even single base mismatches in the PMO sequence [11]. Additionally, Tilley et al. demonstrated that four silent mutations were sufficient to render a targeted PMO ineffective in E. coli [41].
However, the relationship between mismatches, including their number and where they occur spatially in the oligomer sequence, and their impact on efficacy has not been thoroughly described and warrants future study. We conclude that targeting resistance genes with PPMOs is a plausible strategy to increase antibiotic susceptibility in pathogenic bacteria. Further studies are needed to extend our in vitro experiments to animal models of infection to bridge the gap between our in vitro experiments and translational studies in humans. Utilizing sequence-specific PPMOs that do not have antimicrobial activity when used alone has the potential advantage of avoiding the classic selection pressure exerted by traditional antimicrobials. Importantly, acquired bacterial resistance to PPMOs has thus far only been described in the context of PPMOs designed against essential genes and was found to be related to the peptide moiety and not the oligomer sequence [42]. Attachment of a different peptide to the same oligomer was able to rescue PPMO activity, indicating possible paths towards dealing with the development of resistance. Given the narrow pipeline for new antibiotics and the increasingly urgent worldwide problem of antibiotic resistance, innovative therapeutic approaches such as utilizing PPMOs could serve an important medical need in the future. Future studies will be conducted to systematically test the strategies we propose in this paper in preclinical in vivo models to bridge the gap between in vitro experiments and human studies.

Antibiotic Compounds

The antibiotics used in this study are: Ampicillin (A1593, Sigma-Aldrich), Carbenicillin

PPMOs Used for Targeting Efflux Genes

PMOs were synthesized as previously described [46]. The cell-penetrating peptide (RXR)4XB, where R is arginine, X is aminohexanoic acid, and B is beta-alanine, was synthesized using standard FMOC chemistry and purified to >95% purity at CPC Scientific (Sunnyvale, CA) and used without further purification. The peptide was conjugated to the nitrogen of a piperidine ring at the 5′-terminus of the PMO. First, a C-terminally reactive peptide-benzotriazolyl ester was prepared by dissolving the peptide acid with O-(Benzotriazol-1-yl)-N,N,N',N'-tetramethyluronium tetrafluoroborate (TBTU) in 1-methyl-2-pyrrolidinone (NMP). The concentration of the peptide was 50 mM. Diisopropylethylamine (DIEA) was added to the peptide solution at molar ratios of peptide acid:TBTU:DIEA of 1.0:1.5:1.5, respectively. Immediately after the addition of DIEA, the peptide solution was added to a DMSO solution containing the PMO (20 mM) at a 1:0.8 molar ratio. After stirring at 25°C for 3 h, the reaction was stopped by adding a 4-fold volumetric excess of water. 1 M H3PO4 was added to the crude conjugated PMO in 50 μL aliquots until pH 3 was reached. After stirring at 25°C for 30 min, the reaction was neutralized by adding 1 M Na2HPO4 in 100 μL aliquots until pH 7 was reached. The resulting solution was loaded onto a Source 30S (Sigma, St. Louis, MO) column. The unconjugated PMO and other reaction products were purified by elution with a 1.5 M guanidine-HCl solution in 20 mM NaH2PO4 with 25% MeCN in Milli-Q water at pH 6.5, from 0%-50% over 12 column volumes. Fractions were selected and pooled based on UV absorbance.
Pooled fractions were diluted by adding a 5-fold volumetric excess of water, and the conjugate/salt solution was then loaded onto an SPE column (Amberchrom CG300M, Dow Chemicals, MI), which was subsequently washed three times with two column volumes of water to remove salt. Finally, the (RXR)4XB-PMO conjugate was eluted off the SPE column with two column volumes of 50% MeCN and lyophilized. The final products were analyzed by matrix-assisted laser desorption ionization time-of-flight mass spectrometry and HPLC. The purities of the final products were >85%. The nucleotide sequence for the control-PPMO is ATCGTTGCATC, for acrA-PPMO is GTTCATATGTA, for acrB-PPMO is TAGGCATGTCT, and for tolC-PPMO is TTCATTTGCAT.

MIC Determinations

Master plates of each bacterial strain were prepared in a 96-well plate format using overnight cultures in ~15% glycerol (~5 x 10^8 CFU/mL) and stored at −80°C. The master plates were thawed prior to experiments and then used to inoculate the antibiotic plates with a pinner (VP Scientific, VP409), which transfers ~5 x 10^4 CFUs into each well containing ~200 μL of growth media. MIC values were determined using either end-point (final OD600) analysis or by calculating the area under the growth curve (AUC) [47]. Briefly, for end-point MIC determination (Figs 1 and 3 and S2 and S3 Figs), the 96-well plates were incubated for 22 h in a shaker operated at 37°C, and then the OD600 of each plate was measured using a plate reader (Infinite M200 Pro, Tecan). For each strain and antibiotic pair, the MIC value was defined as the lowest antibiotic concentration at which the final OD600 was below ~0.04 after background correction, which is slightly above the lower detection limit of our plate reader. Preliminary experiments were conducted with four replicates for clindamycin and fusidic acid. The remaining antibiotics were run twice with biological replicates. A Pearson correlation coefficient test was used to confirm the repeatability of the measurements, and the p-values for the significance of MIC reductions were calculated using the Wilcoxon rank sum test. For MIC determination using AUC values, plates were incubated under similar environmental conditions, but in an automated robotic system, so that the OD600 of growing cultures was recorded as a function of time (Fig 2D and 2E). Linear interpolations of the resulting growth curves (OD600 versus time, Fig 2D) were then integrated to calculate the AUC as a metric for growth, using a custom MATLAB code (r2016a, MathWorks). MIC values were defined as the concentration of antibiotic at which the AUC was reduced by at least 95% compared to the AUC without antibiotic. Although both methods gave qualitatively similar results, we used the AUC method whenever possible because it is more robust to experimental noise [47].
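The AUC-based MIC rule just described was implemented by the authors in custom MATLAB code; the following is a hypothetical Python re-implementation sketch. Integrating the raw OD600 readings with the trapezoid rule is equivalent to integrating their linear interpolation, and the MIC is taken as the lowest concentration with at least a 95% AUC reduction relative to the drug-free control. The growth-curve data are invented.

```python
# Hypothetical re-implementation of the AUC-based MIC rule described above.

def auc(times, od600):
    """Trapezoid integral of OD600 vs time (= integral of linear interpolation)."""
    return sum((times[i + 1] - times[i]) * (od600[i] + od600[i + 1]) / 2
               for i in range(len(times) - 1))

def mic_from_auc(curves, times, cutoff=0.95):
    """Lowest concentration whose AUC drops by >= cutoff vs the no-drug control."""
    reference = auc(times, curves[0.0])  # drug-free growth curve
    for conc in sorted(c for c in curves if c > 0):
        if auc(times, curves[conc]) <= (1 - cutoff) * reference:
            return conc
    return None  # MIC lies above the tested concentration range

times = [0, 4, 8, 12, 16, 22]  # hours (invented sampling)
curves = {                     # concentration (ug/mL) -> OD600 readings
    0.0: [0.01, 0.10, 0.40, 0.80, 1.00, 1.05],
    2.0: [0.01, 0.08, 0.35, 0.75, 0.95, 1.00],  # near-normal growth
    8.0: [0.01, 0.01, 0.02, 0.02, 0.03, 0.03],  # growth abolished
}
print("MIC:", mic_from_auc(curves, times), "ug/mL")  # -> 8.0
```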
Cells were then washed three times with cold PBS buffer (pH 7.4), and bacterial pellets were lysed in 1X Laemmli sample buffer (5 mL/OD). Equivalent amounts of the cell lysates (10 μL of the above sample) from each set were electrophoresed in a 4%-15% precast polyacrylamide gel (561081; BIO-RAD), and western blotting was performed following standard procedures. IR-labeled secondary antibodies (IRDye 800CW (926-32213) and IRDye 680RD (925-68072); LI-COR) were used for detection. The AcrA protein amount was quantified using an ODYSSEY infrared imaging system (LI-COR).

Efflux Inhibition Assay E. coli (BW25113) and the acrA gene deletion E. coli strain were grown overnight, and the final OD600 was adjusted to unity. The cells were then diluted 10³-fold in M9 minimal media (with 0.4% glucose and 0.2% amicase) and grown for 6 h at 37°C (100 rpm) in the presence of 0, 2, and 10 μM acrA-PPMO until the OD600 reached ~0.25.

Cloning and Expression of Rescue Plasmids Carrying Efflux Genes To reverse the antibiotic sensitivity phenotype of E. coli with efflux gene deletions, we cloned the acrA, acrB, and tolC genes into the arabinose-inducible pJMK001 plasmid (Addgene) and introduced them into the gene deletion strains. The efflux pump genes (acrA, acrB, and tolC) were PCR amplified from the wild-type (BW25113) E. coli strain using the following primer sets: acrA-forward: CATGCCATGGGGATGAACAAAAACAGAGGGTTTACG, acrA-reverse: AGCTTTGTTTAAACTTAAGACTTGGACTGTTCAGGCTG; acrB-forward: CATCAGTCATGATGCCTAATTTCTTTATCGATCG, acrB-reverse: AGCTTTGTTTAAACTCAATGATGATCGACAGTATG; tolC-forward: CATGCCATGGGGATGAAGAAATTGCTCCCCATTC, tolC-reverse: AGCTTTGTTTAAACTCAGTTACGGAAAGGGTTATGA. These fragments were cloned into pJMK001 after restriction digestion (NcoI and PmeI) followed by ligation. The plasmids were then transformed into E. coli strains carrying the relevant gene deletions. For expression of the efflux genes, bacterial cultures with and without plasmids were grown in the presence of 0.2% arabinose in M9 minimal medium.

Supporting Information S1 Fig. Single and combinatorial gene deletions were verified by PCR amplification of the chromosomal regions that span the genes of interest. The approximate PCR product sizes are 3 kb, 1.2 kb, 200 bp, 1.5 kb, and 1 kb for acrB (ΔA), cmr (ΔC), marB (ΔM), emrB (ΔE), and ompF (ΔO), respectively. As can be seen from the gel images, we also deleted marA (ΔR) and tolC (ΔT), alone and in combination with select genes. We did not perform further deletions in combination with marA or tolC, since deletion of marA did not have a significant effect on antibiotic resistance and deletion of tolC was phenotypically indistinguishable from deletion of acrB.

S2 Fig. (A) Heat map showing the normalized MIC values of every gene deletion strain for the 27 tested antibiotic compounds. All measurements were done at least in duplicate. Measurements with the wild-type strain were done with eight replicates; measurements with clindamycin and fusidic acid were done with four replicates. Statistically significant (p < 0.05) changes in MIC compared to the wild-type strain are depicted colorimetrically, with red representing decreases in efficacy, blue representing increases in efficacy, and white representing nonsignificant changes. The intensities of the blue and red colors indicate the magnitude of the efficacy changes. The actual MIC values can be found in S1 Table. (B) MIC measurements of the gene deletion strains across duplicates were highly reproducible.
The Pearson correlation coefficient between MIC values for the replicate measurements is 0.97 (p < 0.001), demonstrating the reproducibility of our measurements. The mean ratio between replicate 1 and replicate 2 MIC measurements is 1.07 ± 0.55 (mean ± standard deviation).
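To make the AUC-based MIC definition above concrete, here is a minimal sketch of the calculation; the original analysis used custom MATLAB code, so function and variable names here are illustrative assumptions, as are the synthetic growth curves.

```python
# Minimal sketch of the AUC-based MIC call described above.
# Assumes growth curves are OD600 readings at common time points; the
# original analysis used custom MATLAB code, so names here are illustrative.
import numpy as np

def auc(od600, hours):
    """Integrate a linearly interpolated growth curve (OD600 vs. time)."""
    return np.trapz(od600, hours)

def mic_from_auc(concentrations, curves, hours, reduction=0.95):
    """Return the lowest antibiotic concentration whose AUC is reduced by at
    least `reduction` relative to the drug-free control. `curves[c]` holds
    the OD600 time series measured at concentration c."""
    auc0 = auc(curves[0.0], hours)  # drug-free control
    for c in sorted(concentrations):
        if c == 0.0:
            continue
        if auc(curves[c], hours) <= (1.0 - reduction) * auc0:
            return c
    return None  # no tested concentration met the MIC criterion

# Example with synthetic data: growth is suppressed at 8 ug/mL and above.
hours = np.linspace(0, 22, 45)
curves = {
    0.0: 0.5 / (1 + np.exp(-(hours - 8))),   # logistic growth, control
    4.0: 0.4 / (1 + np.exp(-(hours - 10))),  # partial inhibition
    8.0: np.full_like(hours, 0.01),          # full inhibition
}
print(mic_from_auc(curves.keys(), curves, hours))  # -> 8.0
```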
Water-table and discharge changes associated with the 2016–2017 seismic sequence in central Italy: hydrogeological data and a conceptual model for fractured carbonate aquifers A seismic sequence in central Italy from August 2016 to January 2017 affected groundwater dynamics in fractured carbonate aquifers. Changes in spring discharge, water-table position, and streamflow were recorded for several months following nine Mw 5.0–6.5 seismic events. Data from 22 measurement sites, located within 100 km of the epicentral zones, were analyzed. The intensity of the induced changes was correlated with seismic magnitude and distance to the epicenters. The additional post-seismic discharge from rivers and springs was found to be higher than 9 m³/s, totaling more than 0.1 km³ of groundwater release over 6 months. This huge and unexpected contribution increased streamflow in narrow mountainous valleys to previously unmeasured peak values. Analogously to the L'Aquila 2009 post-earthquake phenomenon, these hydrogeological changes might reflect an increase of bulk hydraulic conductivity at the aquifer scale, which would increase hydraulic heads in the discharge zones and lower them in some recharge areas. The observed changes may also be partly due to other mechanisms, such as shaking and/or squeezing effects related to intense subsidence in the core of the affected area, where the effects were largest, or the breaching of hydraulic barriers.

Introduction The seismic sequence recorded in central Italy in 2016-2017 included nine main events (Table 1) with moment magnitude (Mw) ≥ 5.0 (four of which were Mw ≥ 5.5) occurring on four separate days (August 24th 2016, October 26th and 30th 2016, and January 18th 2017), as described in detail in Chiaraluce et al. (2017) and ISIDe Working Group (2016). The main events caused several observed changes in groundwater dynamics, including spring discharge variations, water-table anomalies, and river discharge alterations in different basins located up to 100 km from the epicentral zone. The fractured and locally fissured carbonate nature of the aquifers outcropping in the earthquake area favors a quick co-seismic response in terms of pore-pressure propagation; however, the observed sustained changes, which developed over several days after the main shocks, affected groundwater dynamics for several months after the seismic events. Hydrogeological changes caused by earthquakes have been reported historically. Instrumentally measured responses, however, have become available only in the last few decades. These responses include changes in water level (Leggette and Taylor 1935; Cooper et al. 1965; Roeloffs 1998; Brodsky et al. 2003; Roeloffs et al. 2003; Lachassagne et al. 2011; Shi et al. 2015), temperature (Mogi et al. 1989), chemical composition (Claesson et al. 2004; Skelton et al. 2014), streamflow (Montgomery and Manga 2003; Manga and Rowland 2009; Muir-Wood and King 1993; Rojstaczer et al. 1995), and spring attributes. Understanding the origin of these hydrological and hydrogeochemical phenomena may have significant impacts on the comprehension of the occurrence of liquefaction (Cox et al. 2012), water supply and quality (Gorokhovich and Fleeger 2007), underground storage (Wang et al. 2013), and pore-pressure-triggered seismicity. The effects of earthquakes on groundwater are commonly divided into “transient oscillations” (Cooper et al.
1965) and “sustained offsets”, which include abrupt rises or falls and sustained gradual rises lasting for several days after the shock (Roeloffs 1998; Yan et al. 2014). The most frequent consequences of earthquakes are spring and river discharge increases and water-table rises, which are generally attributed to four general classes of possible explanations: (1) co-seismic static strain increases pore pressure, which may contribute to changing permeability (e.g. Wakita 1975; Jonsson et al. 2003); (2) earthquake-related dynamic strains may increase permeability, permitting more rapid flow, which in fractured aquifers can be enhanced by fracture cleaning, eventually increasing discharge (e.g. Briggs 1991; Rojstaczer and Wolf 1992; Rojstaczer et al. 1995; Sato et al. 2000; Wang et al. 2004a; Curry et al. 1994; Amoruso et al. 2011); (3) breaching of hydraulic barriers or seals (e.g. Sibson 1994; Brodsky et al. 2003; Wang et al. 2004a); (4) the excess water discharged after the earthquake lies in the shallowest subsurface, where water is liberated by the consolidation or even liquefaction of near-surface unconsolidated materials (e.g. Manga 2001; Manga et al. 2003). Looking at the relationships between tectonic framework, hydrogeological setting, and earthquakes from a wider point of view, recent research has highlighted the role of fluids at the crustal scale during the seismic cycle. Doglioni et al. (2014), for instance, suggest that fluid flow rates differ during the different periods of the seismic cycle (inter-seismic, pre-seismic, co-seismic and post-seismic periods), also in connection with the tectonic style. In particular, they hypothesize that in extensional tectonic settings like central Italy, the wedge of crust above the brittle-ductile transition remains “suspended” while a dilated area forms during the inter-seismic period. This area would trap deep fluids, which, when the wedge of crust above the brittle-ductile transition starts to drop in the pre-seismic period, would be squeezed upward due to progressive fracture closing. Consequently, in the co-seismic period, aquifers can host changes in hydrochemistry (Barberio et al. 2017) and in water levels, independently of changes due to the previously listed local mechanisms, which can affect the hydrodynamics of the struck aquifers during and after the seismic sequence. Such a comprehensive tectonic model allows looking at changes induced in groundwater after earthquakes within a general framework of crustal deformation, suggesting a role of deep inputs in triggering the aforementioned well-known processes (pore-pressure changes, permeability increase, liquefaction/consolidation, etc.) acting at the aquifer scale. The effects of past earthquakes on groundwater in central Italy have been described in previous papers. Esposito et al. (2001) describe the effects of four earthquakes in the southern Apennines, including the 1980 Irpinia earthquake, which generated important hydrogeological changes as far as 200 km from the epicenter, including a significant increase of the Caposele spring flow. Amoruso et al. (2011) describe the hydrogeological changes in a fractured aquifer after the L'Aquila 2009 earthquake, inferring that those changes were probably connected with an increase of bulk hydraulic conductivity at the aquifer scale, mainly due to fracture cleaning, raising hydraulic heads in the discharge zones and correspondingly lowering them in the recharge areas (Adinolfi Falcone et al. 2012; Galassi et al. 2014).
The aim of this paper is to present an overview of the effects of the 2016-2017 seismic sequence on the dynamics of groundwater flow in the central Apennines, analyzing the extent of the impacted area and possible relations between tectonic environments, the geological-hydrogeological setting, and groundwater changes, and providing preliminary considerations on the possible causes of the observed phenomena.

Geological and hydrogeological framework The central Apennines (Italy) is a Meso-Cenozoic ENE-dipping thrust-and-fold belt mainly developed during the upper Miocene-Quaternary, composed of a pre-orogenic Triassic-Miocene sedimentary succession overlain by Miocene and Pliocene synorogenic sediments, resulting in a highly variable facies and thickness distribution. A Meso-Cenozoic carbonate platform domain extends in the SE part of the study area (Latium-Abruzzi Apennine, Fig. 1), consisting of a 5,000-m-thick sequence of limestone and subordinate dolomite of Upper Triassic to upper Miocene age (Brandano and Loche 2014 and references therein). On the western side of the area (Umbro-Marchean Apennine), a Lower Jurassic carbonate shelf unit is overlain by stratified pelagic sediments (middle Lias-lower Miocene), with an overall thickness of 2,500-3,000 m (Marchegiani et al. 1999). The Apennine orogenesis overthrusts the Umbria-Marche succession onto the Latium-Abruzzi platform along the main regional thrust fault system named the Olevano-Antrodoco line (Pierantoni et al. 2005 and references therein). From the upper Miocene to the lower Pliocene, thrust migration towards the east was coupled with the progressive development of fore-deeps in front of the migrating fold-and-thrust belt (Cipollari and Cosentino 1995). Since the upper Miocene-lower Pliocene, extensional faulting connected with the opening of the back-arc Tyrrhenian Basin has been dissecting the compressive structures (Boncio and Lavecchia 2000 and references therein), leading to the development of intermontane basins filled with thick continental sequences of Quaternary alluvial, detrital and lacustrine deposits (Cavinato and De Celles 1999). Some normal faults show evidence of Holocene activity, suggesting that they may be responsible for the seismic activity occurring in this sector of the Apennines (Cello et al. 1998), mainly confined within the upper part of the crust (<16 km; Lavecchia et al. 1994; Boschi et al. 1995). The study area has hosted some of the largest instrumental earthquakes of the last 40 years (Norcia 1979, Mw 5.9; Irpinia 1980, Mw 6.9; Gubbio 1984, Mw 5.2; Colfiorito 1997, Mw 5.9; L'Aquila 2009, Mw 6.3; Pantosti and Valensise 1990; Boncio and Lavecchia 2000; Deschamps et al. 2000; Chiarabba et al. 2009). The 2016-2017 sequence and its main shocks (Table 1) were generated by the Gorzano Mt.-Vettore Mt.-Bove Mt. faults (Galadini and Galli 2003; LMF and MVF in Fig. 2). The seismic crisis started with the August 24th 2016 event (Mw 6.0), followed by a further significant event on October 26th. The Vettore Mt. fault experienced a rupture with tectonic segments ~10 km long and a surface displacement of ~30 cm (Smeraglia et al. 2017). The October 30th 2016 event (Mw 6.5) was generated by rupture of the central zone of the fault with a normal movement. The focal mechanism, identical to the previous earthquakes, showed a strike of N155°, a WSW dip slip, and a dip angle of about 50° at depth (RCMT 2016). During the October 30th 2016 event, the entire Vettore Mt.-Bove Mt.
fault system gave rise to important surface faulting, reusing the pre-existing fault plane and re-displacing the fault segments previously broken. The acquisition of interferometric satellite data from ALOS-2 (JAXA, Japan Aerospace Exploration Agency) and further interferometric analyses (INGV Central Italy Earthquake Team 2016) provided an estimate of the co-seismic subsidence along the NW-SE component reaching a maximum of ~80 cm along the satellite line of sight (LOS). The maximum horizontal co-seismic movements are ~40 cm towards the NE and ~30 cm towards the SW, with a maximum vertical movement of about 20-40 cm when the October 30th event is also considered (INGV Working Group GPS 2016; INGV Central Italy Earthquake Team 2016). In the study area, the fractured carbonate ridges host the main aquifers, feeding several perennial springs (Nanni and Vivalda 2005; Martarelli et al. 2008; Mastrorillo et al. 2009; Mastrorillo and Petitta 2014) with a steady regime, located mostly at the external boundaries of the aquifers (Fig. 1). Groundwater flows in fissured to locally karstified carbonates. The Miocene-Pliocene synorogenic siliciclastic sediments surrounding the carbonate aquifers, as well as the Plio-Quaternary deposits filling the intermontane plains and the river valleys, act as aquitards. Widespread karst development, including endorheic basins, ensures high infiltration rates, from 500 to 700 mm/year in the Umbria-Marchean aquifers and up to 900 mm/year in the Latium-Abruzzi aquifers, collectively feeding a total discharge of about 300 m³/s (Boni et al. 1986, 2010). Fractures and karst conduits allow fast vertical flow in the vadose zone, while the large thickness of the saturated zone facilitates a steady flow towards the basal springs, which show outstandingly high and steady discharge (Petitta 2009; Amoruso et al. 2014; Fiorillo et al. 2015).

Methodology Co-seismic changes were examined at several observation sites located within the area affected by the earthquakes. Within the framework of continuous monitoring, the collected data refer to piezometric heads in wells and piezometers, spring discharges, and river hydrometric levels or discharges.

Collected data Altogether, 22 automatic records from continuous monitoring sites were collected, plus one measured manually. The monitoring site locations are shown in Fig. 2, distinguishing piezometric heads in monitoring wells (W1-W5), spring discharges (S1-S12), and hydrometric levels or discharges at river gauging stations (R1-R6). In addition, in February 2017, water-table levels in the porous local aquifer of the Norcia Plain (NP in Fig. 2) were recorded in 16 wells and compared to a piezometric map produced in 2011. The monitoring wells tap the basal aquifers at depths from 20 to 250 m. In some cases (W1, W2 and W3), the water-level recordings were occasionally affected by operations of the related water-supply systems. In detail, W1 and W2 were affected by operational changes in the tunnel drainage system of the nearby San Chiodo spring, whereas the disturbances prior to August 2016 in W3 were due to works for the construction of a new aqueduct. The discharge of the monitored rivers can be considered an indicator of hydrogeological changes at the basin scale, because a significant amount of groundwater directly feeds the rivers' baseflow through streambed springs.
The rivers with catchment areas of less than 100 km² (R1, R2, R3 and R5) also have a steady regime and are predominantly fed by baseflow, so runoff may be considered negligible. In the widest river basins (>1,000 km²; R4 and R6) the runoff contribution cannot be disregarded; it follows that the river discharge is more variable, despite the clearly dominant role of baseflow. Data have been recorded by regional hydrographic services, water supply companies, or directly by the research teams monitoring the earthquake zones. All available data from June 1st 2016 to February 28th 2017 were considered. Public service data are available on-line (ARPA Umbria 2017; Regione Marche 2017; Regione Umbria 2017).

Methods of data measurement Water-table depths have been recorded in wells and piezometers by downhole data loggers with atmospheric compensation. The horizontal piezometer W4, located in the underground National Institute for Nuclear Physics (INFN) laboratory, measures the hydraulic head (pressure in MPa) by a 3-channel 24-bit ADC (analog-to-digital converter; De Luca et al. 2016). To quantify local head changes during each seismic event, the original W4 pressure data were converted (approximately) to water-table elevation by multiplying the pressure (MPa) by 100 and adding the obtained height to the elevation of the top of the borehole (987 m a.s.l.). Spring discharges have been measured by automatic water-level sensors in weirs or in Venturi tubes and converted into discharge through the related rating curves. Only for the Torbidone spring was the discharge measured manually, with a portable flow meter, starting on November 11th 2016 at a frequency of about one measurement every 5 days. River gauging stations are equipped with water-height data loggers or automatic ultrasonic measurement sensors. Rating curves, where available, have been used in conjunction with stage measurements to determine the river discharges.

Methods of data processing Because of the different nature of the data sources, the time series from continuous monitoring have different measurement frequencies, with intervals ranging from 0.05 s to 24 h. To ensure uniformity, data have been aggregated and analyzed at the daily scale. The mean discharge of each data series was calculated for the time intervals before, between, and after the four major seismic events of Mw ≥ 5.5. The first interval corresponds to the period before the first main seismic event; the second, third and fourth identify the time intervals between the first-second, second-third and third-fourth main events; and the last corresponds to the period after the fourth main event. The discharge/level variation associated with each of the four main events was calculated as the difference between the daily value prior to and after each event. In cases where the changes were very abrupt and the difference between daily values was not appreciable, hourly values were considered. Where even the hourly difference was not evident, changes were considered “not significant” (NS). All calculated values are shown in Tables 2, 3 and 4.

Results Figures 3, 4, 5 and 6 show the time plots of the available data from the monitoring sites (locations in Fig. 2), summarized in Tables 2, 3 and 4, which refer respectively to water levels in piezometers (W1-W5), spring discharges (S1-S12), and river discharges or levels (R1-R6). Red bars in Figs. 3, 4, 5 and 6 indicate the four main Mw > 5.5 events.
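To make the processing steps described above concrete, the following is a minimal sketch of the daily aggregation, the co-seismic step estimation, and the approximate W4 pressure-to-head conversion. Variable and function names are illustrative assumptions; the paper does not specify its actual tooling.

```python
# Minimal sketch of the processing described above. Names and layout are
# assumptions for illustration; the paper does not specify its tooling.
import pandas as pd

EVENTS = pd.to_datetime(["2016-08-24", "2016-10-26", "2016-10-30", "2017-01-18"])

def to_daily(series: pd.Series) -> pd.Series:
    """Aggregate an irregular time series (datetime index) to daily means."""
    return series.resample("D").mean()

def coseismic_step(series: pd.Series, event: pd.Timestamp, unit: str = "D"):
    """Step associated with an event: value after minus value before.
    The paper falls back to hourly values (unit='h') when the daily
    difference is not appreciable."""
    s = series.resample(unit).mean()
    before = s.loc[:event - pd.Timedelta(1, unit)].dropna().iloc[-1]
    after = s.loc[event + pd.Timedelta(1, unit):].dropna().iloc[0]
    return after - before

def w4_head(pressure_mpa: float, top_elevation_masl: float = 987.0) -> float:
    """Approximate W4 conversion described above: 1 MPa of water column is
    taken as ~100 m of head, added to the borehole-top elevation."""
    return top_elevation_masl + 100.0 * pressure_mpa
```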
For the sake of simplicity, the August 24th 2016, October 26th 2016, October 30th 2016 and January 18th 2017 earthquakes will hereinafter be named the 1st, 2nd, 3rd and 4th events, respectively. See Fig. 2 for the locations of the epicenters and the main active fault systems. Both the 1st and the 2nd events were clearly perceived at the northern sites S1, S2 and S3 (Fig. 3a), with abrupt step-like variations. The 3rd event, the strongest of the sequence, only slightly affected the discharge at S1 and S3. Further south, the Forca Canapine and Pescara springs (S6 and S7, Fig. 3b) show a clear increase of discharge after the 1st event, more step-like for S6 but gradual and sustained for S7. These two springs are located at high elevation (1,350 and 850 m a.s.l.), and both suffered a sharp decrease after the 3rd event, with S6 drying up completely. The nearby Aso River (R2, Fig. 3b), monitored at the spring outlet, was also clearly influenced by the 1st event, with a step-like increase, and only marginally affected by the following events. Different effects were registered in the area closest to the 2nd and 3rd events, in the San Chiodo spring area (Fig. 4a) and in the Norcia area (Fig. 4b). In the San Chiodo area (Fig. 4a) a water supply system operates through tunnel drainage; periodic operational changes, i.e. the opening and closing of different drainage tunnels, produce clear variations in the water levels (black vertical dashed lines in Fig. 4a). The responses of two of the 14 available piezometers (W1 and W2), considered representative of the entire monitoring network, and the discharge of the Upper Nera River downstream at Castelsantangelo (R1), are shown. After the 1st event, the system reacted with a sharp step-like increase in the downgradient part of the aquifer (W1, 779 m a.s.l.) and a clear step-like decrease in the upper part of the aquifer (W2, 823 m a.s.l.). The same happened after the 2nd event, while after the 3rd event the water level in W1 increased gradually by nearly 7 m, whereas in W2 it first decreased and then steadily increased for several days, by up to 6 m. After the 3rd event, the discharge of the Upper Nera River (R1) doubled; this quick increase reached a steady state in December 2016. In the Norcia area (Fig. 4b) the Torbidone spring (S5), reactivated after the 3rd shock, shows a gradual increase in discharge up to 1.5 m³/s in about 3 months after the shock. The Sordo River (R3), receiving the Torbidone discharge, also reacted with a clear, gradual and sustained increase lasting several months, due also to additional direct groundwater inflow. In addition, the water table of the porous clastic aquifer of the Norcia Plain showed a hydraulic-head increase, reaching +15 m at the contact with the carbonate aquifer with respect to the water table recorded in 2010-2011. The western sites S4, S8 and W3 (Fig. 4c) registered abrupt positive step-like increases for the 1st, 2nd and 3rd events, with some differences among them. The Nera River at Torre Orsina (Fig. 4d) receives the entire inflow of the aforementioned flow systems, and of others not described here due to the absence of recordings; its discharge is clearly influenced by the 1st, 2nd and 3rd events, as shown by the abrupt step-like increases coincident with the first three red lines. Although no significant precipitation was recorded, the river discharge did not decrease for the 3 months following the 3rd event.
Overall, the Nera River experienced a total discharge increase of about 9 m³/s considering the first three main events, corresponding to a surplus of about 30% of its natural baseflow. The 4th event, the southernmost, was not perceptible at all farther north, but locally affected the southern monitoring points (Fig. 5) located very close to its epicenter (Fig. 2). All four springs (Fig. 5a) show a significant abrupt step-like increase of discharge after the 1st and the 3rd events, while the 2nd event did not modify the hydrographs at all. The 4th event, in spite of being so close, only slightly influenced the discharge, and only at station S9. Similar evidence was recorded at a horizontal borehole (W4, Fig. 6) located in the underground laboratories of the INFN in the Gran Sasso massif (Petitta and Tallini 2002; Amoruso et al. 2013). The time plot of the pressure-head variation shows sudden increases in hydraulic pressure (MPa), with a sharp rise of about 2 m recorded after the 1st and 3rd events, while no evidence was recorded after the 2nd and 4th events (Fig. 6). Further south, changes in the water table were clearly recorded in the monitoring well at Bussi sul Tirino (W5; Barberio et al. 2016; Fig. 5b). The magnitude of the water-level variation is about 20 cm for the first three events, with a gradual, sustained type of variation, while an abrupt step-like increase of up to 90 cm was observed for the closest, 4th event. The river monitoring sites R5 and R6 do not show any significant variation of the hydrometric level for the 1st, 2nd and 3rd events (Fig. 5b). However, after the 4th earthquake, both the Aterno River (R6) and, most clearly, the Tirino River (R5) show a sharp and sudden increase of the hydrometric levels, which then dropped rather quickly back to nearly pre-earthquake levels.

Discussion The hydrogeological changes caused by the 2016-2017 seismic sequence are of remarkable intensity, especially when compared to the relatively limited magnitude of the events; similar or larger hydrological responses are very rare (Mohr et al. 2015). The estimated amount of extra discharge drained by springs and rivers in the 6 months following August 24th 2016 exceeds 0.1 km³. This amount was obtained by looking at the discharge of the entire Nera Basin (R4, Nera at Torre Orsina) before and after the seismic sequence: the additional discharge was about 1.5 m³/s between the 1st and 3rd events (about 8 × 10⁶ m³), and about 9 m³/s after the 3rd event, which until the end of February 2017 corresponds to more than 0.095 km³. This estimate does not consider changes observed in other basins, which released a minor amount of discharge. Other documented examples of earthquake-induced groundwater release include (ordered by decreasing earthquake magnitude): the Maule Mw 8.8 earthquake in Chile (1.1 km³; Mohr et al. 2016), the Chi-Chi Mw 7.5 earthquake in Taiwan (0.7 km³; Wang et al. 2004b), and the Hebgen Lake (Mw 7.3, 0.5 km³), Borah Peak (Mw 6.9, 0.3 km³; Muir-Wood and King 1993; USGS 2017) and Loma Prieta (Mw 6.9, 0.01 km³; Rojstaczer et al. 1995) earthquakes in the USA; however, these are all earthquakes larger than the central Italy events. At a comparable magnitude, the Mw 6.0 South Napa earthquake (USA) produced extra water of about 0.001 km³.
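As a quick back-of-the-envelope check of the volumes quoted above (event dates from Table 1; small differences from the quoted figures reflect rounding of the discharge values):

```python
# Back-of-the-envelope check of the extra groundwater volumes quoted above.
from datetime import date

DAY = 86400  # seconds per day

# ~1.5 m^3/s of extra discharge between the 1st (Aug 24) and 3rd (Oct 30) events
v1 = 1.5 * (date(2016, 10, 30) - date(2016, 8, 24)).days * DAY
# ~9 m^3/s of extra discharge from the 3rd event to the end of February 2017
v2 = 9.0 * (date(2017, 2, 28) - date(2016, 10, 30)).days * DAY

print(f"{v1:.1e} m^3")                       # ~8.7e+06 m^3 (paper: about 8e6 m^3)
print(f"{v2 / 1e9:.3f} km^3")                # ~0.094 km^3 (paper: >0.095 km^3)
print(f"total ~{(v1 + v2) / 1e9:.2f} km^3")  # ~0.10 km^3
```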
The extent of the area affected by hydrogeological changes approaches 10,000 km². The recorded hydrological variations fall within the known fields of abrupt and sustained water-level changes in groundwater due to earthquakes (Fig. 7, modified from Wang and Chia 2008). Nevertheless, peculiar responses have been observed in the study area at sites very close to the epicenters (<10 km), where abrupt changes were frequently followed by a sustained increase with time, especially after the 3rd event (October 30th), which had the highest magnitude. This behaviour could be due to the fractured nature of the aquifer, where pressure changes can easily and quickly propagate along the effective-porosity network, reaching the boundaries of the aquifers and causing hydrodynamic changes independently of seismically induced stresses (Amoruso et al. 2011). Furthermore, several subsequent events repeatedly impacted the aquifers, as happened on October 30th 2016 for the third time in 2 months. Similar events have not been reported in the literature so far. In addition, the characteristics of the fractured aquifers impacted by the shocks may also have influenced the magnitude of the response. Differently from the Latium-Abruzzi carbonate aquifers, where a basal aquifer usually governs groundwater flow, in the Umbria-Marchean aquifers a network of interconnected faults plays a key role in determining the location of the dynamic groundwater divides, the seepage velocity, and the extent of the recharge area. It follows that groundwater flow can be easily influenced by seismic events; furthermore, during the 2016-2017 sequence, the study area suffered many earthquakes whose epicenters were located at different points along the central Apennine chain; therefore, numerous aquifers and springs were hit by repeated events, or by a single group of seismic events only. Generally, the intensity of the recorded effects decreased moving from the earthquake epicenters towards the distal areas. Figure 8 shows the spring discharge/water-level variation as a function of the distance between the monitored points and the epicenters of each seismic event. The result is a cloud of values of about 10-200% in the near field (less than 10 km from the epicenters), clearly decreasing with distance. Increases of up to +50% in discharge and +2 m in water level were recorded between 20 and 30 km away, while minor changes were observed at sites located between 60 and 100 km from the epicenters. Locally, discharge decreases were encountered at less than 10 km from the epicentral area, but only for the October 30th event, which was the most energetic one. The mechanism causing both sustained negative and positive hydraulic changes is frequently related to static stress modifications, evidenced by comparison between pre-seismic and post-seismic conditions (Jonsson et al. 2003; Montgomery and Manga 2003; Parvin et al. 2014; Mohr et al. 2016). The difference between transient dynamic oscillations (Cooper et al. 1965) and the offset-type water-level changes affecting groundwater flow in a more permanent way is also well described in Yan et al. (2014). Whereas the mechanism explaining the first type of effects is substantially accepted as being transient oscillations due to crustal dynamic poro-elastic deformation in an aquifer during the passage of seismic waves (Rexin et al. 1962; Kitagawa et al. 2006; Yan et al. 2014), the cause of the permanent offsets, corresponding to sustained changes, is still debated. In this case study, the distribution of the changes observed after each main event, and their positive or negative effects, are synthesized in Fig. 9. The impact on most of the springs is an increase of discharge of varying magnitude. Most of the recordings in wells and rivers clearly indicated a sudden and sharp increase simultaneous with the earthquakes, generally followed by a steady increase lasting for a few days after the shock and a subsequent smooth decrease. A few points show a decrease of water level or discharge: the disappearance of one tapped spring (S6) and a significant discharge decrease at spring S7, both located at high elevation (Fig. 3b). Similar decreases or drying up have been observed in other minor, non-monitored springs located in the same recharge area. Another monitored site experiencing a post-earthquake decrease in the water table is in the Upper Nera Valley, where only the highest-elevation monitoring well (W2, Fig. 5a) suffered a sharp lowering, quickly rebalanced in the following days. The decrease of discharge or water levels and the disappearance of springs have also been associated in the literature with proximity to the active faults. With respect to the water-level changes observed in the footwall area of the fault activated by the Chi-Chi earthquake, Chia et al. (2001) report that water-level rise was the predominant effect in most of the area, whereas water-level fall prevailed in a narrow zone adjacent to the fault trace. Amoruso et al. (2011) report that after the L'Aquila Mw 6.3 earthquake, the two highest springs, located on the trace of the activated fault, suddenly dried up after the main shock. The opposite phenomenon, the reactivation of dry springs or streams, as observed in this case for the Torbidone spring (S5), is also well known. After the 2014 Mw 6.0 South Napa earthquake, many streams and springs that were dry or nearly dry started to flow. The possible explanation given by those authors is enhanced permeability in the recharge areas.

Fig. 6 See Table 2 for site characteristics. The reported data are 1-min averaged. Monitoring periods: a from July 1st 2016 to September 16th 2016, and b from October 22nd 2016 to January 31st 2017. The red lines refer to the August 24th, October 30th and January 18th earthquakes (Table 1).

Fig. 7 Distribution of earthquake-triggered hydrogeological changes as a function of earthquake magnitude (horizontal axis) and epicentral distance (vertical axis). Also plotted are the contours (oblique lines) of constant seismic energy density (Wang and Manga 2010) and the domains where different types of co-seismic water-level responses occur (Wang and Chia 2008). The triangles represent the water-level changes in wells or rivers, the dots the discharge changes in springs or rivers. The four main events are distinguished by different colors: the August 24th 2016 (Mw = 6.0) event in purple, the October 26th 2016 (Mw = 5.9) event in red, the October 30th 2016 (Mw = 6.5) event in cyan, and the January 18th 2017 (Mw = 5.5) event in orange.

Fig. 8 Epicentral distance (horizontal axis) versus the discharge changes, expressed as a percentage of the pre-event mean discharge (left vertical axis), and the water-level changes (right vertical axis). The Torbidone spring (S5), whose ratio to the pre-event discharge would be infinite (the spring was dry prior to the earthquake), is marked by an asterisk.
Generally, after the co-seismic peak, discharge and water levels remained at higher values with respect to pre-seismic conditions. A similar mechanism was observed after the 1980 Irpinia earthquake at the Caposele spring (Esposito et al. 2001) and, more recently, in the Abruzzi region after the L'Aquila 2009 earthquake (Adinolfi Falcone et al. 2012). This last case has been explained by a double effect: (1) pore-pressure propagation due to dynamic stresses caused by the seismic waves, which determined the sudden peak, and (2) an increase of the bulk hydraulic conductivity of the fractured aquifer due to fracture cleaning triggered by the pore-pressure propagation, which mobilizes, by shaking, fine particles that block fracture throats (Amoruso et al. 2011). The recorded hydrological changes for the 2016-2017 earthquakes may be preliminarily attributed to pore-pressure propagation in the aquifers, followed by a sustained discharge increase attributable to fracture cleaning mobilizing fine particles from fractures (Wang and Manga 2010; Adinolfi Falcone et al. 2012), as reflected by the turbidity increase clearly recorded at several monitoring points (e.g. R1, S5, S11, S12). In this case, the superimposition of the post-seismic changes on a recession phase makes the post-earthquake evolution clearer. The dynamic stress due to pore-pressure propagation was clearly observed at the high-frequency monitoring site W4 (Fig. 10), where the short-term (30 s) changes in hydraulic head are the response to the seismic wave of August 24th, whose effects on groundwater levels ended after a few minutes. In this study, the sustained response of several monitored sites to the subsequent seismic events, considering the fractured nature of the struck aquifers, supports the hypothesis of fracture cleaning and the consequent increase of bulk hydraulic conductivity. In many cases, the discharge increase reached its peak a few days after one of the main events, followed by a decreasing trend due to the recession phase. Accordingly, in several cases, mainly located far from the epicenters, the subsequent earthquakes did not cause permanent changes but only a temporary discharge increase, favoring the fracture-cleaning model. The location of negative effects at high-elevation sites supports this hypothesis, as a water-table decrease is expected in the recharge zones of struck aquifers, while an increase in discharge is common in low-elevation zones. A more complex response was recorded in the core area, i.e. between the epicenters of the August 24th and October 30th events. In the Upper Nera River Valley (R1, W1 and W2), in the Norcia Plain (S5, R3), and partially on the eastern side of the Sibillini Mts. (S7, R2), the effects of both events are still evident in the following months. The discharge of springs and the water-table elevations remained very high with respect to pre-earthquake conditions, as testified by the main Nera gauging station (R4), and cannot be simply justified by seismic stresses. The new groundwater flow conditions are evidenced by the observed changes in the recession curve of the San Chiodo spring (R1, W1, Fig. 5a): the previous value of α = 0.003, calculated for the 2011-2015 period, strongly decreased to α = 0.001 after the earthquakes, testifying to a continuous additional contribution from the aquifer with time. Consequently, other factors may have additionally influenced the post-seismic response of the Sibillini Mts. area.
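The recession coefficient α quoted above is consistent with an exponential (Maillet-type) recession, Q(t) = Q₀ e^(−αt) with α in day⁻¹. The following sketch shows one simple way such a coefficient can be estimated from a daily discharge record; the paper does not state its exact fitting procedure, so the log-linear least-squares approach here is an assumption.

```python
# Illustrative estimation of a Maillet recession coefficient alpha, assuming
# Q(t) = Q0 * exp(-alpha * t) with alpha in 1/day. The authors do not state
# how alpha was computed; a log-linear least-squares fit is one common choice.
import numpy as np

def recession_alpha(q, t_days):
    """Negative slope of ln(Q) versus time over a recession limb."""
    slope, _intercept = np.polyfit(t_days, np.log(q), 1)
    return -slope

# Synthetic check: alpha = 0.003/day is recovered from a clean recession limb.
t = np.arange(120)
print(round(recession_alpha(2.0 * np.exp(-0.003 * t), t), 4))  # -> 0.003

# Interpretation of the reported change: the discharge-halving time ln(2)/alpha
# grows from ~231 days (alpha = 0.003) to ~693 days (alpha = 0.001).
print(round(np.log(2) / 0.003), round(np.log(2) / 0.001))  # -> 231 693
```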
The impressive increase of spring and river discharge observed in the Upper Nera River valley and the Norcia plain in the mid-term responses may be correlated with the subsidence induced by the faulting at the toe of Vettore Mt., which might have created an additional “squeezing effect” in the core of the Sibillini Mts. aquifer. Other possible mechanisms, not further investigated in this paper, could be related to shaking of the aquifer, tilting, settlement and uplift of the seismogenic fault-bounded structures, and the consequent dislocation of permeability thresholds induced by faulting. Furthermore, a possible decrease in aquifer storativity due to fracture-width reduction could be envisaged, which would have directly triggered the additional volume of groundwater released in the months following the shocks, modifying groundwater dynamic divides and groundwater flow directions. This hypothesis, still unverified, should be carefully evaluated in the future for forecasting purposes. The response of the monitored fractured aquifers to the earthquakes poses questions about the future management of groundwater resources in the central Apennines. Furthermore, the discharge increase recorded along the Nera River basin has raised flood risks for the urban areas struck by the earthquake, such as Castelsantangelo sul Nera and Norcia. As several of the monitored springs are tapped for drinking purposes, it is necessary to carefully evaluate the consequences and the mid- to long-term evolution of the spring discharge.

Conclusions The abrupt and sustained variations of spring discharge and groundwater levels, observed at a wide selection of water points in the carbonate fractured aquifers of central Italy during the 2016-2017 central Apennine seismic sequence, cannot be attributed to natural hydrological drivers and therefore have to be related to the earthquakes. The main findings obtained by analyzing data from more than 20 monitoring sites are the following:

1. The main shocks affected groundwater as far as 100 km from the epicenters, with instrumentally perceivable, sometimes dramatic, effects. A generalized decrease of the magnitude of the effects with distance was observed. The observed effects may be summarized as follows: (1) increases (rarely decreases) of heads measured in wells/piezometers, (2) positive (rarely negative) variations of spring discharge, (3) positive variations of river baseflow, (4) activation of historically intermittent springs, and (5) drying up of high-elevation springs. A quick oscillation, correlated with dynamic stresses, was observed at a few sites equipped with high-frequency recordings.

2. Within 6 months from August 24th 2016, more than 0.1 km³ of groundwater was additionally discharged in the area (about +25% of the natural discharge). Comparison with similar case studies highlights the relatively high amount of discharge increase with respect to the limited magnitude of the seismic events; this peculiarity could be explained by the succession of four main events of Mw > 5.5, which repeatedly struck the fractured aquifers.

3. The observed response at the regional scale is compatible with the cleaning of fractures and an overall mid-term increase of the bulk permeability due to the co-seismic pore-pressure propagation.

4. Finally, the dramatic rise of the water table and of discharge in the core area could be the result of a “squeezing effect” due to the co-seismic subsidence in the Sibillini Mts. area, which would act on the storativity of the aquifer.
A change in groundwater flow directions, due to tilting of the structure and the consequent dislocation of the permeability threshold induced by faulting, could be an additional factor. Based on the suggested conceptual models, two different evolutions of groundwater flow could possibly be faced in the near future, one being a more impulsive regime leading to higher seasonal variation. Nevertheless, changes in the total amount of recharge, and consequently of discharge, are not expected, because a possible storativity reduction does not influence the infiltration from rainfall.
First Search for Bosonic Superweakly Interacting Massive Particles with Masses up to 1 MeV/c² with GERDA We present the first search for bosonic superweakly interacting massive particles (super-WIMPs) as keV-scale dark matter candidates performed with the GERDA experiment. GERDA is a neutrinoless double-β decay experiment which operates high-purity germanium detectors enriched in 76Ge in an ultralow background environment at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN in Italy. Searches were performed for pseudoscalar and vector particles in the mass region from 60 keV/c² to 1 MeV/c². No evidence for a dark matter signal was observed, and the most stringent constraints on the couplings of super-WIMPs with masses above 120 keV/c² have been set. As an example, at a mass of 150 keV/c² the most stringent direct limits on the dimensionless couplings of axionlike particles and dark photons to electrons, g_ae < 3 × 10⁻¹² and α′/α < 6.5 × 10⁻²⁴ at 90% credible interval, respectively, were obtained. DOI: 10.1103/PhysRevLett.125.011801

The evidence for the existence of nonbaryonic dark matter (DM) in our Universe is overwhelming. In particular, recent measurements of temperature fluctuations in the cosmic microwave background radiation yield a 26.4% contribution of DM to the overall energy density in the ΛCDM model [1]. However, all evidence is gravitational in nature, and the composition of this invisible form of matter is not known. Theoretical models for particle DM yield candidates with a wide range of masses and scattering cross sections with standard model particles [2][3][4]. Among these, so-called bosonic superweakly interacting massive particles (super-WIMPs) with masses at the keV scale and ultraweak couplings to the standard model can be cosmologically viable and produce the required relic abundance [5,6]. Direct DM detection experiments, as well as experiments built to observe neutrinoless double-β (0νββ) decay, can search for pseudoscalar [also known as axionlike particles (ALPs)] and vector (also known as dark photons) super-WIMPs via their absorption in detector materials in processes analogous to the photoelectric effect (known as the axioelectric effect in the case of axions). The ALP or dark photon energy is transferred to an electron, which deposits its energy in the detector. The expected signature is a full-absorption peak in the energy spectrum at an energy corresponding to the mass of the particle, given that these DM candidates have very small kinetic energies [7]. For ALPs the coupling to electrons is parametrized via the dimensionless coupling constant g_ae [8], while for dark photons a kinetic mixing of strength κ [9] is introduced in analogy to the electromagnetic fine structure constant α, such that α′ = (eκ)²/4π. Here we describe a search for super-WIMP absorption in the germanium detectors operated by the GERDA Collaboration, extending for the first time the mass region to 1 MeV/c².
At masses larger than twice the electron mass, vector particles can decay into (e⁺, e⁻) pairs and their lifetime would be too short to account for the DM [6]. The primary goal of GERDA is to search for the 0νββ decay of 76Ge, deploying high-purity germanium (HPGe) detectors enriched up to 87% in 76Ge. The experiment is located underground at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, Italy, at a depth of about 3500 m water equivalent. The HPGe detector array is made of 7 enriched coaxial and 30 broad energy germanium (BEGe) diodes, with average masses of 2.2 kg and 667 g, respectively, leading to a higher full-absorption efficiency for the larger coaxial detectors. It is operated inside a 64 m³ liquid argon (LAr) cryostat, which provides cooling and a high-purity, active shield against background radiation. The cryostat sits inside a water tank instrumented with photomultiplier tubes (PMTs) to detect Cherenkov light from passing muons, which reduces the muon-induced background to negligible levels. A detailed description of the experiment can be found in Ref. [19], while the most recent 0νββ decay results are presented in Ref. [20]. Because of its ultralow background level [21] and excellent energy resolution [~3.6 and ~3.0 keV full width at half maximum (FWHM) for coaxial and BEGe detectors at Q_ββ = 2039 keV, respectively], the GERDA experiment is well suited to search for other rare interactions, in particular for peaklike signatures as expected from bosonic super-WIMPs. Here we assume that super-WIMPs constitute all of the DM in our Galaxy, with a local density of 0.3 GeV/cm³ [22]. The absorption rates for dark photons and ALPs in an Earth-bound detector can be expressed as [5]

R_V ≈ (4 × 10²³ / A) (α′/α) (keV / m_v) (σ_pe / b) kg⁻¹ d⁻¹    (1)

and

R_A ≈ (1.2 × 10¹⁹ / A) g_ae² (m_a / keV) (σ_pe / b) kg⁻¹ d⁻¹,    (2)

respectively. Here g_ae and α′/α are the dimensionless coupling constants, A is the atomic mass of the absorber, σ_pe is the photoelectric cross section on the target material (germanium), and m_v and m_a are the DM particle masses. The linear versus inverse proportionality of the rate to the particle mass is due to the fact that rates scale as flux times cross section, where the cross section is proportional to m_a² and α′/α in the pseudoscalar and vector boson cases, respectively [5]. We performed the search for super-WIMPs in the 200 keV/c²–1 MeV/c² mass range on data collected between December 2015 and April 2018, corresponding to 58.9 kg yr of exposure. The energy threshold of the HPGe detectors was lowered in October 2017, enabling a search in the additional mass range of 60–200 keV/c², corresponding to 14.6 kg yr of exposure accumulated until April 2018. The individual exposures for BEGe and coaxial detectors above (below) 200 keV are 30.8 (7.7) and 28.1 (6.9) kg yr, respectively. The lower energy bounds of our analysis were motivated by the energy thresholds of the Ge detectors, the shape of the background spectrum (dominated by 39Ar decays), and the size of the fit window, as explained in the following. In GERDA the energy reconstruction of events is performed through digital pulse processing [23]. Events of nonphysical origin, such as discharges, are rejected by a set of selection criteria based on waveform parameters (i.e., baseline, leading edge, and decay tail). The efficiency of these cuts for accepting signal events was estimated at >98.7%.
Since super-WIMPs would interact only once in a HPGe diode, events tagged in coincidence with the muon or LAr vetoes, or observed in more than one germanium detector, were rejected as background interactions. We use the same set of cuts as in the main GERDA analysis for the 0νββ decay [20], with the exception of the pulse shape discrimination cut, which had been tailored to the high-energy 0νββ decay search. The muon and LAr vetoes accept signal events with efficiencies of 99.9% [24] and 97.5% [20], respectively. The total efficiency to observe a super-WIMP absorption in the HPGe diodes was determined as

ε_tot = ε_cuts (1/E) Σ_{i=1..N_det} E_i f_av,i ε_fep,i,    (3)

where the efficiency of the event selection criteria ε_cuts and the exposure E of each dataset were taken into account. The index i runs over the individual detectors of the dataset, containing N_det detectors; E_i is the exposure, f_av,i the active mass fraction, and ε_fep,i the efficiency for detection of the full-energy absorption of an electron emitted in the interaction. With the exception of ε_fep,i, all parameters were identical to those in the analysis presented in Ref. [20]. The full-energy absorption efficiency ε_fep,i accounts for partial energy losses, for example in a detector's dead layer. This efficiency was estimated for each detector at energies between 60 and 1000 keV with a Monte Carlo simulation of uniformly distributed electrons in the active volume of the detector, using the MaGe framework [25]. Table I shows the average full-energy absorption detection efficiencies ε_fep and the total efficiencies ε_tot at the lower and upper boundaries of the search region. At 60 keV, the full-energy absorption efficiency was estimated at 99.5% for all detectors, while at 1000 keV it is 95.1% and 96.2% on average for BEGe and coaxial detectors, respectively. The energy dependence of the efficiency is caused by the photoabsorption cross section and the different sizes of the germanium diodes. The events which survived all selection criteria (with total efficiencies between 85.7% and 81.4%; see Table I) are shown in Fig. 1 for the coaxial and BEGe detector datasets. The expected signal from super-WIMPs was modeled as a Gaussian peak broadened by the energy resolution of the HPGe detectors. To estimate the potential signal from these particles, we performed a binned Bayesian fit of the signal plus a background model to the data (with 1 keV binning; the systematic uncertainties on the energy scale are estimated at 0.2 keV). The fit was performed within a window 24 keV in width, centered on the energy corresponding to the hypothetical mass of the particle and sliding in 1 keV steps to examine each mass value. The total number of counts from signal and background was modeled as

μ(E) = N₀ G₀(E; E₀, σ₀) + N_γ G_γ(E; E_γ, σ_γ) + F(E),    (4)

where the Gaussian function G₀ models the peak signal of super-WIMPs at a fixed energy E₀, corresponding to their mass, and the Gaussian G_γ models a background γ line with energy E_γ, listed in Table II, in case one is found within the sliding fit window. For more than one background γ line, Eq. (4) is modified accordingly to model all the peaks.

FIG. 1. The energy spectra of the BEGe and coaxial datasets, normalized by exposure. Only events with energies up to 1 MeV were considered in the analysis. The coaxial dataset shows a significantly higher event rate (mainly from 39Ar decays) at energies below 500 keV due to the larger surface area of the signal read-out electrodes [20]. The dashed lines indicate the positions of the main known background γ lines, also listed in Table II.
N₀ and N_γ are the counts in the fitted signal and background peaks, respectively. The effective energy resolutions σ₀ and σ_γ of the detectors from the combined spectra are fixed to the values obtained from the regularly acquired calibration data, with systematic uncertainties of around 0.1 keV [20]. Finally, the polynomial function F(E) describes the continuous background and was chosen as a first- and second-order polynomial for energies above and below 120 keV, respectively. The higher-order polynomial at lower energies is motivated by the curvature of the 39Ar β spectrum; see Fig. 1. At other energies, the spectrum has an approximately linear shape, and thus a first-order polynomial was judged sufficient. The Bayesian fit was performed with the Bayesian analysis toolkit BAT [26], using the Markov chain Monte Carlo technique [27] to compute the marginalized posterior probability density function (PDF) of the signal rate given the energy values of the data E, P(R_S|E), where the signal rate R_S is the number of counts normalized by exposure. The probability for the signal count rate, P(R_S, θ|E, M), given data E and a model M, is described by Bayes' theorem as

P(R_S, θ|E, M) = P(E|R_S, θ, M) π(R_S) π(θ) / ∫∫ P(E|R_S, θ, M) π(R_S) π(θ) dθ dR_S.    (5)

The denominator defines the overall probability of obtaining the observed data given a hypothetical signal. The numerator includes the prior probabilities π for the signal count rate R_S and for the nuisance parameters θ (e.g., the background shape), estimated before performing the fit. For θ, flat priors were adopted, bounded generously according to a preliminary fit with the MINUIT algorithm [28]. For the signal count rate R_S, a uniform (i.e., constant over the defined range) prior probability was constructed to be positive, with the upper bound defined by the total number of events in the signal region plus 10 times the expected Poisson fluctuations. The conditional probability P(E|R_S, θ, M) is estimated according to the super-WIMP interaction model given by Eq. (4) and Poisson fluctuations in the data. The reported results were obtained from the combined fit of the BEGe and coaxial datasets. First, the BEGe dataset was fit using a flat prior for the signal count rate R_S, and the obtained posterior was used as the prior for the fit of the coaxial dataset. The results of the latter were then employed to evaluate the corresponding coupling constants of the super-WIMPs. Detection of a signal is ruled out when the significance of the best-fit value for the count rate is less than 5σ, estimated as half of the 68% quantile of the posterior PDF. Additionally, if a fitted signal is in close proximity (within 5σ of the energy resolution) to a known background γ line, an upper limit was set irrespective of the mode, as uncertainties in the background rate do not allow us to reliably claim an excess signal above γ lines. An example of the fit using the model described by Eq. (4) for two different background functions F(E) is shown in Fig. 2. The obtained posterior PDFs do not show evidence for a signal in the energy range of the analysis. We thus set 90% credible interval (C.I.) upper limits on the signal count rate, corresponding to the 90% quantile of the posterior PDF P(R_S|E), accounting for the detection efficiencies according to Eq. (3). The 90% C.I. limits on the signal rate R_S were converted into upper limits on the coupling strengths using Eqs. (1) and (2). The results are presented in Fig. 3.
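To illustrate the final conversion step, the following minimal sketch inverts the rate expressions of Eqs. (1) and (2) for germanium. The rate limit and cross-section inputs below are placeholder values chosen for illustration, not GERDA's actual numbers.

```python
# Illustrative inversion of Eqs. (1) and (2): turn an upper limit on the
# signal rate R_S (counts kg^-1 day^-1) into coupling limits for germanium.
# The numerical inputs below are placeholders, not the actual GERDA values.
A_GE = 72.6  # mean atomic mass of germanium

def gae_limit(rate, m_a_kev, sigma_pe_b):
    """Eq. (2): R = (1.2e19 / A) * g_ae**2 * (m_a / keV) * (sigma_pe / b)."""
    return (rate * A_GE / (1.2e19 * m_a_kev * sigma_pe_b)) ** 0.5

def alpha_ratio_limit(rate, m_v_kev, sigma_pe_b):
    """Eq. (1): R = (4e23 / A) * (alpha'/alpha) * (keV / m_v) * (sigma_pe / b)."""
    return rate * A_GE * m_v_kev / (4e23 * sigma_pe_b)

# Example: a hypothetical 90% C.I. rate limit of 0.1 counts/(kg day) at a mass
# of 150 keV/c^2, with sigma_pe of order 10 b for germanium at that energy.
print(f"g_ae        < {gae_limit(0.1, 150.0, 10.0):.1e}")
print(f"alpha'/alpha < {alpha_ratio_limit(0.1, 150.0, 10.0):.1e}")
```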
We compare these to direct detection limits from CDEX [17], EDELWEISS-III [16], LUX [12], the Majorana Demonstrator [14], PandaX-II [11], SuperCDMS [15], XENON100 [13], and XMASS [10], as well as to indirect limits from horizontal branch and red giant stars [5]. Above 120 keV/c², indirect α′/α limits from decays of vectorlike particles into three photons (V → 3γ) are significantly lower (ranging from 10^-12 at masses of 100 keV/c² to 10^-16 at 700 keV/c²) than the available direct limits (not shown) [6]. The improvement in sensitivity with respect to other crystal-based experiments is due to the much larger exposure in GERDA and the lower background rate over all of the search region. The weakening of our upper limits with increasing mass is primarily due to the steep decrease of the photoelectric cross section, from about 45 b at 100 keV to 0.085 b at 1 MeV, which overrules both the linear and inverse mass dependence in Eqs. (1) and (2). The fluctuations in the upper limit curves are due to background fluctuations, with prominent peaks coming from the known γ lines listed in Table II. To summarize, in this Letter we demonstrated the capability of GERDA to search for other rare events besides the 0νββ decay of 76Ge. We performed a search for keV-scale DM in the form of bosonic super-WIMPs based on data with exposures of 58.9 and 14.6 kg yr in the mass ranges of 200 keV/c²-1 MeV/c² and 60-200 keV/c², respectively. Upper limits on the coupling strengths g_ae and α′/α were obtained from a Bayesian fit of a background model and a potential peaklike signal to the measured data. Our limit is compatible with other direct searches in the mass range (60-120 keV/c²) where the strongest limits were obtained by xenon-based DM experiments, due to their higher exposures and lower background rates in this low-energy region. Our search probes for the first time the mass region up to 1 MeV/c² and sets the best direct constraints on the couplings of super-WIMPs over a large mass range, from 120 keV/c² to 1 MeV/c². As an example, at a mass of 150 keV/c², the most stringent direct limits on the dimensionless couplings of axionlike particles and dark photons to electrons, g_ae < 3×10^-12 and α′/α < 6.5×10^-24 (at 90% C.I.), respectively, were established. The limits are affected by the known background γ lines, listed in Table II, due to the higher background rate at these energies. The sensitivity to new physics is expected to improve in the near future with the upcoming LEGEND-200 experiment. The experimental program aims to decrease the background rate and increase the number of HPGe detectors operated in an upgraded GERDA infrastructure at LNGS [29].

FIG. 2. Best fit (red lines) and 68% uncertainty band (yellow bands) from marginalized posterior PDFs of the model parameters, assuming a hypothetical signal at E_0 in the BEGe dataset. Top: fit of a signal assumed at E_0 = 520 keV; the excess is at a level of 2.6σ. A first-order polynomial is used for the continuous background and a Gaussian for the background γ line due to the decay of 85Kr. Bottom: fit of a signal assumed at E_0 = 87 keV, using a second-order polynomial for the continuous background. Only part of the data was acquired with a lower energy threshold, resulting in a lower exposure for data below 200 keV/c² and causing the steplike feature around this energy.
FIG. 3. Results from other experiments (see text) are also shown, together with indirect constraints from anomalous energy losses in horizontal branch (HB) and red giant (RG) stars (we refer to Ref. [6] for details).
Construction of the Course System of the Mechanical and Electronic Specialty for Applied Undergraduate Education -- Teaching Direction of Industrial Robots

Under the background of the "Internet + Made in China 2025" production-education integration innovation promotion plan, and according to the professional distinctiveness and social needs of applied undergraduate education, this paper takes the industrial robot teaching direction in the field of mechanical electronics as an example. It proposes an innovative curriculum system built from three course groups: the teaching of professional basic courses as the foundation, course teaching in the professional direction as the focus, and professional quality together with innovation and entrepreneurship education courses as the fundamentals. Every kind of course includes in-class courses (the first classroom) and extracurricular courses (the second classroom); the two classrooms, together with the other courses, enrich the entire teaching system. Through a series of innovations in the construction and management model of the industrial robot laboratory, students can genuinely experience corporate life and take part in a wide range of second-classroom activities, and excellent applied talents can be cultivated.

Introduction

With industrial robots developing in greater depth and breadth and the level of robot intelligence improving, the application scope of robots is constantly expanding. There is therefore an urgent need to cultivate professional talents in the field of industrial robots, in order to respond to the large demand for applied technology talents driven by technological progress and industrial upgrading in China's manufacturing industry. Doing so genuinely enhances the ability of local colleges and universities to serve regional economic and social development and the technological advancement of industry and enterprises. We must train and create a team of manufacturing talents that is sufficient in number, reasonable in structure, high in quality, and dynamic. Studying the construction of an applied undergraduate course system for the industrial robot direction is thus of great significance and value. The current curriculum system has two problems, which motivate the following two goals:

Improve students' learning autonomy and create a strong professional atmosphere: The first classroom, including experimental courses, is mainly teacher-led; teachers organize and monitor the entire teaching process. This may lead students to become overly dependent on teachers and to lack autonomy in learning and the self-directed development of practical skills. In the second classroom, students choose to participate in activities that interest them, so that a strong academic research atmosphere forms after class and professional knowledge continues to be absorbed.

Make full use of laboratory equipment resources to enhance the substance of second-classroom activities: The openness of experimental teaching resources for engineering majors is not high enough, the idle rate of experimental equipment is high, and teaching methods still focus on classroom operations; experimental teaching and extracurricular activities lack organic integration. In the second-classroom activities, we will develop professional and professional-literacy training modules and carry out series of activities aimed at cultivating students' practical hands-on skills and their understanding and application of the theoretical knowledge they have learned, extending the teaching of the first classroom.
Improving the transformation efficiency of students' scientific and technological innovation results enables students of the mechanical and electronic specialty to publish research papers and gain other research experience, and to apply for invention patents, utility model patents, software copyrights, and other intellectual property rights. This raises the knowledge content of the specialty's graduates and provides supporting material for professional evaluation. Under the "Internet + Made in China 2025" production-education integration promotion plan, the construction of an application-oriented undergraduate curriculum system can speed up the development of applied technology talents who can adapt to the technological progress and industrial upgrading of the smart manufacturing industry. To meet the large demand for applied technology talents driven by technological progress and industrial upgrading in China's manufacturing industry, the construction of the curriculum system helps to train talents and facilitates enterprise transformation [1].

Curriculum system construction

Students in this major mainly study the basic theory of machinery, electronics, and automatic control, together with specific professional knowledge, and receive the basic training of modern engineers. They thereby acquire basic capabilities for the application and development of machinery, electronics, electromechanical integration, and the related technologies of industrial robots: the necessary knowledge of mechanical design; electrical and electronic technology; PLC programming and application; and industrial robot technology and its applications. They gain preliminary capabilities in the design, manufacture, application, and development of machinery, electronic products and systems, and electromechanical integration (industrial robots). According to the training objectives of the mechanical and electrical specialties, we have established three systems, consisting of professional courses, professional practice courses, and professional quality courses, and integrated the second classroom into these three major systems [2], as shown in Table 1.

Table 1 (fragment): professional courses pair the first classroom with a second classroom of student competitions and projects; professional quality courses pair the first classroom (professional quality courses) with a second classroom of enterprise practice and laboratory management.

The construction of the first course group of the three major systems is shown in Table 2. The industrial robot enterprise courses highlight the professional direction: in the mechanical and electrical engineering field, they combine enterprise projects to open multiple corporate courses in the industrial robot direction and cultivate students' professional orientation toward industrial robots. The three creative courses focus on cultivating students' innovative and entrepreneurial skills. The professional quality courses cultivate students with good professional ethics, a positive professional attitude, and correct professional values, as well as a good spirit of teamwork, correct professional awareness, and correct professional behavior habits [3].
Through the second classroom of the industrial robot enterprise courses, a multi-certificate teaching and training model for students is realized: on the basis of obtaining academic qualifications and degree certificates, students can, through the second-classroom training on industrial robots, obtain professional qualification certificates for "robot operation" or industrial robot certificates (ABB, FANUC certificates), thereby being encouraged to engage actively with the professional courses [4], as shown in Fig. 1.

Fig. 1. The second classroom of the industrial robot enterprise courses.

Through the second classroom of the three creative courses, students' three-creation coursework is enriched. In combination with these courses, the second classroom was developed step by step, and the practical teaching system of mechanical electronics was enriched through interest groups, professional competitions, and other activities, as shown in Fig. 2.

Fig. 2. The second classroom of the three creative courses.

Through the second classroom of the professional quality courses, students' professional quality is improved. The quality courses develop students' professional qualities, which are currently the most lacking element for applied undergraduate college students. In helping students plan their life blueprint across the four years of university and guiding them toward innovation and entrepreneurship, professional quality courses play an important role. Enterprise practice exercises students' physical and mental accomplishment, and self-management of the laboratory improves the utilization rate of the laboratory and the quality of experimental teaching while cultivating students' self-awareness [5], as shown in Fig. 3.

Fig. 3. The second classroom of the professional quality courses.

Summary

The construction of the application-oriented mechanical and electrical specialty curriculum system focuses on highlighting the professional direction, takes cultivating students' innovative and entrepreneurial abilities as its innovation, and takes cultivating students' professional qualities as its foundation. At the same time, the curriculum system aims at cultivating application-oriented talents and builds the students' second classroom in combination with business requirements, student professional competitions, and local economic development. The system cultivates students in a targeted way with specific characteristics, which ensures that the training of talents meets the needs of the country, society, and enterprises. The curriculum system puts forward professionalism and strengthens the spirit of craftsmanship; it proposes a double-certificate system and strengthens professional skills; and it proposes the three creative courses to strengthen students' scientific and technological innovation [6].
Unitary matrix functions, wavelet algorithms, and structural properties of wavelets

Some connections between operator theory and wavelet analysis: Since the mid-eighties, it has become clear that key tools in wavelet analysis rely crucially on operator theory. While isolated variations of wavelets and wavelet constructions had previously been known, since Haar in 1910, it was the advent of multiresolutions and subband filtering techniques which provided the tools for our ability to now easily create efficient algorithms, ready for a rich variety of applications to practical tasks. Part of the underpinning for this development in wavelet analysis is operator theory. This will be presented in the lectures, and we will also point to a number of developments in operator theory which in turn derive from wavelet problems, but which are of independent interest in mathematics. Some of the material will build on chapters in a new wavelet book, co-authored by the speaker and Ola Bratteli; see http://www.math.uiowa.edu/~jorgen/ .

3. Connection between the discrete signals and the wavelets
3.1. Wavelet geometry in L²(ℝⁿ)
3.2. Intertwining operators between sequence spaces ℓ² and L²(ℝⁿ)
3.3. Infinite products of matrix functions
3.3.1. Implications for L²(ℝⁿ)
3.3.2. Wavelets in other Hilbert spaces of fractal measures
3.4. Dependence of the wavelet functions on the matrix functions which define the wavelet filters
3.4.1. Cycles
3.4.2. The Ruelle-Lawton wavelet transfer operator
3.5. Connections between matrix functions and signal processing
4. Other topics in wavelets theory
4.1. Invariants
4.1.1. Invariants for wavelets: Global theory
Appendix A: Topics for further research

Since the mid-1980's, wavelet mathematics has served to some extent as a clearing house for ideas from diverse areas of mathematics, of engineering, and of other areas of science, such as quantum theory and optics. This makes interdisciplinary communication difficult, as the lingo differs from field to field, even to the degree that the same term might have a different meaning to some wavelet practitioners from what it has to others. In recognition of this fact, Chapter 1 of the recent wavelet book [BrJo02b] samples a little dictionary of relevant terms. Parts of it are reproduced here:

Terminology

• multiresolution:
-real world: a set of band-pass-filtered component images, assembled into a mosaic of resolution bands, each resolution tied to a finer one and a coarser one.
-mathematics: used in wavelet analysis and fractal analysis, multiresolutions are systems of closed subspaces in a Hilbert space, such as L²(ℝ), with the subspaces nested, each subspace representing a resolution, and the relative complement subspaces representing the detail which is added in getting to the next finer resolution subspace.

• matrix function: a function from the circle, or the one-torus, taking values in a group of N-by-N complex matrices.

• wavelet: a function ψ, or a finite system of functions {ψ_i}, such that for some scale number N and a lattice of translation points on ℝ, say ℤ, a basis for L²(ℝ) can be built consisting of the functions N^{j/2} ψ_i(N^j x − k), j, k ∈ ℤ.

Then dulcet music swelled
Concordant with the life-strings of the soul;
It throbbed in sweet and languid beatings there,
Catching new life from transitory death;
Like the vague sighings of a wind at even
That wakes the wavelets of the slumbering sea. . .
-Shelley, Queen Mab

• subband filter:
-engineering: signals are viewed as functions of time and frequency, the frequency function resulting from a transform of the time function; the frequency variable is broken up into bands, and up-sampling and down-sampling are combined with a filtering of the frequencies in making the connection from one band to the next.
-wavelets: scaling is used in passing from one resolution V to the next; if a scale N is used from V to the next finer resolution, then scaling by 1/N takes V to a coarser resolution V₁ represented by a subspace of V, but there is a set of functions which serve as multipliers when relating V to V₁, and they are called subband filters.

• cascades:
-real world: a system of successive refinements which pass from a scale to a finer one, and so on; used for example in graphics algorithms: starting with control points, a refinement matrix and masking coefficients are used in a cascade algorithm yielding a cascade of masking points and a cascade approximation to a picture.
-wavelets: in one dimension the scaling is by a number, and a fixed simple function, for example the indicator function of the unit interval [0, 1], is chosen as the initial step for the cascades; when the masking coefficients are chosen, the cascade approximation leads to a scaling function.

• scaling function: a function, or a distribution, ϕ, defined on the real line ℝ which has the property that, for some integer N > 1, the coarser version ϕ(x/N) is in the closure (relative to some metric) of the linear span of the set of translated functions …, ϕ(x + 1), ϕ(x), ϕ(x − 1), ϕ(x − 2), ….

• logic gates:
-in computation the classical logic gates are realized as computers, for example as electronic switching circuits with two-level voltages, say high and low. Several gates have two input voltages and one output, each one allowing switching between high and low: the output of the AND gate is high if and only if both inputs are high; the XOR gate has high output if and only if one of the inputs, but not more than one, is high.

• qubits:
-in physics and in computation: qubits are the quantum analogue of the classical bits 0 and 1 which are the letters of classical computers. The qubits are formed of two-level quantum systems, electrons in a magnetic field or polarized photons, and they are represented in Dirac's formalism as |0⟩ and |1⟩; quantum theory allows superpositions, so states |ψ⟩ = a|0⟩ + b|1⟩, a, b ∈ ℂ, |a|² + |b|² = 1, are also admitted, and computation in the quantum realm allows a continuum of states, as opposed to just the two classical bits.
-mathematics: a chosen and distinguished basis for the two-dimensional Hilbert space ℂ² consisting of orthogonal unit vectors, denoted |0⟩, |1⟩.

• universality:
-classical computing: the property of a set of logic gates that they suffice for the implementation of every program; or of a single gate that, taken together with the NOT gate, it suffices for the implementation of every program.
-quantum computing: the property of a set S of basic quantum gates that every (invertible) gate can be written as a sequence of steps using only gates from S. Usually S may be chosen to consist of one-qubit gates and a distinguished tensor gate t. An example of a choice for t is CNOT. An alternative universal one is the Toffoli gate.
-mathematics: the property of a set S of basic unitary matrices that for every n and every u ∈ U_{2^n}(ℂ), there is a factorization u = s₁s₂⋯s_k, s_i ∈ S, with the understanding that the factors s_i are inserted in a chosen tensor configuration of the quantum register ℂ² ⊗ ⋯ ⊗ ℂ² (n times). Note that the factors s_i, the number k, and the configuration of the s_i's all depend on n and the gate u ∈ U_{2^n}(ℂ) to be studied. The quantum wavelet algorithm (2.2.6) is an example of such a matrix u.

• chaos: a small variation or disturbance in the initial states or input of some system giving rise to a disproportionate, or exponentially growing, deviation in the resulting output trajectory, or output data. The term is used more generally, denoting rather drastic forms of instability; and it is measured by the use of statistical devices, or averaging methods.
-in computation: Let X and Y be functions on a set E, both taking values in {0, 1}. Let Y be the initial state of the bit, and X the final state of the bit. If the process is governed by a probability distribution P, then the transition probabilities p(x, y) := P({X = x | Y = y}) are conditional probabilities: i.e., p(x, y) is the probability of a final bit value x given an initial value y, and we have Σ_x p(x, y) = 1 for each y.
-in wavelet theory: Let N ∈ ℤ₊, and let W be a positive function on T = {z ∈ ℂ : |z| = 1}, for example W = |m₀|², where m₀ is some low-pass wavelet filter with N bands. (Positivity is only in the sense W ≥ 0, nonnegative, and the function W may vanish on a subset of T.) Then define a function p on T × T as follows: p(z, w) = (1/N) W(w) if w^N = z, and p(z, w) = 0 for all other values of w. We arrive at the transfer operator R_W, i.e., the operator transforming functions on T as follows:

(R_W f)(z) = Σ_{w : w^N = z} p(z, w) f(w) = (1/N) Σ_{w^N = z} W(w) f(w).

• coherence:
-in mathematics and physics: The vectors ψ_i that make up a tight frame, one which is not an orthonormal basis, are said to be subjected to coherence. So coherent vector systems in Hilbert space are viewed as bases which generalize the more standard concept of orthonormal bases from harmonic analysis. A striking feature of the wavelets with compact support, which are based on scaling, is that the varieties of the two kinds of bases can be well understood geometrically. For example, the collapse of the wavelet orthogonality relations, degenerating into coherent vectors, happens on a subvariety of a lower dimension. More generally, coherent vectors in mathematical physics often arise with a continuous index, even if the Hilbert space is separable, i.e., has a countable orthonormal basis. This is illustrated by a vector system {ψ_{r,s}}, which should be thought of as a continuous analogue, i.e., a version where a sum gets replaced with an integral. For more details, see also Section 3.3 of [Dau92] and Chapter 3 of [Kai94]. In quantum mechanics, one talks, for example, about coherent states in connection with wavefunctions of the harmonic oscillator. Combinations of stationary wavefunctions from different energy eigenvalues vary periodically in time, and the question is which of the continuously varying wavefunctions one may use to expand an unknown function in without encountering overcompleteness of the basis. The methods of "coherent states" are methods for using these kinds of functions (which fit some problems elegantly) while avoiding the difficulties of overcompleteness. The term "coherent" applies when you succeed in avoiding those difficulties by some means or other.
Of course, for students who have just learned about the classic complete orthonormal basis of stationary eigenfunctions, "coherent state" methods at first may seem like a daring relaxation of the rules of orthogonality, so that the term seems to stand for total freedom!

1.1.1. Some background on Hilbert space

Wavelet theory is the art of finding a special kind of basis in Hilbert space. Let H be a Hilbert space over ℂ, with inner product ⟨· | ·⟩; for us, it is assumed linear in the second variable. Wavelets in L²(ℝ) are generated by simple operations on one or more functions ψ in L²(ℝ); the operations come in pairs, say scaling and translation, or phase-modulation and translation. If N ∈ {2, 3, …}, we set ψ_{j,k}(x) = N^{j/2} ψ(N^j x − k), j, k ∈ ℤ.

Summary of, and variations on, the resolution of the identity operator 1 in L² or in ℓ², for ψ and ψ̃, where ψ_{r,s}(x) = r^{−1/2} ψ((x − s)/r), C_ψ = ∫_ℝ (dω/|ω|) |ψ̂(ω)|² < ∞, similarly for ψ̃, and C_{ψ,ψ̃} = ∫_ℝ (dω/|ω|) ψ̂(ω) conj(ψ̃^(ω)) (table: overcomplete bases and dual bases, continuous and discrete resolutions), where S₀, …, S_{N−1} are adjoints to the quadrature mirror filter operators F_i, i.e., for a dual operator system S₀, …, S_{N−1}, S̃₀, …, S̃_{N−1}.

A function ψ satisfying the resolution identity is called a coherent vector in mathematical physics. The representation theory for the (ax + b)-group, i.e., the matrix group G = { ( a b ; 0 1 ) : a ∈ ℝ₊, b ∈ ℝ }, serves as its underpinning. The tables above illustrate how the {ψ_{j,k}} wavelet system arises from a discretization of the following unitary representation of G:

(U(a, b) f)(x) = a^{−1/2} f((x − b)/a),   (1.1.13)

acting on L²(ℝ). This unitary representation also explains the discretization step in passing from the continuous to the discrete resolution in the tables above. The functions { ψ_{j,k} : j, k ∈ ℤ } which make up a wavelet system result from the choice of a suitable coherent vector ψ ∈ L²(ℝ), and then setting

ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k).   (1.1.14)

Even though this representation lies at the historical origin of the subject of wavelets (see [DGM86]), the (ax + b)-group seems now to be largely forgotten in the next generation of the wavelet community. But Chapters 1-3 of [Dau92] still serve as a beautiful presentation of this (now much ignored) side of the subject. It also serves as a link to mathematical physics and to classical analysis. The unitary U defined from (1.1.13) by setting a = 2, b = 0, (U f)(x) := 2^{−1/2} f(x/2), leaves invariant the Hardy space. Comparison of formulas (1.1.13) and (1.1.14) shows that the traditional discrete wavelet transform may be viewed as the restriction to a subgroup H of a classical unitary representation of G. The unitary representations of G are completely understood: the set of irreducible unitary representations consists of two infinite-dimensional inequivalent subrepresentations of the representation (1.1.13) on L²(ℝ), together with the one-dimensional representations ( a b ; 0 1 ) ↦ a^{ik}, parameterized by k ∈ ℝ. (The two subrepresentations of (1.1.13) are obtained by restricting to f ∈ L²(ℝ) with supp f̂ ⊆ (−∞, 0] and supp f̂ ⊆ [0, ∞), respectively.) However, the subgroup H of G has a rich variety of inequivalent infinite-dimensional representations that do not arise as restrictions of (1.1.13), or of any representation of G.
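As a concrete illustration of the discretization (1.1.14), the following minimal sketch (our own example, taking the Haar function for the coherent vector ψ) builds the dyadic system ψ_{j,k} and checks a few orthonormality relations numerically.

```python
import numpy as np

def haar(x):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    return (np.where((0 <= x) & (x < 0.5), 1.0, 0.0)
            - np.where((0.5 <= x) & (x < 1.0), 1.0, 0.0))

def psi_jk(x, j, k, psi=haar):
    """Dyadic wavelet system psi_{j,k}(x) = 2^{j/2} psi(2^j x - k)."""
    return 2.0 ** (j / 2.0) * psi(2.0 ** j * x - k)

# Riemann-sum check of <psi_{j,k} | psi_{j',k'}> on a fine grid
x, dx = np.linspace(-4.0, 4.0, 2 ** 16, retstep=True)
for (j1, k1), (j2, k2) in [((0, 0), (0, 0)), ((0, 0), (1, 0)), ((0, 0), (0, 1))]:
    ip = np.sum(psi_jk(x, j1, k1) * psi_jk(x, j2, k2)) * dx
    print((j1, k1), (j2, k2), round(float(ip), 3))   # ~1.0, ~0.0, ~0.0
```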
The group H considered in (1.1.14) is a semidirect product (as is G). It is possible to use these nonclassical representations of H for the construction of unexpected classes of wavelets, the wavelet sets being the most notable ones. Recall that a subset E ⊂ ℝ of finite measure is a wavelet set if ψ given by ψ̂ = χ_E is such that, for some N ∈ ℤ₊, N ≥ 2, the functions N^{j/2} ψ(N^j x − k), j, k ∈ ℤ, form an orthonormal basis for L²(ℝ). There is a different transform which is analogous to the wavelet transform of (1.1.13)-(1.1.14), but yet different in a number of respects. It is the Gabor transform, and it has a history of its own. Both are special cases of the following construction: Let G be a nonabelian matrix group with center C, and let U be a unitary irreducible representation of G on the Hilbert space L²(ℝ). When ψ ∈ L²(ℝ) is given, we may define a transform

(T_ψ f)(ξ) := ⟨U(ξ)ψ | f⟩, for f ∈ L²(ℝ) and ξ ∈ G/C.   (1.1.18)

It turns out that there are classes of matrix groups, such as the ax + b group, or the 3-dimensional group of upper triangular matrices, which have transforms T_ψ admitting effective discretizations. This means that it is possible to find a vector ψ ∈ L²(ℝ), and a discrete subgroup Λ ⊂ G/C, such that the restriction to Λ of the transform T_ψ in (1.1.18) is injective from L²(ℝ) into functions on Λ. There are many books on transform theory, and here we are only making the connection to wavelet theory. The book [Per86] contains much more detail on the group-theoretic approach to these continuous and discrete coherent vector transforms.

Some background on matrix functions in mathematics and in engineering

One of our coordinates for the landscape of multiresolution wavelets takes the form of a geometric index. In fact, it involves a traditional operator-theoretic index with values in ℤ. When it is identified with a winding number or a counting of homotopy classes, it serves also as a Fredholm index of an associated Toeplitz operator. An orthogonal dyadic wavelet basis has its wavelet function ψ satisfying the normalization ||ψ||_{L²(ℝ)} = 1, i.e., ψ is a vector of norm one in the Hilbert space L²(ℝ). In the lingo of quantum theory, ψ is therefore a pure state, and the x-coordinate is an observable called the position. The integral E_ψ(x) = ∫_ℝ x |ψ(x)|² dx is the expected value of the position. If ψ_H denotes the standard Haar function in (1.2.15), then clearly E_{ψ_H}(x) = 1/2. Also note the translation formula E_{ψ(·−k)}(x) = E_ψ(x) + k. We showed in Corollary 2.4.11 of [BrJo02b], completely generally, that the other orthonormal wavelets ψ have expected values in the set 1/2 + ℤ. Hence, after ψ is translated by an integer, you cannot distinguish it from the Haar wavelet ψ_H in (1.2.15) by looking only at the expected value of its position coordinate. The translation integer k turns out to be a winding number. Our result holds more generally when the definition of E_ψ(x) is adapted to a wider wavelet context, as we showed in Chapter 6 of [BrJo02b]; but in all cases, there is a winding number which produces the above-mentioned integer translate k. The issue of connectedness for various classes of wavelets is a general question which has been addressed previously in the wavelet literature; see, e.g., [HLPS99], [HeWe96], [StZh01], and [ReWe98]. Here we bring homotopy to bear on the question, and we identify the connected components when the compact support is fixed and given.
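The winding number attached to such loops can be computed numerically by tracking the phase of the determinant around the circle. The sketch below uses a toy loop in U(2), not a wavelet filter; the sampling density is an arbitrary choice.

```python
import numpy as np

def winding_number(det_values):
    """Winding number of a closed loop in C \\ {0}, sampled once around
    the circle, obtained by unwrapping the phase of the determinant."""
    phase = np.unwrap(np.angle(det_values))
    return int(np.round((phase[-1] - phase[0]) / (2.0 * np.pi)))

theta = np.linspace(0.0, 2.0 * np.pi, 4097)
z = np.exp(1j * theta)

# toy loop U(z) = diag(z^3, conj(z)) in U(2): det U(z) = z^2, so K1(U) = 2
det_U = z ** 3 * np.conj(z)
print(winding_number(det_U))   # -> 2
```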
We show among other things that for a fixed K₁-class a homotopy may take place within a variety of wavelets which is specified by a slightly bigger support than the initially given one. An important point of our present discussion, beyond the mere fact of compact support, is the size of the support of the wavelets in question. Consider two wavelets A and B of a certain support size. Then our first results in this section also specify the paths C(t), if any, which connect A and B, and in particular the size of the support of the wavelets corresponding to C(t). In [BrJo02b], we treat connectivity in the wider context of non-compactly supported wavelets, following at the outset [Gar99], which considers scale number N = 2 and wavelets ψ such that

{ 2^{j/2} ψ(2^j x − k) : j, k ∈ ℤ } is an orthonormal basis (ONB) for L²(ℝ).   (1.1.19)

Garrigós considers, for 1/2 < α ≤ ∞, the class W_α of wavelets ψ such that

∫_ℝ |ψ(x)|² (1 + |x|²)^α dx < ∞,   (1.1.20)

and such that there is an ε = ε(ψ) for which the wavelet has some degree of smoothness in the sense of Sobolev. We now turn to the group of functions U: T → U(N), where U(N) denotes the group of all unitary complex N-by-N matrices. The functions will not be assumed continuous in general. The continuous functions will be designated C(T, U(N)). Each function in C(T, U(N)) has a K₁-class, also called a winding number; see [BrJo02b]. The functions in C(T, U(N)) with finite Fourier expansion will be called Fourier polynomials, also if they are functions which take values in U(N).

Proposition 1.1.3.1: Let U ∈ C(T, U(N)) be a Fourier polynomial, and assume that K₁(U) = d ∈ ℤ. Then U is homotopic in C(T, U(N)) to

V: z ↦ z^d p + (1_N − p),

where p is the one-dimensional projection onto the first coordinate slot in ℂ^N; and if U is a Fourier polynomial of degree at most D, then U may be homotopically deformed to V in C(T, U(N)) through Fourier polynomials of degree at most |d| + N D. This proposition remains true if the word "Fourier polynomial" is replaced by "polynomial" and a_k = 0 for k = −D, −D + 1, …, −1. In that case d ∈ ℤ₊ and U may be homotopically deformed to V in the loop semigroup of polynomial unitaries in C(T, U(N)) through polynomials of degree at most d. (1.1.24) (See § 2.2.4 for a related, but different, decomposition.) Now, deforming each of the p_i's continuously through one-dimensional projections to the projection p₀ onto the first coordinate direction, and deforming V₀ in U(N) into 1_N, we see that z^D U(z) can be deformed into

∏_{k=1}^{d+ND} ( z p₀ + (1_N − p₀) ).   (1.1.26)

But writing (1 − p₀) as a sum of N − 1 one-dimensional projections q₁, …, q_{N−1}, and next deforming each of the q_k in this decomposition into p₀, we see that U(z) is deformed into V. The crude estimate |d| + N D on the degree of the Fourier polynomials occurring during the deformation is straightforward. To prove the last statement in the proposition one does not need to multiply U by z^D, and the proof simplifies. Note in particular that D ≤ d (assuming a_D ≠ 0). If A ∈ C(T, GL(N)) is a polynomial of degree 1 in z, then A can be homotopically deformed through first-order polynomials in C(T, GL(N)) to a unitary of the form z ↦ zp + (1_N − p) for some projection p, and hence Proposition 1.1.3.1 for C(T, GL(N)) would follow if any polynomial A ∈ C(T, GL(N)) could be factored into first-order polynomials. It is also clear that any element A ∈ C(T, GL(N)) can be homotopically approximated through Fourier polynomials.
This follows by compactness and the Stone-Weierstraß theorem (Lemma 11.2.3 of [RLL00]). For our purposes in wavelet theory, though, we would need a computable upper bound for the degree of the Fourier polynomials. For ease of reference, we now list the correspondences between the various objects that interest us in this case: the matrix functions A, the wavelet filters m_i, and the scaling and wavelet functions ϕ, ψ_i. We did not specify the continuity and regularity requirements of the functions A, m_i, ϕ, ψ_i above. This will be done differently in different contexts, and the classes clearly depend on these added requirements. We will now restrict to the case that the functions ϕ and ψ_i have compact support in [0, ∞), i.e., that A and m_i are polynomials in z; thus z ↦ A(z) is a polynomial function. (1.1.32)

Scaling functions/wavelet generators to wavelet filters, (ϕ, ψ) → m: one defines the coefficients a_n from the refinement equation for ϕ, and then m₀ from them directly. Then the high-pass filters m_i, i = 1, …, N − 1, can be derived from (2.3.10) below. If we are in the generic case (2.3.6), we may also recover the Fourier coefficients a_n. (1.1.38)

The spaces MF(D), WF(D), and SF(D) may be equipped with the obvious topologies, coming in the first two cases from, for example, the L∞-norm over z, and in the last case either from the L²(ℝ)-norm or, as will be more relevant, the tempered-distribution topology. Now, let a subindex 0 denote the subsets of these various spaces on which the condition (1.1.39) holds. It is known that the set of points such that (1.1.39) does not hold is a lower-dimensional subvariety of the various varieties; see Section 6 of [Jor01b]. Hence MF₀(D), WF₀(D), and SF₀(D) contain the generic points in MF(D), WF(D), and SF(D). We now summarize the local connectivity results; the proof may be found in [BrJo02b], where this is Theorem 2.1.3.

Motivation

In addition to the general background material in the present section, the reader may find a more detailed treatment of some of the current research trends in wavelet analysis in a number of related papers. As a mathematical subject, the theory of wavelets draws on tools from mathematics itself, such as harmonic analysis and numerical analysis. But in addition there are exciting links to areas outside mathematics. The connections to electrical and computer engineering, and to image compression and signal processing in particular, are especially fascinating. These interconnections of research disciplines may be illustrated with the two subjects (1) wavelets and (2) subband filtering [from signal processing]. While they are quite different, have distinct and independent lives, and even have different aims and different histories, they have in recent years found common ground. It is a truly amazing success story. Advances in one area have helped the other: subband filters are absolutely essential in wavelet algorithms, and in numerical recipes used in subdivision schemes, for example, and especially in JPEG 2000, an important and extraordinarily successful image-compression code. JPEG uses nonlinear approximations and harmonic analysis in spaces of signals of bounded variation. Similarly, new wavelet approximation techniques have given rise to the kind of data compression which is now used by the FBI [via a patent held by two mathematicians] in digitizing fingerprints in the U.S. It is the happy marriage of the two disciplines, signal processing and wavelets, that enriches the union of the subjects, and the applications, to an extraordinary degree.
While the use of high-pass and low-pass filters has a long history in signal processing, dating back more than fifty years, it is only relatively recently, say the mid-1980's, that the connections to wavelets have been made. Multiresolutions from optics are the bread and butter of wavelet algorithms, and they in turn thrive on methods from signal processing, in the quadrature mirror filter construction, for example. The effectiveness of multiresolutions in data compression is related to the fact that multiresolutions are modelled on the familiar positional number system: the digital, or dyadic, representation of numbers. Wavelets are created from scales of closed subspaces of the Hilbert space L²(ℝ), with a scale of subspaces corresponding to the progression of bits in a number representation. While oversimplified here, this is the key to the use of wavelet algorithms in digital representation of signals and images. The digits in the classical number representation are in fact quite analogous to the frequency subbands that are used both in signal processing and in wavelets. The two Haar functions ϕ and ψ capture at a glance the refinement identities

ϕ(x) = ϕ(2x) + ϕ(2x − 1),  ψ(x) = ϕ(2x) − ϕ(2x − 1).

The two functions are clearly orthogonal in the inner product of L²(ℝ), and the two closed subspaces V₀ and W₀ generated by the respective integral translates satisfy the invariance conditions (1.2.3), where U is the dyadic scaling operator U f(x) = 2^{−1/2} f(x/2). The factor 2^{−1/2} is put in to make U a unitary operator in the Hilbert space L²(ℝ). This version of Haar's system naturally invites the question of what other pairs of functions ϕ and ψ, with corresponding orthogonal subspaces V₀ and W₀, there are such that the same invariance conditions (1.2.3) hold. The invariance conditions hold if there are coefficients a_k and b_k such that the scaling identity

ϕ(x) = Σ_k a_k ϕ(2x − k)   (1.2.4)

is solved by the father function, called ϕ, and the mother function ψ is given by

ψ(x) = Σ_k b_k ϕ(2x − k).   (1.2.5)

A fundamental question is the converse one: give simple conditions on two sequences (a_k) and (b_k) which guarantee the existence of L²(ℝ)-solutions ϕ and ψ which satisfy the orthogonality relations for the translates (1.2.2). How do we then get an orthogonal basis from this? The identities for Haar's functions ϕ and ψ are the simplest case. There are nontrivial solutions to (1.2.6) and (1.2.7), to be sure, but they are versions of the Cantor Devil's Staircase functions, which are prototypes of functions which are not locally integrable. Since the Haar example is based on the fitting of copies of a fixed "box" inside an expanded one, it would almost seem unlikely that the system (1.2.4)-(1.2.5) admits finite sequences (a_k) and (b_k) such that the corresponding solutions ϕ and ψ are continuous or differentiable functions of compact support. The discovery in the mid-1980's of compactly supported differentiable wavelets was paralleled by applications in seismology, acoustics [EsGa77], and optics [Mar82], as discussed in [Mey93], and once the solutions were found, other applications followed at a rapid pace: see, for example, the ten books in Benedetto's review [Ben00]. It is the solution ψ in (1.2.5) that the fuss is about, the mother function; the other one, ϕ, the father function, is only there before the birth of the wavelet. The most famous of them are named after Daubechies, and look like the graphs in Figure 1. With the multiresolution idea, we arrive at the closed subspaces (1.2.9), and it is then not difficult to establish the combined orthogonality relations, plus the fact that the functions in (1.2.9) form an orthogonal basis for L²(ℝ).
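The scaling identity (1.2.4) can be solved numerically by the cascade algorithm mentioned earlier: iterate an upsample-and-filter step starting from a single box sample. The sketch below uses the Daubechies 4-tap masking coefficients; the normalization (coefficients summing to 2) matches the form of (1.2.4), and the iteration count is an arbitrary choice.

```python
import numpy as np

# Daubechies 4-tap masking coefficients, with sum(a) = 2 as required by
# the scaling identity phi(x) = sum_k a_k phi(2x - k)
s3 = np.sqrt(3.0)
a = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0

def cascade(a, n_iter=10):
    """Cascade approximation to the scaling function phi: iterate
    u <- a * (u upsampled by 2); u[j] approximates phi(j / 2^n_iter)."""
    u = np.ones(1)
    for _ in range(n_iter):
        up = np.zeros(2 * u.size)
        up[::2] = u                 # dyadic upsampling
        u = np.convolve(a, up)      # filter with the masking coefficients
    return u

phi = cascade(a)                       # samples of the scaling function
x = np.arange(phi.size) / 2.0 ** 10    # dyadic grid; support is ~ [0, 3]
```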
This provides a painless representation of L²(ℝ)-functions,

f = Σ_{j,k} c_{j,k} ψ_{j,k},   (1.2.11)

where the coefficients c_{j,k} are

c_{j,k} = ∫_ℝ ψ_{j,k}(x) f(x) dx.   (1.2.12)

What is more significant is that the resolution structure of closed subspaces of L²(ℝ) facilitates powerful algorithms for the representation of the numbers c_{j,k} in (1.2.12). Amazingly, the two sets of numbers (a_k) and (b_k) which were used in (1.2.4)-(1.2.5), and which produced the magic basis (1.2.9), the wavelets, are the same magic numbers which encode the quadrature mirror filters of signal processing of communications engineering.* On the face of it, those signals from communication engineering really seem to be quite unrelated to the issues from wavelets: the signals are just sequences, time is discrete, while wavelets concern L²(ℝ) and problems in mathematical analysis that are highly non-discrete. Dual filters, or more generally, subband filters, were invented in engineering well before the wavelet craze in mathematics of recent decades. These dual filters in engineering have long been used in technology, even more generally than merely for the context of quadrature mirror filters (QMF's), and it turns out that other popular dual wavelet bases for L²(ℝ) can be constructed from the more general filter systems; but the best of the wavelet bases are the ones that yield the strongest form of orthogonality, which is (1.2.10), and they are the ones that come from the QMF's. The QMF's in turn are the ones that yield perfect reconstruction of signals that are passed through filters of the analysis-synthesis algorithms of signal processing. They are also the algorithms whose iteration corresponds to the resolution systems (1.2.13) from wavelet theory. While Fourier invented his transform for the purpose of solving the heat equation, i.e., the partial differential equation for heat conduction, the wavelet transform (1.2.11)-(1.2.12) does not diagonalize the differential operators in the same way. Its effectiveness is more at the level of computation; it turns integral operators into sparse matrices, i.e., matrices which have "many" zeros in the off-diagonal entry slots. Again, the resolution (1.2.13) is key to how this matrix encoding is done in practice.

* See an implementation of the "cascade" algorithm using Mathematica, and a "cartoon" of wavelets computed with it, at http://www.math.uiowa.edu/~jorgen/wavelet motions.pdf .

Some points of history

The first wavelet was discovered by Alfred Haar long ago, but its use was limited since it was based on step-functions, and the step-functions jump from one step to the next. The implementation of Haar's wavelet in the approximation problem for continuous functions was therefore rather bad, and for differentiable functions it is atrocious, and so Haar's method was forgotten for many years. And yet it had in it the one idea which proved so powerful in the recent rebirth (since the 1980's) of wavelet analysis: the idea of a multiresolution. You see it in its simplest form by noticing that a box function B of (1.2.14) may be scaled down by a half such that two of the scaled copies fit precisely inside the original box. This leads not only to the basis problem in (1.2.16), but also to a construction of the single functions ψ which solve the problem in (1.2.16), and which can be chosen differentiable, and yet with support contained in a fixed finite interval. These two features, the algorithm and the finite support (called compact support), are crucial for computations: computers do algorithms, but they do not do infinite intervals well.
Computers do summations and algebra well, but they do not do integrals and differential equations, unless the calculus problems are discretized and turned into algorithms. In the discussion to follow, the multiresolution analysis viewpoint is dominant, which increases the role of algorithms; for example, the so-called pyramid algorithm for analyzing signals, or shapes, using wavelets is an outgrowth of multiresolutions. Returning to (1.2.14) and (1.2.15), we see that the scaling function ϕ itself may be expanded in the wavelet basis which is defined from ψ, and we arrive at an infinite series; more generally, the terms may be written down explicitly for n ∈ ℕ and 2^{n−1} < x < 2^n. So the function ϕ is itself in the space V₀ ⊂ L²(ℝ), and ϕ represents the initial resolution. Using the sketch we see, for example, that a simple step function has a wavelet decomposition into a sum of a coarser resolution and an intermediate detail, as in (1.2.21). Thus the details are measured as differences. This is a general feature that is valid for other functions and other wavelet resolutions. See, for instance, § 2.2 below.

Some early applications

While the Haar wavelet is built from flat pieces, and the orthogonality properties amount to a visual tiling of the graphs of the two functions ϕ and ψ, this is not so for the Daubechies wavelets. On the Fourier side, differentiation becomes multiplication, (dψ/dx)^(t) = it ψ̂(t); for Gabor wavelets, i.e., wavelets built using the two operations translation and frequency modulation over a lattice, this amounts to poor differentiability properties of well-localized examples. But with the multiresolution viewpoint, we can understand the first of Daubechies's scaling functions as a one-sided differentiable solution ϕ to the scaling identity

ϕ(x) = a₀ϕ(2x) + a₁ϕ(2x − 1) + a₂ϕ(2x − 2) + a₃ϕ(2x − 3),   (1.2.22)

where the four real coefficients satisfy the orthogonality and normalization conditions (1.2.23). The system (1.2.23) is easily solved:

a₀ = (1 + √3)/4, a₁ = (3 + √3)/4, a₂ = (3 − √3)/4, a₃ = (1 − √3)/4.

The first applications served as motivating ideas as well: optics, seismic measurements, dynamics, turbulence, data compression; see the book [KaLe95]. Actually, it is two books: the first one (primarily by Kahane) is classical Fourier analysis, and the second one (primarily by P.-G. Lemarié-Rieusset) is the wavelet book. It will help you, among other things, to get a better feel for the French connection, the Belgian connection, and the diverse and early impulses from applications in the subject. Enjoy! For a list of more recent applications we recommend [Mey00].

Signal processing

If we idealize and view time as discrete, a copy of ℤ, then a signal is a sequence (ξ_n)_{n∈ℤ} of numbers. A filter is an operator which calculates weighted averages, (Fξ)_n = Σ_k a_{n−k} ξ_k; the companion operations are down-sampling, (N↓ ξ)_n = ξ_{Nn}, and up-sampling, (N↑ ξ)_n = ξ_{n/N} when N divides n, and 0 otherwise. Since the operators N↓ and N↑ are clearly dual to one another on the Hilbert space ℓ²(ℤ) of sequences (i.e., time-signals), we get the corresponding duality for L²(T), where µ denotes the normalized Haar measure on T, or equivalently an identity (2.5) for 2π-periodic functions. Quadrature mirror filters with N frequency subbands m₀, m₁, …, m_{N−1} give perfect reconstruction when signals are analyzed into subbands and then reconstructed via the up-sampling and corresponding dual filters. In engineering formalism this is expressed in the diagram in Fig. 2, for N = 2, and m₀, resp. m₁, are called low-pass, resp. high-pass, filters. In operator language, this takes the form

F₀* F₀ + F₁* F₁ = I,

where F₀ and F₁ are the operators in Fig. 2, with dual operators F₀* and F₁*. The quadrature conditions may be expressed as identities on the filter functions m₀ and m₁. In operator theory there is a tradition of working instead with the operators S_j := F_j*.
When viewed as operators on L²(T) they are therefore isometries with orthogonal ranges, and they satisfy

S_j* S_k = δ_{j,k} I  and  Σ_j S_j S_j* = I,

with I now representing the identity operator acting on L²(T), and with both of the indices j, k ranging over 0, 1, …, N − 1. The relations on the S_j-operators are known as the Cuntz relations because of their use in C*-algebra theory; see [Cun77].

Filters in communications engineering

The coefficients of the functions m_j(·) are called impulse response coefficients in communications engineering, and when used in wavelets and in subdivision algorithms, they are called masking coefficients. In the finite case, the m_j(·)'s are also called FIR, for finite impulse response. The model illustrated in Fig. 2 is used in filter design, in either hardware or software:
(1) try filters m₀, m₁ in Fig. 2, and approximate the output to the input;
(2) choose a specific structure in which the filter will be realized, and then quantize the coefficients, length and numerical values;
(3) verify by simulation that the resulting design meets given performance specifications.
Once filters are constructed, they also provide us with wavelet algorithms. When the steps of Fig. 2 are iterated, we arrive at wavelet subdivision algorithms. Relative to a given resolution (pictured as a closed subspace V₁, say, in L²(ℝ)), signals, i.e., functions in L²(ℝ), decompose into coarser ones and intermediate details. Relative to the subspaces W₀ and V₁, this amounts to

V₁ = V₀ ⊕ W₀.   (2.1.1)

Ideally, we wish the decomposition in (2.1.1) to be orthogonal in the sense that

⟨f | g⟩ = 0 for all f ∈ V₀ and all g ∈ W₀.   (2.1.2)

Since the subdivisions involve translations by discrete steps, we specialize the resolution such that both of the spaces V₀ and W₀ are invariant under translations by points in ℤ, i.e., such that f ↦ f(· − n), n ∈ ℤ, leaves both of the subspaces V₀ and W₀ invariant. The multiresolution analysis (MRA) case corresponds to the setup where V₀ is singly generated, i.e., there is a function ϕ ∈ V₀ such that the closed linear span of { ϕ(· − n) : n ∈ ℤ } is all of V₀. If N = 2, then there is also a ψ ∈ W₀ such that the closed linear span of { ψ(· − n) : n ∈ ℤ } is all of W₀. If N > 2, we may need functions ψ₁, …, ψ_{N−1} in W₀ such that { ψ_i(· − n) : i = 1, …, N − 1, n ∈ ℤ } has closed span equal to W₀.

Algorithms for signals and for wavelets

The pyramid algorithm and the Cuntz relations. Since the two Hilbert spaces L²(T) and ℓ²(ℤ) are isomorphic via the Fourier series representation, the system {S_i} may equally well be realized on sequence space. For the present, let {m_i}_{i=0}^{1} be the low-pass and high-pass wavelet filters, and let ϕ, ψ be the corresponding scaling function, resp. wavelet function, also called father function, resp. mother function. Now introduce the corresponding operators S_i and their sequence-space cousins Ŝ_i. The adjoints Ŝ_i* are also called filters; the decomposition (2.2.1) then holds for all c ∈ ℓ²(ℤ). Let W be the map sending a sequence c to Σ_n c_n ϕ(· − n); then W maps ℓ² isometrically onto V₀ in the orthogonal case. Embedding ℓ² into ℓ² ⊕ ℓ² as ℓ² ⊕ 0, extend W to ℓ² ⊕ ℓ² by putting

W(c ⊕ d) = Σ_n c_n ϕ(· − n) + Σ_n d_n ψ(· − n).

Then the extended W maps ℓ² ⊕ ℓ² isometrically onto U⁻¹V₀, and the corresponding intertwining identity holds for all c, d ∈ ℓ², where the left W is the one from (2.2.2) and the right one is the extension of W to ℓ² ⊕ ℓ². At this point you can use 1_{ℓ²} = Ŝ₀Ŝ₀* + Ŝ₁Ŝ₁* to show (2.2.1). Note that if c₀ = a and c₁ = b and c_i = 0 for other i, the formula (2.2.1) reduces to (1.2.21).
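One analysis/synthesis step of the pyramid algorithm, for N = 2 with the Haar quadrature mirror pair on finite periodic sequences, can be realized as below; the final check is the finite-dimensional version of Ŝ₀Ŝ₀* + Ŝ₁Ŝ₁* = 1. The helper names are ours.

```python
import numpy as np

# Haar quadrature mirror pair
h = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass  m_0
g = np.array([1.0, -1.0]) / np.sqrt(2.0)   # high-pass m_1

def analysis(c, f):
    """S*_f: correlate with the taps, then down-sample by 2 (periodic)."""
    M = len(c)
    return np.array([sum(f[k] * c[(2 * n + k) % M] for k in range(len(f)))
                     for n in range(M // 2)])

def synthesis(d, f, M):
    """S_f: up-sample by 2, then filter; the adjoint of the analysis map."""
    out = np.zeros(M)
    for n, v in enumerate(d):
        for k in range(len(f)):
            out[(2 * n + k) % M] += f[k] * v
    return out

c = np.arange(8.0)                       # a signal in the fine resolution
low, high = analysis(c, h), analysis(c, g)
recon = synthesis(low, h, 8) + synthesis(high, g, 8)
assert np.allclose(recon, c)             # perfect reconstruction
```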
The subdivision relations (2.2.1) are equivalent to a system in the coefficients a_n, b_n of the quantum wavelet algorithm, i.e., the coefficients in the "large" unitary matrix (2.2.5). Thus the quantum algorithm does the wavelet decomposition within a fixed resolution subspace; the scaling function ϕ defines the resolution subspace V₀ ⊂ L²(ℝ). Let m₀, m₁ be a dyadic wavelet filter, and let T ∋ z ↦ A(z) ∈ U₂(ℂ) be the corresponding matrix function,

A_{i,j}(z) = (1/2) Σ_{w² = z} w^{−j} m_i(w).

If the low-pass filter is m₀(z) = a₀ + a₁z + ⋯ + a_{2n+1}z^{2n+1}, then a choice for the high-pass filter is m₁(z) = Σ_{k=0}^{2n+1} (−1)^k ā_{2n+1−k} z^k, and a 2^{n+2} × 2^{n+2} scalar matrix built from these coefficients can be checked to be unitary. Except for the scalar entries in the two extreme left and right columns, all the other entries of the big combined matrix U_A are taken from the cyclic arrangements of the 2 × 2 matrices of coefficients A₀, A₁, …, A_n in the expansion of A(z). For the case of n = 1 this amounts to a simple 8 × 8 wavelet matrix, which is the one that produces the sequence of quantum gates. The quantum algorithm of a wavelet filter is thus represented by a 2^{n+2} × 2^{n+2} unitary matrix U_A acting on the quantum qubit register ℂ² ⊗ ⋯ ⊗ ℂ² (n + 2 times), i.e., it acts on a configuration of n + 2 qubits. The realization of a wavelet algorithm in the quantum realm thus amounts to spelling out the steps in factoring U_A into a product of qubit gates. By Shor's theorem, we know that this can be done, and U_A may be built out of one-qubit gates and CNOT gates following the ideas sketched above. The reader may find more discussion of the matrix U_A in Section 3 of [Fre02]. The generalization of classical and quantum wavelet resolution algorithms from N = 2 to N > 2 is immediate: the constructions above, and [FiWi99] for the quantum computing algorithm, naturally generalize to the case N > 2 via (2.2.8). Instead of k-registers ℂ² ⊗ ⋯ ⊗ ℂ² (k times) = ℂ^{2^k} over ℂ², we will now have to work with ℂ^N ⊗ ⋯ ⊗ ℂ^N (k times). The use of the algorithmic relations in engineering and operator algebra theory predates their more recent use in wavelet theory and wavepacket analysis.
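The cyclic arrangement of filter coefficients can be illustrated with the standard 8 × 8 block-circulant wavelet matrix built from the Daubechies 4-tap pair. This sketch shows the same idea as U_A rather than the exact matrix of the text, whose two boundary columns are modified; unitarity of the finite model is checked at the end.

```python
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))  # low-pass
g = np.array([h[3], -h[2], h[1], -h[0]])                               # high-pass

M = 8                                    # 2^{n+2} with n = 1
W = np.zeros((M, M))
row_h = np.zeros(M); row_h[:4] = h
row_g = np.zeros(M); row_g[:4] = g
for b in range(M // 2):
    W[2 * b] = np.roll(row_h, 2 * b)      # low-pass row, cyclically shifted by 2
    W[2 * b + 1] = np.roll(row_g, 2 * b)  # high-pass row, cyclically shifted by 2

assert np.allclose(W @ W.T, np.eye(M))    # W is unitary (real orthogonal)
```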
Pyramid algorithms

For N > 2, the algorithm of the previous section takes the following form. The pyramid algorithm and the Cuntz relations revisited: by the Fourier equivalence of L²(T) and ℓ²(ℤ) via Fourier series, the system {S_i} may again be realized on sequences. Let {m_i}_{i=0}^{N−1} be low-pass and high-pass wavelet filters, and let ϕ, ψ₁, …, ψ_{N−1} be the corresponding scaling function, resp. wavelet functions. Now introduce the corresponding operators S_i and their sequence-space cousins Ŝ_i; the adjoints Ŝ_i* are again called filters, and the scaling function ϕ defines a resolution subspace V₀ ⊂ L²(ℝ). For the case N > 2, discrete vs. continuous wavelets, i.e., ℓ² vs. L²(ℝ): more refined pyramid algorithms yield wavelet packets, as follows. The Haar wavelet is supported in [0, 1], and if j ∈ ℤ₊ and k ∈ ℤ, then the modified function x ↦ ψ(2^j x − k) is supported in the smaller interval k/2^j ≤ x ≤ (k + 1)/2^j. When j is fixed, these intervals are contained in [0, 1] for k ∈ {0, 1, …, 2^j − 1}. This is not the case for the other wavelet functions. For one thing, the non-Haar wavelets ψ have support intervals of length more than one, and this forces periodicity considerations; see [CDV93]. For this reason, Coifman and Wickerhauser [CoWi93] invented the concept of wavelet packets. They are built from functions with prescribed smoothness, and yet they have localization properties that rival those of the (discontinuous) Haar wavelet. There are powerful but nontrivial theorems on restriction algorithms for wavelets ψ_{j,k}(x) = 2^{j/2}ψ(2^j x − k) from L²(ℝ) to L²(0, 1). We refer the reader to [CDV93] and [MiXu94] for the details of this construction. The underlying idea of Alfred Haar has found a recent renaissance in the work of Wickerhauser [Wic93] on wavelet packets. The idea there, which is also motivated by the Walsh function algorithm, is to replace the refinement equation (1.1.33) by a related recursive system as follows. Let a_k and b_k = (−1)^k ā_{1−k}, k ∈ ℤ, be a given low-pass/high-pass system, N = 2, and consider the following refinement system on ℝ:

W_{2n}(x) = Σ_k a_k W_n(2x − k),  W_{2n+1}(x) = Σ_k b_k W_n(2x − k),  n = 0, 1, 2, ….

Clearly the function W₀ can be identified with the traditional scaling function ϕ of (2.3.7). A theorem of Coifman and Wickerhauser (Theorem 8.1, [CoWi93]) states that if P is a partition of {0, 1, 2, …} into subsets of the form I_{k,n} = {2^k n, 2^k n + 1, …, 2^k(n + 1) − 1}, then the function system

{ 2^{k/2} W_n(2^k x − l) : I_{k,n} ∈ P, l ∈ ℤ }

is an orthonormal basis for L²(ℝ). Although it is not spelled out in [CoWi93], this construction of bases in L²(ℝ) divides itself into two cases: the true orthonormal basis (ONB), and the weaker property of forming a function system which is only a tight frame. As in the wavelet case, to get the P-system to really be an ONB for L²(ℝ), we must assume the transfer operator R_{|m₀|²} to have Perron-Frobenius spectrum on C(T). This means that the intersection of the point spectrum of R_{|m₀|²} with T is the singleton λ = 1, and that dim ker((1 − R_{|m₀|²})|_{C(T)}) = 1.

Subdivision algorithms

The algorithms for wavelets and wavelet packets involve the pyramid idea as well as subdivision. Each subdivision produces a multiplication of subdivision points. If the scaling is by N, then j subdivisions multiply the number of subdivision points by N^j. If the scaling is by a d × d integral matrix N, then the multiplicative factor is |det N|^j in the number of subdivision points placed in ℝ^d. In the discussion below, we restrict attention to d = 1, but the conclusions hold with only minor modification in the general case of d > 1 and matrix scaling. If W is a continuous function on T, the transfer operator, or kneading operator, R_W, with its alias in the Fourier-transformed space, has an adjoint which is the subdivision operator, or chopping operator, on functions ξ on T, with its alias on sequences. We will analyze the duality between R_W and R_W* and their spectra. Specializing to W = |m₀|², we note that R_W is then the transfer operator of orthogonal-type wavelets. In the following, W is assumed only to satisfy W ∈ Lip₁(T) and W ≥ 0. Other conditions are discussed in [BrJo02b]. In the engineering terminology of § 2.2, the operation (2.2.13) is composed of a local filter with the numbers c_k as coefficients, followed by the down-sampling N↓, while (2.2.15) is composed of up-sampling N↑, followed by an application of a dual filter. In signal processing, N↓ is referred to as "decimation", even if N is not 10. The operator S (= R_W*) is called the subdivision operator, or the woodcutter operator, because of its use in computer graphics. Iterations of S will generate a shape which (in the case of one real dimension) takes the form of the graph of a function f on ℝ.
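Such iteration is easy to prototype. The sketch below applies (Sξ)_k = Σ_n a_{k−2n} ξ_n with periodic wraparound, using Chaikin's corner-cutting mask, a classic choice from computer graphics rather than a filter taken from the text; the iterates converge to a quadratic B-spline limit curve.

```python
import numpy as np

def subdivide(xi, mask):
    """One application of the subdivision operator S for scale N = 2:
    (S xi)_k = sum_n mask[k - 2n] * xi[n], with periodic wraparound."""
    out = np.zeros(2 * xi.size)
    for n, v in enumerate(xi):
        for j, m in enumerate(mask):
            out[(2 * n + j) % out.size] += m * v
    return out

# Chaikin's corner-cutting mask; iterating S on a control polygon
# converges to a C^1 quadratic B-spline curve
mask = np.array([0.25, 0.75, 0.75, 0.25])
xi = np.array([0.0, 1.0, 0.0, -1.0])     # control points
for _ in range(5):
    xi = subdivide(xi, mask)             # 4 -> 8 -> ... -> 128 points
```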
Wavelet packet algorithms

The main difference between the algorithms of wavelets and those of wavelet packets is that for the wavelets the path in the pyramid is to one side only: a given resolution is split into a coarser one and the intermediate detail. The intermediate detail may further be broken down into frequency bands. With the operators (S_j f)(z) = m_j(z) f(z^N) acting on L^2(T), the coarser subspace after j steps is modelled on S_0^j L^2(T), and the projection onto this subspace is S_0^j S_0^{*j}, where S_0 is the isometry of L^2(T) ≅ V_0 defined by the low-pass filter m_0. But in the construction of the wavelet packet, the subspace resulting from running the algorithm j times is S_{i_1} S_{i_2} · · · S_{i_j} L^2(T), and the projection onto this subspace is S_{i_1} · · · S_{i_j} S_{i_j}^* · · · S_{i_1}^*. If n ∈ Z_+, the wavelet packet function W_n is computed from the iteration i_1, …, i_j corresponding to the representation n = i_1 + i_2 N + · · · + i_j N^{j-1}, where i_1, …, i_j ∈ {0, 1, …, N - 1} are unique from the Euclidean algorithm.

Lifting algorithms: Sweldens and more

The discussion centers around the matrix functions A : T → GL_2(C). The case det A ≡ 1: recall that we call a finite sum Σ_{k=-n_0}^{n_1} A_k z^k, n_0, n_1 ≥ 0, a Fourier polynomial, both if the coefficients A_k are numbers and if they are matrices. The matrix-valued Fourier polynomials T ∋ z ↦ A(z) ∈ M_2(C) such that det A(z) ≡ 1 form a subgroup of C(T, GL_2(C)) which we denote SL_2. For every A(z) in SL_2 there are m ∈ Z_+, K ∈ C∖{0}, and scalar-valued Fourier polynomials u_1(z), …, u_m(z), l_1(z), …, l_m(z) such that A(z) can be written as an alternating product of the elementary upper and lower triangular matrices with off-diagonal entries u_i(z) and l_i(z), together with the constant diagonal factor diag(K, 1/K). This is the first step in the Daubechies-Sweldens lifting algorithm for the discrete wavelet transform. Thus the case det(A(z)) ≡ 1 gives a constructive lifting algorithm for wavelets; no such algorithm has been established in the general C(T, GL_2(C)) case. The decomposition could also be compared with Proposition 3.3 of [BrJo02a], which was mentioned in connection with the proof of (1.1.24). Recall the correspondence between matrix functions and wavelet filters: if A : T → GL_2(C) is a matrix function, then the corresponding dyadic wavelet filters are m_i(z) = Σ_{j=0}^{1} z^j A_{i,j}(z^2), i = 0, 1, and two matrix functions A and B related in this way satisfy the corresponding product relation.

Remark. The conclusion is that the wavelet algorithm for a general wavelet filter corresponding to a matrix function, say A, may be broken down into a sequence of zig-zag steps acting alternately on the high-pass and the low-pass signal components.

Factorization theorems for matrix functions

We mentioned that for matrix functions corresponding to finite impulse response (FIR) filters which are unitary, we need only the constant matrix (chosen so as to achieve the high-pass and low-pass conditions) and factors of the form z ↦ I - P + zP, where P is a rank-one projection in C^N and N is the scaling number of the subdivision. Unfortunately, no such factorization theorem is available for the nonunitary FIR filters. But the matrix functions take values in the non-singular complex N × N matrices. The Sweldens-Daubechies factorization and the lifting algorithm serve as a substitute. There remain the general nonunimodular FIR matrix functions, for which factorizations are so far a bit of a mystery. The matrix functions are called polyphase matrices in the engineering literature.
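A concrete dyadic instance of the zig-zag just described is the lifting form of the Haar transform. The sketch below (my own normalization, operating on the two polyphase components) shows one predict step and one update step, i.e., one lower and one upper triangular factor.

```python
import numpy as np

def haar_lift_forward(x):
    """Haar transform via lifting: split into even/odd polyphase
    components, then alternate triangular 'lifting' steps."""
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    d = d - s          # predict (lower triangular factor)
    s = s + d / 2      # update  (upper triangular factor)
    return s, d

def haar_lift_inverse(s, d):
    s = s - d / 2      # undo update
    d = d + s          # undo predict
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([4.0, 2.0, 5.0, 7.0])
s, d = haar_lift_forward(x)
assert np.allclose(haar_lift_inverse(s, d), x)   # perfect reconstruction
```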
The following summary serves as a classification theorem for the orthogonal wavelets of compact support: the wavelets correspond to FIR polyphase matrices which are unitary. In summary, an algorithm to construct all the wavelet functions ψ of scale 2 with support in [0, 2k + 1] can be established as follows: choose projections Q_1, …, Q_k and define the unitary-valued matrix function A(z) on T as the corresponding product of factors I - Q_j + zQ_j. Then each Q_j has the form of a rank-one orthogonal projection in C^2. The wavelet function ψ is then defined from the filters m_0, m_1 read off from A(z) via m_i(z) = Σ_j z^j A_{i,j}(z^2), with the operators (S_i f)(z) = m_i(z) f(z^2) for z ∈ T, f ∈ L^2(T). Instead of the usual Cuntz relations, the S_i, S̃_i now satisfy a biorthogonal system of relations. If A, Ã ∈ C(T, GL_N(C)) are the matrix-valued functions associated to m_i and m̃_i, then S*_i S_j = (AA*)_{j,i}, in the sense that S*_i S_j is contained in the commutative algebra of multiplication operators on L^2(T) defined by C(T), and (AA*)_{j,i} ∈ C(T). Correspondingly, S̃*_i S̃_j = (ÃÃ*)_{j,i} (2.3.18), so all the operators S*_i S_j, S̃*_i S̃_j are contained in the abelian algebra C(T). We may introduce operators S, S̃ from these systems. Let us now connect the filters to the wavelets. We have already defined the scaling functions ϕ, ϕ̃ and wavelet functions ψ_i, ψ̃_i, i = 1, …, N - 1. The expansions for ϕ and ϕ̃ converge uniformly on compacts; thus ϕ and ϕ̃ are continuous functions on R. To decide that these functions are in L^2(R), one again forms f_ϕ, and f_ϕ̃ similarly, and one deduces the membership again from the nonlinear intertwining relation. In the standard case of the good old orthogonal wavelets in L^2(R) of N subbands, we will look for functions ψ_1, …, ψ_{N-1} in L^2(R) such that, if k and n run independently over all the integers Z, i.e., -∞ < k, n < ∞, then the countably infinite system of functions

ψ_{i,k,n}(x) := N^{k/2} ψ_i(N^k x - n)   (2.3.32)

is an orthonormal basis in the Hilbert space L^2(R). The second half of the word "orthonormal" refers to the restricting requirement that all the functions ψ_1, …, ψ_{N-1} satisfy ‖ψ_i‖_{L^2(R)} = 1, or stated more briefly, ‖ψ_i‖ = 1. The functions (2.3.32) are said to be orthogonal if ⟨ψ_{i,k,n} | ψ_{i',k',n'}⟩ = 0 whenever (i, k, n) ≠ (i', k', n'). We say that the two triple indices are different if i ≠ i' or k ≠ k' or n ≠ n'. If, for example, i = i' and k = k', then when the same function is translated by different amounts n and n', the two resulting functions are required to be orthogonal. It is an elementary geometric fact from the theory of Hilbert space that if the functions in (2.3.32) form an orthonormal basis, then for every function f ∈ L^2(R), i.e., every measurable function f on R such that ∫_R |f(x)|^2 dx < ∞, we have the identity

∫_R |f(x)|^2 dx = Σ_i Σ_k Σ_n |⟨ψ_{i,k,n} | f⟩|^2,   (2.3.36)

where the triple summation in (2.3.36) is over all configurations 1 ≤ i < N, k, n ∈ Z. It is convenient to rewrite (2.3.36) in the following more compact form:

‖f‖^2 = Σ_{i,k,n} |⟨ψ_{i,k,n} | f⟩|^2.   (2.3.37)

Surprisingly, it turns out that (2.3.37) may hold even if the functions ψ_{i,k,n} of (2.3.32) do not form an orthonormal basis. It may happen that one of the initial functions ψ_1, …, or ψ_{N-1} satisfies ‖ψ_i‖ < 1, and yet that (2.3.37) holds for all f ∈ L^2(R). These more general systems are still called wavelets, but since they are special, they are referred to as tight frames, as opposed to orthonormal bases. In either case, we will talk about a wavelet expansion of the form

f = Σ_{i,k,n} ⟨ψ_{i,k,n} | f⟩ ψ_{i,k,n}.   (2.3.38)

It follows that the sum on the right-hand side in (2.3.38) converges in the norm of L^2(R) for all functions f in L^2(R) if (2.3.37) holds.
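The identity (2.3.37) is easy to verify numerically for the discrete Haar system; the sketch below is a finite, discrete stand-in for the L^2(R) statement (function names and test data are my own), confirming energy preservation under the orthonormal Haar pyramid.

```python
import numpy as np

def haar_coeffs(f):
    """Full discrete Haar analysis pyramid; returns all wavelet
    coefficients plus the final average (an orthonormal transform)."""
    f = np.asarray(f, dtype=float)
    out = []
    while len(f) > 1:
        avg = (f[0::2] + f[1::2]) / np.sqrt(2)
        det = (f[0::2] - f[1::2]) / np.sqrt(2)
        out.extend(det)
        f = avg
    out.extend(f)
    return np.array(out)

f = np.random.default_rng(1).normal(size=256)
c = haar_coeffs(f)
# Parseval: ||f||^2 equals the sum of squared coefficients, cf. (2.3.37)
assert np.isclose(np.sum(f**2), np.sum(c**2))
```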
But there is a yet more general form of wavelets, called biorthogonal. The conditions on the functions ψ_1, …, ψ_{N-1} are then much less restrictive than the orthogonality axioms. Hence these wavelets are more flexible and adapt better to a variety of applications, for example, to data compression, or to computer graphics. But the biorthogonality conditions are also a little more technical to state. We say that some given functions ψ_i, i = 1, …, N - 1, in L^2(R) are part of a biorthogonal wavelet system if there is a second system of functions ψ̃_i, i = 1, …, N - 1, in L^2(R), such that every f ∈ L^2(R) admits a representation

f = Σ_{i,k,n} ⟨ψ̃_{i,k,n} | f⟩ ψ_{i,k,n}.

In the standard normalized case, where ⟨ψ_i | ψ̃_i⟩ = 1, you will notice that condition (2.3.37) turns into

‖f‖^2 = Σ_{i,k,n} ⟨f | ψ_{i,k,n}⟩ ⟨ψ̃_{i,k,n} | f⟩

for all f ∈ L^2(R). The orthogonal wavelets correspond to matrix functions T → U_N(C), while the wider class of biorthogonal wavelets corresponds to the much bigger group of matrix functions T → GL_N(C), via the associated wavelet filters. You may ask: why bother with the more technical-looking biorthogonal systems? It turns out that they are forced on us by the engineers. They tell us that the real world is not nearly as orthogonal as the mathematicians would like to make it out to be. There is a paucity of symmetric orthogonal wavelets, and symmetry ("linear phase") is prized by engineers and workers in image processing, where the more general wavelet families and their duality play a crucial role. Now what if we could change the biorthogonal wavelets into the orthogonal ones, and still keep the essential spectral properties intact? Then everyone would be happy. This last chapter shows that this is possible, and even in a fairly algorithmic fashion, one that is amenable to computations. Wavelet filters may be understood as matrix functions, i.e., functions from the one-torus T ⊂ C into some group of invertible matrices. If the scale number is N, then there are three such matrix groups which are especially relevant for wavelet analysis: the unitary group U_N(C), the general linear group GL_N(C), and (for N = 2) the unimodular group SL_2. It is possible to reduce some questions in the GL_N case to better understood results for U_N(C); see Chapter 6 of [BrJo02b]. The SL_2 case is especially interesting in view of Daubechies-Sweldens lifting for dyadic wavelets; see § 2.2.4.

Connection between matrix functions and wavelets

Definitions: a function, or a distribution, ϕ satisfying (2.3.7) is said to be refinable; the equation (2.3.7) is called the refinement equation, or also, as noted above, the "scaling identity"; and ϕ is called the scaling function. The coefficients a_n of (2.3.7) are called the masking coefficients. We will mainly concentrate on the case when the set {a_n} is finite. But in general, a function ϕ ∈ L^2(R) is said to be refinable with scale number N if ϕ(x/N) is in the L^2-closed linear span of the translates {ϕ(x - k)}_{k∈Z} ⊂ L^2(R); see, e.g., [HSS96, SSZ99, StZh98, StZh01]. Since there are refinement operations which are more general than scaling (see for example [DLLP01]), there are variations of (2.3.7) which are correspondingly more general, with regard to both the refinement steps that are used and the dimension of the spaces. The term "scaling identity" is usually, but not always, reserved for (2.3.7), while more general refinements lead to "refinement equations". However, (2.3.7) often goes under both names. The vector versions of the identities get the prefix "multi-", for example multiscaling and multiwavelet.
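To see a refinement equation produce its scaling function, one can iterate the cascade/subdivision step starting from a point mass. A Python sketch follows, under the convention ϕ(x) = Σ_n c_n ϕ(2x - n) with Σ c_n = 2 (grid handling and names are mine), using the Daubechies four-tap masking coefficients as an example.

```python
import numpy as np

def scaling_function(c, levels=10):
    """Cascade approximation of the scaling function phi solving
    phi(x) = sum_n c[n] * phi(2x - n)  (convention: sum c = 2).
    Returns (grid, values) with values[k] ~ phi(grid[k])."""
    v = np.array([1.0])                      # start from a point mass at 0
    for _ in range(levels):
        new = np.zeros(2 * len(v) + len(c) - 2)
        for k, vk in enumerate(v):           # subdivision step
            new[2 * k : 2 * k + len(c)] += np.asarray(c) * vk
        v = new
    grid = np.arange(len(v)) / 2.0 ** levels
    return grid, v

# Daubechies D4 masking coefficients (they sum to 2)
s3 = np.sqrt(3.0)
c = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 4.0
grid, phi = scaling_function(c)              # phi supported in [0, 3]
```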
If m_0 satisfies a condition for obtaining orthogonal wavelets, (2.3.45) holds.

Multiresolution wavelets

We mentioned that there is a direct connection between m_0 = Σ a_n z^n and the scaling function ϕ on R given in (1.1.34), (2.3.7), and (2.3.44). There is a similar correspondence between the high-pass filters m_i and the wavelet generators ψ_i ∈ L^2(R). In the biorthogonal case, there is a second system m̃_i ↔ ψ̃_i, and the two systems then form a dual wavelet basis, or dual wavelet frame, for L^2(R) in the sense of [Dau92], Chapter 5. We considered this biorthogonal case in more detail in § 2.3.1 above. Much more detail can be found in Chapter 6 of [BrJo02b]. The idea of constructing maximally smooth wavelets when some side conditions are specified has been central to much of the activity in wavelet analysis and its applications since the mid-1980's. As a supplement to [Dau92], the survey article [Stra93] is enjoyable reading. The paper [LaHe96] treats the issue in a more specialized setting and is focussed on the moment method. Some of the early applications to data compression and image coding are done very nicely in [HSS+95], [SHS+99], and [HSW95]. An interesting, related but different, algebraic and geometric approach to the problem is offered in [PeWi99]. We now turn to an interesting variation of this setup, which includes higher dimensions, i.e., when the Hilbert space is L^2(R^d), d = 2, 3, …. Staying for the moment with d = 1, and N fixed, we will take the viewpoint of what is called resolutions, but here understood in the broad sense of closed subspaces: a closed linear subspace V ⊂ L^2(R) is said to be an N-resolution if it is invariant under the unitary dilation operator (Uf)(x) = N^{-1/2} f(x/N), i.e., if U maps V into a proper subspace of itself. The subspace V is said to be translation invariant if f ∈ V implies f(· - k) ∈ V for all k ∈ Z. If there is a function ϕ such that V = V_ϕ is the closed linear span of the translates of ϕ, we are in the singly generated case. Alternatively, one may start from a wavelet set E, i.e., a measurable subset of R^d such that the single function ψ = (χ_E)^∨ generates a system which is an orthonormal basis for L^2(R^d). This can be checked to be equivalent to the combined set of two tiling properties for E as a subset of R^d: (a) the family of subsets {N^j E : j ∈ Z} tiles R^d; (b) the family of translates {E + k : k ∈ Z^d} tiles R^d. We define tiling by the requirement that the sets in the family have overlap at most of measure zero relative to Lebesgue measure on R^d; similarly, the union covering R^d is understood only up to measure zero. It is easy to see that compactly supported wavelets in L^2(R^d) are MRA wavelets, while most wavelets ψ = (χ_E)^∨ from wavelet sets E are not. These wavelets are typically (but not always) frequency localized. The main difference between the GMRA wavelets (GMRA stands for generalized multiresolution analysis) and the more traditional MRA ones may be understood in terms of multiplicity. Both come from a fixed resolution subspace V_0, with the translation operators (T_n f)(x) = f(x - n) for x ∈ R^d and n ∈ Z^d (2.3.52). Hence {T_n|_{V_0}}_{n∈Z^d} is a unitary representation of Z^d on the Hilbert space V_0. As a result of Stone's theorem, we find that there are subsets E_j of T^d such that the spectral measure of the (restricted) representation has multiplicity ≥ j on the subset E_j, j = 1, 2, …. It can be checked that the projection-valued spectral measure is absolutely continuous. Moreover, there is an intertwining unitary operator for which the corresponding identity holds for all f ∈ V_0 and z ∈ E_j. We may then consider the functions ϕ_j ∈ V_0 (⊂ L^2(R^d)) defined by this decomposition for all f ∈ V_0. Treating (ϕ_1, ϕ_2, …) as a vector-valued function, denoted simply by ϕ, we see that there is a matrix function H = H(e^{it}) implementing the refinement of ϕ, where t = (t_1, …, t_d) ∈ R^d, and e^{it} := (e^{it_1}, e^{it_2}, …, e^{it_d}). But this method takes the Hilbert space L^2(R^d) as its starting point, and then proceeds to the construction of wavelet filters in the form (2.3.57). Our current joint work with Baggett, Merrill, and Packer reverses this. It begins with a matrix function H defined on T^d, and then offers subband conditions on the matrix function which allow the construction of a GMRA for L^2(R^d) with generator ϕ = (ϕ_1, ϕ_2, …) given by (2.3.57). So the Hilbert space L^2(R^d) shows up only at the end of the construction, in the conclusions of the theorems.
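Returning to the tiling characterization (a)-(b) of wavelet sets above, both properties can be checked numerically for the classical Shannon set E = [-1, -1/2) ∪ [1/2, 1) in d = 1, N = 2. A sketch (the sampling grid, index ranges, and tolerances are my choices):

```python
import numpy as np

E = [(-1.0, -0.5), (0.5, 1.0)]   # Shannon wavelet set, d = 1, N = 2

def in_E(x):
    return any(lo <= x < hi for lo, hi in E)

xs = np.linspace(-8, 8, 20001)
xs = xs[np.abs(xs) > 1e-9]       # the dilation family can never cover 0

# (a) {2^j E : j in Z} tiles R: each x != 0 lies in exactly one piece
assert all(sum(in_E(x / 2.0 ** j) for j in range(-30, 31)) == 1 for x in xs)
# (b) {E + k : k in Z} tiles R: each x lies in exactly one translate
assert all(sum(in_E(x - k) for k in range(-10, 11)) == 1 for x in xs)
```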
Matrix completion

In using the polyphase matrices, one may have only the first few rows, and be faced with the problem of completing them to get the entire function A from a torus into the matrices of the desired size. The case when only the first row is given, say corresponding to a specified low-pass filter, is treated in [BrJo02b] and [BJMP04]. The unitary extension principle (UEP) of [DHRS03] involves the interplay between a finite set of filters (functions on R/Z) and a corresponding tight frame (alias Parseval frame) in L^2(R^d). For the sake of illustration, let us take d = 1 and scaling number N = 2, i.e., the case of dyadic framelets. Naturally, the notion of tight frame is weaker than that of an orthonormal basis (ONB), and it is shown in [DHRS03] that when a system of wavelet filters m_i, i = 0, 1, …, r is given (m_0 must be low-pass), then the orthogonality condition on the m_i's always gets us a framelet in L^2(R): the functions ψ_i corresponding to the high-pass filters m_i, i = 1, …, r, generate a tight frame for L^2(R), also called a framelet. The correspondence m_i → ψ_i is called the UEP in [DHRS03]. The orthogonality condition for m_i, i = 0, 1, …, r, referred to in the UEP is simply this: form an (r + 1)-by-2 matrix-valued function F(x) by using m_i(x), i = 0, 1, …, r, in the first column, and the translates of the m_i's by a half period, i.e., m_i(x + 1/2), i = 0, 1, …, r, in the second. The condition on this matrix function F(x) is that the two columns are orthogonal and have unit norm in ℓ^2 for all x. Note that we still get the unitary matrix functions acting on these systems, in the way we outlined above. But there is redundancy, as the unitary matrices are (r + 1)-by-(r + 1). The reader is referred to [DHRS03] for further details. We emphasize that several of these, and other related topics, invite the kind of probabilistic tools that we have stressed here. But a more systematic discussion is outside the scope of this brief set of notes. We only hope to offer a modest introduction to a variety of more specialized topics.

Remark 2.3.4.1: The orthogonality condition for m_i, i = 0, 1, …, r, may be stated in terms of the operators S_i from equation (2.9), N = 2. For each i = 0, 1, …, r, define an operator on L^2(R/Z) as in (2.9). Then the arguments from Section 2 show that the orthogonality condition for m_i, i = 0, 1, …, r, i.e., the UEP condition, is equivalent to the operator identity (2.8), where the summation now runs from 0 to r. Operator systems S_i satisfying (2.8) are called row isometries.

Remark 2.3.4.2: There are two properties of the low-pass filter m_0 which we have glossed over. First, m_0 must be such that the corresponding scaling function ϕ is in L^2(R); without an added condition on m_0, ϕ might only be a distribution. Secondly, when the dyadic scaling in L^2(R) is restricted to the resolution subspace V_0(ϕ), the corresponding unitary part must be zero. These two issues are addressed in [BJMP03], [BJMP04], and [DHRS03].
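The UEP column condition is straightforward to test for concrete trigonometric-polynomial filters. A sketch follows (helper names are my own; filters are given by their Fourier coefficients, period 1), verified here on the Haar pair.

```python
import numpy as np

def m(coeffs, x):
    """Trigonometric polynomial m(x) = sum_k coeffs[k] e^{2 pi i k x}."""
    k = np.arange(len(coeffs))
    return np.sum(np.asarray(coeffs) * np.exp(2j * np.pi * k * x))

def uep_holds(filters, n_grid=512, tol=1e-9):
    """Check that F(x) = [m_i(x) | m_i(x + 1/2)] has orthonormal
    columns (F* F = identity) on a grid of x values."""
    for x in np.linspace(0.0, 1.0, n_grid, endpoint=False):
        F = np.array([[m(c, x), m(c, x + 0.5)] for c in filters])
        if not np.allclose(F.conj().T @ F, np.eye(2), atol=tol):
            return False
    return True

haar = [np.array([0.5, 0.5]),    # m_0, low-pass
        np.array([0.5, -0.5])]   # m_1, high-pass
assert uep_holds(haar)
```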
Connections between matrix functions and signal processing

Since our joint work with Baggett, Merrill, and Packer on the GMRA wavelets is still in progress, we restrict the discussion of matrix functions here to the MRA case. In particular, (2.3.59) applies. The dependence of the L^2(R)-functions in (iii) on the group elements A from (i) gives rise to homotopy properties. The standard orthogonal wavelets represent the special case when m_i = m̃_i, or equivalently, A(z) = (A(z)*)^{-1}, z ∈ T. Hence, the matrix functions are unitary in this case. The scaling/wavelet functions ϕ, ψ_1, …, ψ_{N-1} with support in a fixed compact interval, say [0, kN + 1], k = 0, 1, …, can be parameterized with a finite number of parameters, since the unitary-valued function z ↦ A(z) in (2.3.58) is then a polynomial in z of degree at most k(N - 1). It is well-known folklore from computer-generated pictures that the shape of the scaling/wavelet functions depends continuously on these parameters; see Figures 1.1-1.7 in [BrJo02b] and [Tre01]. In (2.3.66), A is the 2 × 3 matrix A_{i,j} = √2 a_{4+i-2j} constructed from the coefficients in (2.3.7), and f_j and f'_j are the vector functions (2.3.67). The signal processing aspect can be understood from the description of subband filters in the analysis and synthesis of time signals, or more general signals for images. In either case, we have two subband systems. Originally we had anticipated adding two more chapters to these tutorials, but time and space prevented this. Instead we include the table of contents for this additional material. The details for the remaining chapters will be published elsewhere. But as the items in the list of contents suggest, there are still many exciting open problems in the subject that the reader may wish to pursue on his/her own. We feel that the following list of topics offers at least an outline of several directions that the reader could take in his/her own study and research on wavelet-related mathematics. Given a set D ⊂ R^n of finite positive Lebesgue measure and a discrete set L such that the exponentials indexed by L form an orthogonal basis for L^2(D), we say that (D, L) is a spectral pair. We recall from [JoPe99] that if D is an n-cube, then the sets L in (1) are precisely the sets T in (2). This begins with work of Jorgensen and Steen Pedersen [JoPe99], where the admissible sets L = T are characterized. Later it was shown, in [IoPe98] and [LRW00], that the identity T = L holds for all n. The proofs are based on general Fourier duality, but they do not reveal the nature of this common set L = T. A complete list is known only for n = 1, 2, and 3; see [JoPe99]. We then turn to the scaling IFS's built from the n-cube with a given expansive integral matrix A. Each A gives rise to a fractal in the small, and a dual discrete iteration in the large. In a different paper [JoPe98], Jorgensen and Pedersen characterize those IFS fractal limits which admit Fourier duality. The surprise is that there is a rich class of fractals that do have Fourier duality, but the middle-third Cantor set does not. We say that an affine IFS, built on affine maps in R^n defined by a given expansive integral matrix A and a finite set of translation vectors, admits Fourier duality if the set of points L, arising from the iteration of the A-affine maps in the large, forms an orthonormal Fourier basis (ONB) for the corresponding fractal µ in the small, i.e., for the iteration limit built using the inverse contractive maps, i.e., iterations of the dual affine system on the inverse matrix A^{-1}.
By "fractal in the small", we mean the Hutchinson measure µ and its compact support, see [Hut81]. (The best known example of this is the middle-third Cantor set, and the measure µ whose distribution function is corresponding Devil's staircase.) In other words, the condition is that the complex exponentials indexed by L form an ONB for L 2 (µ). Such duality systems are indexed by complex Hadamard matrices H, see [JoPe99] and [JoPe98]; and the duality issue is connected to the spectral theory of an associated Ruelle transfer operator, see [BrJo02b]. These matrices H are the same Hadamard matrices which index a certain family of quasiperiodic spectral pairs (D, L) studied in [Jor82] and [JoPe92]. They also are used in a recent construction of Terence Tao [Tao04] of a Euclidean spectral pair (D, L) in R 5 for which D does not a tile R 5 with any set of translation vectors T in R 5 ; see also [IKT03]. We finally report on joint research with Dorin Dutkay [DuJo03], [DuJo04a], [DuJo04b], [DuJo04c] where we show that all the affine IFS's, and more general limit systems from dynamics and probability theory, admit wavelet constructions, i.e., admit orthonormal bases of wavelet functions in Hilbert spaces which are constructed directly from the geometric data. A substantial part of the picture involves the construction of limit sets and limit measures, a part of geometric measure theory.
Proximal immune-epithelial progenitor interactions drive chronic tissue sequelae post COVID-19

The long-term physiological consequences of SARS-CoV-2, termed Post-Acute Sequelae of COVID-19 (PASC), are rapidly evolving into a major public health concern. The underlying cellular and molecular etiology remains poorly defined, but growing evidence links PASC to abnormal immune responses and/or poor organ recovery post-infection. Yet, the precise mechanisms driving non-resolving inflammation and impaired tissue repair in the context of PASC remain unclear. With insights from three independent clinical cohorts of PASC patients with abnormal lung function and/or viral infection-mediated pulmonary fibrosis, we established a clinically relevant mouse model of post-viral lung sequelae to investigate the pathophysiology of respiratory PASC. By employing a combination of spatial transcriptomics and imaging, we identified dysregulated proximal interactions between immune cells and epithelial progenitors unique to the fibroproliferation in respiratory PASC, but not acute COVID-19 or idiopathic pulmonary fibrosis (IPF). Specifically, we found a central role for lung-resident CD8+ T cell-macrophage interactions in maintaining Krt8hi transitional and ectopic Krt5+ basal cell progenitors, thus impairing alveolar regeneration and driving fibrotic sequelae after acute viral pneumonia. Mechanistically, CD8+ T cell-derived IFN-γ and TNF stimulated lung macrophages to chronically release IL-1β, resulting in the abnormal accumulation of dysplastic epithelial progenitors and fibrosis. Notably, therapeutic neutralization of IFN-γ and TNF, or of IL-1β, after the resolution of acute infection resulted in markedly improved alveolar regeneration and restoration of pulmonary function. Together, our findings implicate a dysregulated immune-epithelial progenitor niche in driving respiratory PASC. Moreover, in contrast to other approaches requiring early intervention, we highlight therapeutic strategies to rescue fibrotic disease in the aftermath of respiratory viral infections, addressing the current unmet need in the clinical management of PASC and post-viral disease.

INTRODUCTION

SARS-CoV-2 infection can lead to long-term pulmonary and extrapulmonary symptoms well beyond the resolution of acute disease, a condition collectively termed post-acute sequelae of SARS-CoV-2 (PASC) (1,2). With effective treatment strategies and vaccines to tackle acute COVID-19, the emerging challenge is to manage chronic sequelae in the 60+ million people currently experiencing PASC (3,4). Given the extensive damage to the respiratory tract during primary infection, the lungs are particularly susceptible to sustained impairments including dyspnea, compromised lung function, and radiological abnormalities, which persist up to 2 years post infection, in contrast to the majority of extrapulmonary sequelae (1,2,4). Some individuals also develop a non-resolving fibroproliferative response, PASC pulmonary fibrosis (PASC-PF), and typically require persistent oxygen supplementation and eventual lung transplantation (4-8).
Currently, the mechanisms underlying the maintenance of these dysplastic progenitors and their contributions to post-viral respiratory sequelae remain largely elusive. By comparing the pathological, immunological, and molecular features of human respiratory PASC and mouse models of post-viral lung sequelae, here we have discovered that spatially defined interactions among CD8+ T cells, macrophages, and epithelial progenitors drive chronic tissue sequelae after acute viral injury. Furthermore, we identify nodes for therapeutic intervention, which may be adopted in the clinic to mitigate chronic pulmonary sequelae after COVID-19.

Ethics and biosafety

All aspects of this study were approved by the Institutional Review Board Committee at Cedars-Sinai Medical Center (IRB# Pro00035409) and the University of Virginia (IRB# 13166). Work related to SARS-CoV-2 was performed in animal biosafety level 3 (ABSL-3) facilities at the University of Virginia, and influenza-related experiments were performed in animal biosafety level 2 (ABSL-2) facilities at the Mayo Clinic and the University of Virginia.

Cells, viruses, and mice

The African green monkey kidney cell line Vero E6 (ATCC CRL-1587) was maintained in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS), along with 1% penicillin-streptomycin (P/S) and L-glutamine, at 37°C in 5% CO2. The SARS-CoV-2 mouse-adapted strain (MA-10) was kindly provided by Dr. Barbara J. Mann (University of Virginia School of Medicine). The virus was passaged in Vero E6 cells, and the titer was determined by plaque assay using Vero E6 cells.

Aged mice were received at 20 to 21 months of age from the National Institute on Aging, and all mice were maintained in the facility for at least 1 month before infection. All mice were housed in a specific pathogen-free environment and used under conditions fully reviewed and approved by the institutional animal care and use committee guidelines at the Mayo Clinic (Rochester, MN) and the University of Virginia (Charlottesville, VA).

For primary influenza virus infection, the influenza A/PR8/34 strain [75 plaque-forming units (PFU) per mouse] was diluted in fetal bovine serum (FBS)-free Dulbecco's modified Eagle's medium (DMEM) (Corning) on ice and inoculated into anesthetized mice through the intranasal route as described previously (34). For SARS-CoV-2 infections, mice were infected intranasally under anesthesia with mouse-adapted (MA-10) virus, 5 x 10^4 PFU for C57BL/6 and 1000 PFU for BALB/c, as described previously. Infected mice were monitored daily for weight loss and clinical signs of disease for 2 weeks, and once a week thereafter for the duration of the experiments.

Mice recorded as "dead" in mortality-rate calculations were either found dead in the cage or euthanized upon reaching 70% of their starting body weight, the defined humane endpoint in accordance with the respective institutional animal protocols. At the designated endpoint, mice were humanely euthanized by ketamine/xylazine overdose and subsequent cervical dislocation.
Evaluation of respiratory mechanics and lung function

Lung function measurements using FOT and the resulting parameters have been previously described (35). In brief, animals were anesthetized with an overdose of ketamine/xylazine (100 and 10 mg/kg intraperitoneally) and tracheostomized with a blunt 18-gauge cannula (typical resistance of 0.18 cmH2O·s/mL), which was secured in place with a nylon suture. Animals were connected to the computer-controlled piston (SCIREQ flexiVent), and forced oscillation mechanics were performed under tidal breathing conditions as described in (35), with a positive end-expiratory pressure of 3 cmH2O. The measurements were repeated following thorough recruitment of closed airways (two maneuvers rapidly delivering TLC of air and sustaining the required pressure for several seconds, mimicking the holding of a deep breath). Each animal's basal conditions were normalized to its own maximal capacity. Measurement of these parameters before and after lung inflation allows for determination of large and small airway dysfunction under tidal (baseline) breathing conditions. Only measurements that satisfied the constant-phase model fits were used (>90% threshold determined by software). After this procedure, mice had a heart rate of ~60 beats per minute, indicating that measurements were done on live individuals.

Human lung tissue specimens

Human lung samples were obtained from patients enrolled in the IRB-approved Lung Institute BioBank (LIBB) study at Cedars-Sinai Medical Center, Los Angeles, CA. All participants or their legal representatives provided informed written consent. Lung tissues were processed within 24 hours after surgical removal. Specifically, the lung tissues were cut and immediately fixed in 10% normal-buffered formalin for 24 hours before tissue processing using the HistoCore PEARL Tissue Processor (Leica Biosystems, Deer Park, IL) and embedded in paraffin for histological studies. The formalin-fixed paraffin-embedded cassettes were stored at room temperature until further sectioning.

Mouse tissue processing and flow cytometric analysis

Animals were injected intravenously with 4 μg of CD45 or 2 μg of CD90.2 Ab labeled with various fluorochromes. Two minutes after injection, animals were euthanized with an overdose of ketamine/xylazine and processed 3 min later. After euthanasia, the right ventricle of the heart was gently perfused with chilled 1X PBS (10 mL). Right lobes of the lungs were collected in 5 mL of digestion buffer (90% DMEM and 10% PBS with calcium, with type 2 collagenase (180 U/mL) (Worthington) and DNase (15 μg/mL) (Sigma-Aldrich) additives). Tissues were digested at 37°C for 1 hour, followed by disruption using a gentleMACS tissue dissociator (Miltenyi). Single-cell suspensions were obtained by hypotonic lysis of red blood cells in ammonium-chloride-potassium buffer and filtration through a 70 μm mesh. Cells were washed with FACS buffer (2% FBS and 0.1% NaN3 in PBS) and Fc receptors were blocked with anti-CD16/32 (2.4G2).

Surface staining was performed by incubation with antibodies (details provided in Supplementary Table 2) for 30 min at 4°C in the dark. After a PBS wash, cells were resuspended in Zombie dye (Biolegend) and incubated at RT for 15 min. For IL-1β staining, cells were incubated with monensin (Biolegend) for 5 hours at 37°C and then stained with surface marker antibodies.
After washing with FACS buffer, cells were fixed with fixation buffer (Biolegend) and permeabilized with intracellular staining permeabilization wash buffer (Biolegend). The cells were then stained with anti-IL-1β at RT for 1 hr, and samples were acquired on an Attune NxT (Life Technologies). The data were analyzed with FlowJo software (Tree Star).

Caspase-1 FLICA analysis

Caspase-1 was detected with the FAM-FLICA Caspase-1 Assay Kit (ImmunoChemistry Technologies) according to the manufacturer's instructions. Briefly, lung single cells or macrophages from in vitro culture were stained with a fluorochrome-conjugated Ab cocktail for cell surface markers. After staining, cells were incubated with FLICA for 30 min at 37°C, washed, and analyzed by flow cytometry.

Cell isolation and ex vivo co-culture

To isolate myeloid and CD8+ T cells, single-cell suspensions of the lung were generated from influenza-infected mice as described above, and labeled and enriched with CD11c and CD11b or CD8 microbeads (Miltenyi Biotec, Auburn, CA) according to the manufacturer's instructions. Purified myeloid cells were seeded in 96-well plates (200,000 per well) and incubated for 2 hours at 37°C, 5% CO2 to facilitate attachment. Wells were washed with 1X PBS to select for macrophages, following which selected wells were seeded with naïve or memory CD8+ T cells (40,000 per well) and further incubated for 16-18 hours. Supernatants and cell pellets were collected for cytokine measurement and gene expression analyses, respectively.

AT2 cell isolation and culture

AT2 cells were isolated from naïve mice as previously described (26,36,37). Briefly, mouse lungs were perfused with chilled PBS and intratracheally instilled with 1 mL of dispase II (15 U/mL, Roche), tying off the trachea and cutting away the lobes from the mainstem bronchi. Lungs were incubated in 4 mL of 15 U/mL dispase II for 45 min while shaking at room temperature, followed by mechanical dissociation with an 18G needle. Following passage through a 100 μm filter, lungs underwent 10 min of DNase I digestion (50 μg/mL) and were filtered through a 70 μm filter prior to RBC lysis. Single-cell suspensions were subjected to CD45 depletion using microbeads (Miltenyi), incubated with anti-FcgRIII/II (Fc block), and stained with CD45, EpCAM, MHC-II, and viability dye (see antibody details in Supplementary Table 2). Fluorescence-assisted cell sorting was performed on the BD Influx cell sorter to isolate AT2 cells as described previously (36), and cells were collected in 500 μL DMEM + 40% FBS + 2% P/S. Sorted AT2 cells (2x10^5/well) were plated in a 96-well plate in DMEM/F12 + 10% FBS and cultured at 37°C, 5% CO2 for 3 days prior to harvest.

RNA isolation and real-time quantitative polymerase chain reaction (RT-qPCR)

Cells were lysed in Buffer RLT and RNA was purified using the RNeasy Plus Mini Kit (Qiagen) according to the manufacturer's instructions. Random primers (Invitrogen) and MMLV reverse transcriptase (Invitrogen) were used to synthesize first-strand complementary DNAs (cDNAs) from equivalent amounts of RNA from each sample. cDNA was used for real-time PCR with Fast SYBR Green PCR Master Mix (Applied Biosystems). Real-time PCR was conducted on a QuantStudio 3 (Applied Biosystems). Data were generated with the comparative threshold cycle (ΔCt) method by normalizing to hypoxanthine-guanine phosphoribosyltransferase (HPRT) transcripts in each sample, as reported previously (38).
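For reference, the comparative-Ct quantification named above amounts to one line of arithmetic. A hypothetical sketch (the Ct values are illustrative, not data from the study):

```python
import numpy as np

def relative_expression(ct_target, ct_hprt):
    """Comparative threshold cycle (delta-Ct) method: target gene
    expression normalized to the HPRT housekeeping gene, 2^-(dCt)."""
    dct = np.asarray(ct_target, float) - np.asarray(ct_hprt, float)
    return 2.0 ** (-dct)

# hypothetical triplicate Ct values for one sample
print(relative_expression([24.1, 24.3, 23.9], [20.0, 20.1, 19.8]))
```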
COVID-19 convalescent cohort

Peripheral blood specimens were obtained from patients presenting to the University of Virginia Post-COVID-19 clinic. All participants provided written informed consent. Pulmonary function testing (PFT: spirometry, lung volume testing, diffusing capacity of the lungs for carbon monoxide [DLCO]) at the time of blood draws was used to define normal and abnormal lung function.

IL-1β cytokine evaluation

Plasma from COVID-19 convalescents, supernatant from ex vivo culture, and murine BAL fluid obtained by flushing the airway 3X with 600 μL of sterile PBS were used to quantify IL-1β levels by ELISA as per the manufacturer's instructions (R&D Systems). Samples were first concentrated 5X (BAL) and 2X (cell culture supernatant) using Microcon-10kDa centrifugal filters (Millipore Sigma).

Immunofluorescence

Mouse lung tissues were routinely perfused with ice-cold 1X PBS, fixed with 10% formalin, and embedded in paraffin. Lung tissue sections (5 μm) were deparaffinized in xylene and rehydrated. Heat-induced antigen retrieval was performed using 1X Agilent Dako target retrieval solution (pH 9) in a steamer for 20 min (mouse lungs) or 45 min (human lungs), followed by blocking and surface staining. For intracellular targets, tissues were permeabilized with 0.5% Triton-X 0.05% Tween20 for 1 hour at room temperature. Sections were stained with primary antibodies as listed in Supplementary Table 2 overnight at 4°C. Subsequently, samples were washed and incubated with fluorescent secondary antibodies as listed in Supplementary Table 2 for 2 hours at room temperature. Sections were counterstained with DAPI (1:1000, ThermoFisher Scientific) for 3 minutes and mounted using ProLong Diamond Antifade mountant (ThermoFisher Scientific). After 24 hours of curing at room temperature, images were acquired using the Olympus BX63 fluorescent microscope, and pseudocolours were assigned for visualization. For each lung section, images were taken in at least 10-12 random areas in the distal lung. All images were further processed using ImageJ Fiji, OlyVIA, and/or QuPath software.

Spatial transcriptomics analyses

Spatial transcriptomics (ST) data (generated on the 10X Visium platform) were pre-processed using the spaceranger package (v2.0, genome version mm10). The R package Seurat (v4.3.0) was used for quality control (QC) and preliminary analysis. Only high-quality spots with sufficient gene coverage (>=2000) were retained for downstream analysis. As with scRNA-seq data, the spatial expression data were first normalized to a log scale using the SCTransform method (v0.3.5). The top 2000 highly variable genes were then identified based on the variance of expression across spots as input for principal component analysis (PCA). UMAP embeddings were generated for visualization at the reduced dimensionality (top 30 PCs). Spots were clustered with a shared nearest neighbor (SNN) modularity optimization-based clustering algorithm (built into Seurat). Samples were integrated and batch effects removed using the Harmony package (v0.1). In addition to the spatial spot view of the expression pattern, we used UMAP to visualize the expression pattern of spots associated with the identified cell types in a lower dimension, using the visualization pipeline in the Seurat package. We determined each spot's potential cell type composition based on the following protocol:

1. We defined AE spots based on k-means clustering analysis (using knowledge-based key genes from different cell types).
2. For non-AE spots, we used the average expression of Krt5, Krt8, Krt17, Cldn4, and Trp53 to estimate a Krt score, and, similarly, the average expression of Cd8a, Cd8b1, Itgae, Trbc1, Trbc2, Sell, Ccl5, Cd69, and Cd3d for a Cd8 score. We set -0.4 as the cutoff for both the Cd8 and Krt scores: spots with a Krt/Cd8 score > -0.4 were assigned as Krt/Cd8-associated spots and coloured blue/red, respectively, in the spatial map. In this way, a spot can be a Krt spot and a Cd8 spot at the same time; we used purple to represent this category. For a given gene list, we captured the expression of the relevant genes in Krt spots and AE spots and compared the distributions of the expression index. The significance of the difference was estimated by the Wilcoxon rank sum test. The p-values were then log10-transformed and displayed as bar plots.

Statistical analyses

Quantitative data are presented as means ± SEM. The unpaired Student's t test (two-tailed, unequal variance) was used to determine statistical significance with Prism software (GraphPad) for two-group comparisons. For multiple groups, analysis of variance (ANOVA) corrected for multiple comparisons was used when appropriate (GraphPad). The log-rank (Mantel-Cox) test was used for survival curve comparisons, and multiple t tests were used to analyze differences in weight loss. We considered P < 0.05 as significant in all statistical tests, denoted within figures with asterisks, one * per order of magnitude of the P value.
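The spot-classification rule and group comparison above translate directly into code. The sketch below is a Python approximation of the final steps of the Seurat/R workflow described above (the data frame layout, helper names, and pandas/scipy calls are my choices; the gene lists and the -0.4 cutoff are the paper's).

```python
import numpy as np
import pandas as pd
from scipy.stats import ranksums

KRT = ["Krt5", "Krt8", "Krt17", "Cldn4", "Trp53"]
CD8 = ["Cd8a", "Cd8b1", "Itgae", "Trbc1", "Trbc2",
       "Sell", "Ccl5", "Cd69", "Cd3d"]
CUTOFF = -0.4

def classify_spots(expr: pd.DataFrame) -> pd.DataFrame:
    """expr: spots x genes matrix of normalized (scaled) expression.
    A spot may be Krt- and Cd8-associated at once (purple in the maps)."""
    out = pd.DataFrame(index=expr.index)
    out["krt_score"] = expr[KRT].mean(axis=1)
    out["cd8_score"] = expr[CD8].mean(axis=1)
    out["krt_spot"] = out["krt_score"] > CUTOFF
    out["cd8_spot"] = out["cd8_score"] > CUTOFF
    return out

def neglog10_p(values, in_group_a, in_group_b):
    """Wilcoxon rank-sum test between two spot groups (e.g., a gene's
    expression in Krt spots vs. AE spots); returns -log10 p for bars."""
    stat, p = ranksums(values[in_group_a], values[in_group_b])
    return -np.log10(p)
```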
Spatial association between dysplastic epithelial progenitors and CD8+ T cells is a hallmark of human PASC-PF.

We examined diseased lung sections from a cohort of PASC-PF patients who underwent lung transplantation at Cedars-Sinai Medical Center. Patients had a mean age of 52.5 years and exhibited persistent pulmonary impairment and hypoxemia, requiring oxygen supplementation (Supplementary Table 1). Lung histology was notable for extensive immune cell infiltration and collagen deposition in the alveolar epithelium (Fig. 1a). Consistent with the observed fibrotic sequelae, we found reduced levels of AT1 and AT2 cells in PASC-PF lungs compared to controls, suggesting a persistent defect in alveolar regeneration (Fig. 1b,c) (39). We also observed chronic persistence of ectopic Krt5+ basal cells and Krt8hi transitional cells in PASC-PF lungs compared to controls, which is concordant with recent reports (5,8,30,40) (Fig. 1d,e). PASC-PF lungs also harbored widespread expression of alpha smooth muscle actin (αSMA), indicative of myofibroblast activity, in addition to pockets of Krt5-Krt17+ aberrant basaloid cells previously found in IPF lungs (Extended data Fig. 1a) (41,42). Therefore, PASC-PF is characterized by the sustained loss of functional alveolar epithelial cells and the persistence of dysplastic Krt5+ pods and Krt8hi transitional cells, which is histologically akin to other fibrotic lung diseases such as idiopathic pulmonary fibrosis (IPF) (28,29).

Previously, we reported that increased CD8+ T cell levels in the bronchoalveolar lavage (BAL) fluid were associated with impaired lung function in COVID-19 convalescents (9,10). In accordance, CD8+ T cell numbers were elevated in PASC-PF patient lungs (Fig. 1f). Moreover, CD8+ T cell abundance was significantly higher in acute COVID-19, but not in IPF lungs, when compared to controls (Fig. 1g,h). Interestingly, a striking spatial association was observed between CD8+ T cells and Krt8hi and Krt5+ areas representing dysplastic repair upon analysis of their distribution in PASC-PF lungs (Fig. 1i,j, Extended data Fig. 1b,c,h). Notably, this correlation between CD8+ T cells and Krt8hi and Krt5+ dysplastic areas was unique to PASC-PF lungs and was not seen in lungs from control, acute COVID-19, or IPF conditions (Fig. 1k-m, Extended data Fig. 1d-g,i,j). Collectively, these data indicate that the spatiotemporal colocalization of CD8+ T cells and areas of dysplastic repair is a unique feature of post-viral pulmonary fibrosis and supports immune-epithelial progenitor interactions potentially contributing to the observed defects in alveolar regeneration and chronic pulmonary sequelae.

A mouse model of post-viral lung sequelae recapitulating features of human PASC-PF.

To investigate the role of immune-epithelial progenitor interactions, we aimed to develop a mouse model that captures the cellular and pathological features observed in PASC-PF lungs. We used a mouse-adapted (MA-10) strain of the SARS-CoV-2 virus to productively infect WT mice. Notably, SARS-CoV-2 MA-10 infection is known to induce acute lung disease and pneumonia in mice, characterized by substantial damage to the airway epithelium, fibrin deposition, and pulmonary edema (43). Since aging is associated with an increased propensity to develop lung fibrosis post viral injury, as well as severe disease after SARS-CoV-2 infection in mice, we included both young and aged mice in our study (34,44,45). As expected, aged C57BL/6 mice infected with SARS-CoV-2 MA-10 had increased morbidity and mortality compared to young mice (Fig. 2a, Extended data Fig. 2a). Indeed, we observed marked inflammation and tissue damage acutely (at 10 days post infection (dpi)) (Extended data Fig. 2b) (43,46). However, irrespective of age, the majority of the lungs recovered from the acute damage, and only moderate pathology, primarily restricted to the subpleural regions, was observed at the chronic phase (35 dpi) of infection (Fig. 2b).

Next, we tested SARS-CoV-2 MA-10 infection in aged BALB/c mice, which was previously reported to induce more robust inflammation and tissue sequelae compared to C57BL/6 mice (44) (Fig. 2c, Extended data Fig. 2e). Consistent with that report, we observed persistent immune cell infiltration, which was largely restricted to the peri-bronchiolar regions at 35 dpi (Fig. 2d). However, similar to the aged C57BL/6 mice, minimal signs of collagen deposition were observed at later time points in the alveolar epithelium (Fig. 2d). Consistent with what was previously reported (44), BALB/c mice failed to maintain pulmonary CD8+ T cells (Extended data Fig. 2c,f), a prominent feature of human respiratory sequelae (Fig. 1d). Moreover, no significant difference was observed in the development and persistence of Krt8hi transitional cells between aged naïve and infected (35 dpi) mice of either genetic background, in spite of substantial alveolar damage during acute disease (Fig. 2e,f, Extended data Fig. 2g-i). Thus, SARS-CoV-2 infection in these two mouse models failed to recapitulate key features of tissue pathology and dysplastic lung repair observed in human PASC-PF.

Previously, we and others reported persistent lung inflammation and tissue pathology after influenza viral infection, particularly in aged C57BL/6 mice (9,34,47). Therefore, we infected young and aged WT C57BL/6 mice with the influenza H1N1 A/PR8/34 strain, which causes substantial viral pneumonia and alveolar damage during acute disease (34,48). Similar to SARS-CoV-2 infection, aged mice exhibited increased morbidity and mortality post influenza infection (Fig. 2g, Extended data Fig. 2j).
In contrast to SARS-CoV-2 infection, however, we observed persistent immune cell infiltration and collagen deposition in the alveolar epithelium, particularly in aged mice, that persisted to 60 dpi (Fig. 2h). Moreover, lungs from aged mice harbored significantly larger Krt5+ and Krt8hi areas of dysplastic repair, as well as higher levels of CD8+ T cells, post influenza viral pneumonia, similar to human PASC-PF lungs (Fig. 2i, 3b,c). Of note, we observed a sustained defect in pulmonary function in aged mice up to 60 dpi following influenza viral pneumonia, mimicking the impaired lung function observed in patients with severe pulmonary sequelae after acute COVID-19 (Fig. 2j,k).

Exuberant tissue CD8+ T cell responses impair alveolar regeneration and promote dysplastic lung repair following viral pneumonia.

To further investigate the dynamics of the induction of lung fibrosis, we infected young and aged mice with influenza virus and characterized the immune and epithelial progenitor compartments over time (Fig. 3a, Extended data Fig. 3a). The induction of epithelial progenitor activity was comparable in young and aged mice during the acute phase of infection (up to 14 dpi) but diverged at later timepoints, with increased Krt5+ and Krt8hi progenitors in aged mice (Fig. 3a-c). These trends correlated with a persistent age-associated defect in alveolar regeneration, exemplified by a sustained reduction in AT2 cell numbers (Extended data Fig. 3b,c).

Consistent with our previous study, lungs from aged mice harbored significantly higher levels of CD8+ T cells post influenza infection compared to those of young mice (Fig. 3d) (9,34). Similar to human PASC-PF lungs, we also observed a spatial association between CD8+ T cells and Krt8hi areas of dysplastic repair, reinforcing the relevance of this model for studying chronic pulmonary sequelae (Fig. 3e, 1i). Furthermore, the association between CD8+ T cells and Krt8hi areas was seen only at post-acute timepoints and strengthened over time, recapitulating features of human lungs after severe SARS-CoV-2 infection and suggesting that these immune-epithelial progenitor interactions are primarily a feature of chronic sequelae of viral infections (Fig. 3f).

To understand the role of the persistent CD8+ T cells at this stage, we treated aged influenza-infected mice with CD8+ T cell-depleting Ab (αCD8) or isotype control Ab starting from 21 dpi (Fig. 3g). This post-acute timepoint was chosen to ensure no interference with the essential antiviral activities of CD8+ T cells during acute infection, as the virus is completely cleared by 15 dpi in this model (34). CD8+ T cell depletion improved histological evidence of disease in aged but not young mice (Fig. 3h, Extended data Fig. 3d). Importantly, Krt5+ and Krt8hi areas were significantly reduced by depletion of CD8+ T cells (Fig. 3i,j), suggesting that CD8+ T cells are essential for the maintenance of dysplastic repair areas after recovery from acute disease. Interestingly, AT1 and AT2 cells were markedly increased, and the alveolar architecture was restored, after CD8+ T cell depletion (Fig. 3k,l). We also observed a concomitant improvement in lung function after αCD8 treatment, suggesting that exuberant CD8+ T cell responses in the aftermath of acute disease affected the restoration of the alveolar spaces, resulting in chronic impairment of alveolar gas-exchange function (Fig. 3m,n).
Whether the observed decrease in Krt5+ and Krt8hi cells post CD8+ T cell depletion is a result of complete differentiation into mature alveolar cell types or of apoptosis of the progenitors remains to be elucidated using lineage-tracing studies, which are logistically challenging to perform in aged mice. Moreover, since the expansion of ectopic Krt5+ basal cells and Krt8hi transitional cells was observed in the distal lung as early as 9 dpi, it is unlikely that the depletion of CD8+ T cells from 21 dpi onwards affected the induction of the dysplastic repair program. Notably, depletion of CD8+ T cells resolved Krt5+ but not Krt8hi areas in young mice and did not dramatically affect lung pathology or alveolar regeneration (Extended data Fig. 3d-i), suggesting that CD8+ T cells may specifically influence age-associated dysplastic lung repair.

To dissect the roles of circulating CD8+ T cells and lung-resident memory CD8+ T cells in impairing alveolar regeneration, we used low- and high-dose αCD8 treatment to deplete circulating and pulmonary CD8+ T cells, respectively, in aged influenza-infected mice (34,49). We found that the resolution of areas of dysplastic repair occurs only upon depletion of pulmonary CD8+ T cells, but not circulating CD8+ T cells (Extended data Fig. 3j,k) (34), suggesting that lung-resident CD8+ T cells are required for the maintenance of dysplastic lung repair. Taken together, our results strongly implicate persistent tissue CD8+ T cell responses in the development of chronic pulmonary sequelae post viral pneumonia in aged animals.

Spatial transcriptomics reveal proximal interactions between CD8+ T cells, macrophages, and Krt5+- and Krt8hi-rich areas of dysplastic repair.

Given the heterogeneous distribution of tissue pathology and the importance of capturing the spatially confined interactions between immune and epithelial progenitor cells within areas of dysplastic repair, we performed spatial transcriptomics on aged influenza-infected mouse lungs (60 dpi) treated with control Ab or αCD8 (Fig. 4a). Following UMAP visualization and clustering of the capture spots (Extended data Fig. 4a,b), we observed a strong association between gene expression signatures of CD8+ T cells and Krt5+ and Krt8hi areas of dysplastic repair, similar to the immunostaining results (Fig. 4b, 3e). Consistent with the previous data, gene expression signatures of Krt5+ and Krt8hi progenitors were dramatically reduced in αCD8-treated lungs, with a concomitant increase in healthy alveolar epithelial cells (Fig. 4b, 3i-k, Extended data Fig. 4d,e). This also corresponded with the expression pattern of fibrosis, with the strongest enrichment within Krt5+ and Krt8hi areas and a similar reduction in αCD8-treated lungs (Extended data Fig. 4c). We further performed an agnostic evaluation of signaling pathways differentially regulated within the healthy alveolar epithelium compared to Krt5+- and Krt8hi-rich areas of dysplastic repair using Gene Set Enrichment Analysis (GSEA) (Fig. 4c). Several pathways associated with inflammatory responses were highly active within areas of dysplastic repair, whereas growth factor responses and restoration of the vasculature were prominently observed in the healthy alveolar epithelium.
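A simplified stand-in for the pathway comparison above is an over-representation test: given genes upregulated in dysplastic-repair spots, one asks whether a pathway's gene set is enriched among them. The hypergeometric version below is a common lightweight alternative to GSEA, not the exact method used here; all names and inputs are illustrative.

```python
from scipy.stats import hypergeom

def enrichment_p(upregulated, pathway, universe):
    """P(overlap >= observed) under random draws: hypergeometric tail.
    universe: all genes tested; upregulated: genes up in Krt areas."""
    up = set(upregulated) & set(universe)
    pw = set(pathway) & set(universe)
    overlap = len(up & pw)
    return hypergeom.sf(overlap - 1, len(universe), len(pw), len(up))

# illustrative toy input, not data from the study
universe = [f"g{i}" for i in range(1000)]
pathway = universe[:50]
upreg = universe[:20] + universe[500:530]
print(enrichment_p(upreg, pathway, universe))
```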
Upon investigation of specific gene expression signatures, we observed an enrichment of Krt8hi transitional cells (also known as alveolar differentiation intermediates (22), damage-associated transitional progenitors (21), and the pre-alveolar type-1 transitional cell state (20)) (Fig. 4d) and aberrant basaloid cells (42) (Fig. 4e), particularly within areas of dysplastic repair. Evaluation of various immune and epithelial cell marker signatures revealed prominent enrichment of monocyte-derived macrophages in Krt5+- and Krt8hi-rich areas, in addition to other immune cells including CD4+ T cells, interstitial macrophages, B cells, and natural killer cells (Fig. 4f, Extended data Fig. 4f,g). In contrast, gene expression signatures associated with pro-repair tissue-resident alveolar macrophages, AT1, and AT2 cells were primarily observed within the healthy alveolar epithelium (Fig. 4f, Extended data Fig. 4e,h). To further characterize the immune-epithelial progenitor niche within areas of dysplastic repair, we performed a correlation analysis and found that CD8+ T cells and monocyte-derived macrophages were physically clustered around Krt5+ and Krt8hi areas of dysplastic repair, and excluded from areas enriched with alveolar macrophages and mature alveolar epithelial cells (Fig. 4g).

To validate our findings from the spatial transcriptomics data, we immunostained aged influenza-infected mouse lungs and identified a similar enrichment of CX3CR1+ monocyte-derived macrophages within Krt8hi areas of dysplastic repair (Fig. 4h,i). Interestingly, CD8+ T cell depletion resulted in a decrease in CX3CR1+ macrophage numbers, suggesting a potential role for CD8+ T cells in their recruitment and maintenance in the lungs (Fig. 4j,k, Extended data Fig. 4g). Similar to our mouse model of post-viral sequelae, human PASC-PF lungs exhibited an increase in macrophage populations compared to controls (Fig. 4l,m), which were also strongly enriched within areas of dysplastic repair (Fig. 4n,o). Together, our data revealed a conserved finding in both mouse and human post-viral lungs, where CD8+ T cells are present in fibrotic regions and in proximity to fibroproliferative mediators such as monocyte-derived macrophages and Krt5+ and Krt8hi dysplastic epithelial progenitors, representing a pathological niche after acute respiratory viral infections.

A CD8+ T cell-macrophage axis induces IL-1β release to arrest AT2 trans-differentiation in the transitional cell state.

We postulated that the interactions between CD8+ T cells, macrophages, and epithelial progenitors within this pathological niche generate molecular cues that create a profibrotic microenvironment. We observed an enrichment of IL-1R signaling, as well as inflammasome components, in Krt5+- and Krt8hi-rich areas of dysplastic repair compared to the healthy alveolar epithelium in aged influenza-infected mouse lungs at 60 dpi (Fig. 5a, Extended data Fig. 5a). IL-1β has been shown to promote the expansion of transitional Krt8+ cells upon bleomycin injury (21,50). Therefore, we investigated whether IL-1β mediated the development of post-viral pulmonary fibrosis. First, we found that CD64+ macrophages were major producers of pro-IL-1β compared to CD64- cells in influenza-infected aged lungs (Fig. 5b and Extended data Fig. 5b).
Since mature IL-1β release from cells requires caspase-1-mediated pro-IL-1β cleavage (51), we examined caspase-1 activity in lung macrophages after CD8+ T cell depletion. Strikingly, CD8+ T cell depletion significantly reduced caspase-1 activity in lung macrophages (Fig. 5c). Moreover, inflammasome gene signatures were attenuated with αCD8 treatment (Extended data Fig. 5c), supporting the notion that CD8+ T cells persisting in the lungs post infection promote chronic inflammasome activation and IL-1β release by macrophages. CD8+ effector and memory T cells are known to express high levels of IFN-γ and TNF (52). Indeed, infected aged lungs harbor high levels of TNF- (Fig. 5d) and IFN-γ-expressing (Fig. 5e) CD8+ T cells. To determine whether IFN-γ and TNF mediate the observed inflammasome activation, we treated aged influenza-infected mice with neutralizing antibodies against both IFN-γ and TNF starting 21 dpi and observed a decrease in caspase-1 activity (Fig. 5f). As this effect was similar to that of αCD8 treatment (Fig. 5c, Extended data Fig. 5c), we directly tested whether CD8+ T cell-derived IFN-γ and TNF regulated macrophage inflammasome activation. We used an in vitro coculture of macrophages and CD8+ T cells isolated from mice previously infected with influenza (42 dpi) to assess IL-1β release (Fig. 5g). Indeed, we observed that CD8+ T cells augmented macrophage Il1b mRNA expression (Extended data Fig. 5d), caspase-1 activity (Fig. 5h, Extended data Fig. 5e), and IL-1β release (Fig. 5i). The synergistic activity between macrophages and CD8+ T cells to produce IL-1β in vitro was not observed when the cells were isolated from naïve mice, suggesting that prior infection is necessary to prime the cells (Extended data Fig. 5f). Moreover, IL-1β released into the supernatant was significantly reduced upon treatment with IFN-γ- and TNF-neutralizing Ab in the coculture system, confirming the role of IFN-γ and TNF in promoting IL-1β release by macrophages (Fig. 5j). Consistently, we observed a similar decrease in BAL fluid IL-1β levels following treatment with either αCD8 or αIFN-γ + αTNF (Fig. 5k,l).

Since CD8+ T cells, monocyte-derived macrophages, and Krt8hi progenitors accumulate within dysplastic areas after influenza infection, we tested whether macrophage-derived IL-1β is a negative regulator of AT2-to-AT1 trans-differentiation. Using a 2D primary AT2 cell culture model known to induce spontaneous differentiation into AT1 cells through the transitional cell stage, we examined whether conditioned media from cocultured CD8+ T cells and macrophages influenced AT2 trans-differentiation (Fig. 5m) (18,37). AT1 cell marker expression was reduced upon exposure to conditioned media from the CD8+ T cell-macrophage coculture compared to macrophages alone, and was rescued upon treatment with αIL-1β (Fig. 5n). In contrast, transitional cell marker expression exhibited the opposite pattern, with increased levels in the coculture group and a dramatic reduction following αIL-1β treatment (Fig. 5o). Conversely, rIL-1β treatment inhibited AT1 marker expression and promoted the expression of transitional cell markers, akin to previous reports (21). Collectively, our results suggest that exuberant CD8+ T cell-macrophage interactions promote chronic IL-1β release to inhibit AT2 cell trans-differentiation by arresting the cells in the transitional state.

Therapeutic neutralization of IFN-γ and TNF, or IL-1β activity, enhances alveolar regeneration and restores lung function.
Our data thus far indicate a pathological role for CD8+ T cells persisting in human PASC-PF, or in virus-induced lung sequelae in an animal model, in the development of chronic pulmonary sequelae. Since depletion of CD8+ T cells is not a clinically feasible treatment strategy, we explored the therapeutic efficacy of neutralizing the effector cytokines of profibrotic CD8+ T cells. As expected, blocking IFN-γ and TNF activity in aged influenza-infected mice ameliorated fibrotic sequelae compared to isotype controls (Fig. 6a,b). Moreover, IFN-γ and TNF neutralization attenuated Krt5+ and Krt8hi areas of dysplastic repair (Fig. 6c,d) and enhanced alveolar regeneration, as evidenced by increased numbers of AT1 and AT2 cells (Fig. 6c,e). The observed cellular changes were also reflected in physiological benefit, with improved lung function following treatment (Fig. 6f).

Next, we tested the efficacy of neutralizing IL-1β in influenza-infected aged mice and observed dramatic attenuation of lung fibrosis, phenocopying the results of CD8+ T cell depletion and of IFN-γ and TNF neutralization (Fig. 6g,h, Extended data Fig. 6a,b). Improved alveolar regeneration was also observed, as evidenced by reduced Krt5+ and Krt8hi areas and increased AT1 and AT2 cells (Fig. 6i-l). Further confirming the therapeutic efficacy of IL-1β blockade post infection, we found improved lung function in Ab-treated mice (Fig. 6m). Notably, improved outcomes following IL-1β neutralization were seen only in aged mice and not in young mice, likely owing to the elevation of IL-1β observed exclusively in aged mice (Extended data Fig. 6c,d). Thus, these data suggest that neutralization of IFN-γ and TNF, or of IL-1β, in the post-acute stage of viral infection may serve as a viable therapeutic option to augment alveolar regeneration and dampen the fibrotic sequelae observed following respiratory viral infections (Extended data Fig. 7). Consistent with the observed improvement in outcomes and a recent study identifying increased BAL IL-1β levels in respiratory PASC (7), we found that circulating IL-1β levels were elevated in individuals exhibiting persistently abnormal pulmonary function compared to those who had fully recovered (Fig. 6n), suggesting that chronic IL-1β activity may impede the restoration of normal lung function after acute SARS-CoV-2 infection.

DISCUSSION

Recent efforts have revealed several histopathological features conserved across various cohorts of respiratory PASC and PASC-PF patients, including a prolonged reduction in alveolar epithelial cells (39), maintenance of Krt8hi and Krt5+ dysplastic progenitors (5,8), and the persistence of various immune cell populations in the lungs (9,10,53). Although immune-derived cues have recently been shown to influence lung repair (11,21,48,50,54), their exact interactions, if any, with the alveolar epithelium and their role in post-viral fibrosis remain unexplored.

Here, we link these independent observations in PASC-PF lungs and an animal model of post-viral fibrosis to describe spatially defined microenvironments, composed of a dysregulated immune-epithelial progenitor niche, that underlie dysplastic lung repair and tissue fibrosis after acute COVID-19. Intriguingly, these niches were observed specifically in PASC-PF lungs but not in acute COVID-19 or IPF lungs, indicating that these interactions are a unique feature of post-viral fibrosis.
Understanding the mechanistic basis of respiratory PASC requires suitable animal models; however, SARS-CoV-2 MA-10 infection of neither C57BL/6 nor BALB/c mice resulted in severe alveolar pathology and fibrosis beyond acute disease. Although previous studies indicate that aged BALB/c mice develop significant pulmonary inflammation and prolonged pathology after SARS-CoV-2 infection (55), our data suggest these murine SARS-CoV-2 infection models may not fully recapitulate the pathophysiology leading to PASC-PF in humans. A major deficiency of the SARS-CoV-2 mouse infection model is the absence of persistent Krt8hi and Krt5+ areas, a hallmark of human PASC-PF. Moreover, CD8+ T cells, which are enriched in human PASC-PF lungs and potentially promote the maintenance of Krt8hi and Krt5+ dysplastic areas, are not appreciably increased after SARS-CoV-2 infection in BALB/c mice (9), which may reflect a genetically programmed bias towards TH2 responses (56). Thus, it is imperative to develop clinically relevant animal models of SARS-CoV-2 post-viral fibrosis, validated by comparative analyses with human PASC, to uncover the underlying mechanisms and identify therapeutic targets. Nevertheless, in this study we show that influenza infection in aged C57BL/6 mice induces chronic pulmonary sequelae that faithfully recapitulate the immunopathological features of human PASC-PF lungs.

Excessive infiltration and accumulation of profibrotic monocyte-derived macrophages has been reported in the context of severe acute COVID-19, IPF, and PASC (12,53,57,58). Here, we elucidate a previously unknown role for pulmonary CD8+ T cells in impaired recovery and fibrotic remodeling in PASC-PF, but not in acute COVID-19 or IPF lungs. This distinction is likely the product of a dysregulated and protracted antiviral response originally aimed at the clearance of the virus and virus-infected cells. Following viral infection-mediated alveolar injury, lung-resident CD8+ T cells are recruited and maintained at sites of severe damage in order to protect these vulnerable sites in case of reinfection, previously termed repair-associated memory depots (59). These pulmonary CD8+ T cells typically contract gradually with successful alveolar regeneration in individuals who recover from acute COVID-19 (9). However, long-term persistence of CD8+ T cells in human PASC and in aged influenza-infected mice impairs lung recovery post infection and drives the development of fibrotic disease. Given the overlap in fibrogenic pathways, it has been proposed that PASC-PF may represent an intermediate state prior to potential progression towards IPF (8,60). Whether CD8+ T cells primarily dictate the balance between functional recovery and PASC-PF post infection, or also play a pivotal role in the development of IPF, is an open question (61). It is also currently unclear whether the prolonged maintenance and activity of CD8+ T cells in the lungs is a result of excessive TGF-β signaling reported to occur in PASC (8), chronic persistence of viral remnants (62,63), or other independent mechanisms. By combining imaging and spatial transcriptomics modalities, we show that respiratory sequelae after viral infections are, at least in part, a result of chronic IL-1β signaling downstream of aberrant interactions between CD8+ T cells and monocyte-derived macrophages, mediated by IFN-γ and TNF. Although chronic IL-1β was found to impair AT2 trans-differentiation in vitro, the use of aged mice in our experiments posed a logistical challenge to
directly test the effect of IL-1β on Krt8hi and Krt5+ progenitors in vivo using transgenic mice. Thus, there exists a distinct possibility for IFN-γ and TNF, as well as IL-1β and relevant downstream mediators, to influence epithelial progenitor cell fate through their actions on other lung immune and non-immune cells, ultimately resulting in fibrotic remodeling. Nevertheless, we show that neutralization of IFN-γ and TNF, or of IL-1β activity, in the post-acute phase of infection can augment alveolar regeneration and dampen fibrotic sequelae. Given the observed benefits in adults and pediatric patients hospitalized with acute COVID-19, the United States Food and Drug Administration has already granted emergency use authorization to the IL-1 receptor antagonist anakinra and the JAK inhibitor baricitinib in acute COVID-19. Our data strongly suggest that these drugs may also serve as promising candidates to treat ongoing respiratory PASC in the clinic.

Fig. 1 (legend fragment): Representative immunofluorescence images staining CD8+ T cells (CD8α) and epithelial markers (9,40). The exact mechanisms underlying the divergent trajectories in recovery following acute alveolar injury due to SARS-CoV-2 MA-10 and influenza viral infections remain unclear. Nevertheless, extensive evaluation of various mouse viral pneumonia models indicates that influenza infection in aged mice has closer histopathological alignment with the features of chronic pulmonary sequelae observed in PASC-PF lungs and can serve as a clinically relevant model to study the mechanisms of viral infection-mediated lung fibrosis.
Implementation and validation of the extended Hill-type muscle model with robust routing capabilities in LS-DYNA for active human body models

Background In state-of-the-art finite element AHBMs for car crash analysis in the LS-DYNA software, the material named *MAT_MUSCLE (*MAT_156) is used for active muscle modeling. It consists of three elements in a parallel configuration, which has several major drawbacks: a limited approximation of the physical reality, complicated parameterization and the absence of integrated activation dynamics. This study presents the implementation of an extended four-element Hill-type muscle model with serial damping and an eccentric force-velocity relation, including Ca2+-dependent activation dynamics and an internal method for physiological muscle routing.

Results The proposed model was implemented into the general-purpose finite element (FE) simulation software LS-DYNA as a user material for truss elements. This material model is verified and validated with three different sets of mammalian experimental data taken from the literature. It is compared to the *MAT_MUSCLE (*MAT_156) Hill-type muscle model already existing in LS-DYNA, which is currently used in finite element human body models (HBMs). An application example with an arm model extracted from the FE ViVA OpenHBM is given, taking into account physiological muscle paths.

Conclusion The simulation results show better material model accuracy, calculation robustness and improved muscle routing capability compared to *MAT_156. The FORTRAN source code for the user material subroutine dyn21.f and the muscle parameters for all simulations conducted in the study are given at https://zenodo.org/record/826209 under an open source license. This enables a quick application of the proposed material model in LS-DYNA, especially in active human body models (AHBMs) for applications in automotive safety.

Numerical simulations are important not only for saving time and costs during the design phase, but also to model and predict the future product lifecycle. One of the most demanding and regulated domains is vehicle safety and therefore crash simulation. For more than a quarter of a century, complete vehicles have been modelled virtually as finite element models with all significant details, including material and geometrical properties. The same approach was then applied to the human body, and in the last decade several detailed finite element human body models (HBMs) were presented [1-3]. Joint simulations with a combined application of car and human body models allow the prediction of in-crash behaviour and possible injuries for occupants or pedestrians with sufficient accuracy throughout all stages of development, replacing expensive crash tests using car prototypes and Anthropomorphic Test Devices (ATDs). These so-called virtual testing methods will gain importance in the future, as the current trends of active safety systems and autonomous vehicles become available on the market. Active safety systems, in contrast to traditional passive safety systems, react preventively prior to a crash to avoid or mitigate a possible impact. This requires active HBMs (AHBMs) that are able to reproduce human behaviour in normal driving situations, as well as the behaviour in the in-crash phase.
The same requirements exist for the second trend of autonomous vehicles, where generic driving or sitting positions no longer exist, but where occupants can move freely. From these requirements, three challenges for AHBM modelling arise. The first is the implementation of active muscles as mathematical models of the muscle-tendon complex (MTC), including the activation dynamics, which is addressed in this contribution. The second challenge is to model biologically relevant neural controllers to enable accurate forward dynamics (FD) simulations of human reactions and voluntary motion in all kinds of traffic scenarios. The third challenge is the choice of parameters for AHBMs, as only a correct representation of both the passive components and the active components will result in an accurate representation of a living human. Most current HBMs have a passive stiffness which is too high, see e.g. [4,5]. On the one hand, this compensates for the active components that are still missing. On the other hand, this is because the sources for the parameters are almost exclusively post-mortem human subjects, where tissue modulus and no-load strain differ from living tissue depending on the post-mortem time and post-mortem rigor [6].

In state-of-the-art finite element AHBMs [7,8] for car crash analysis, the LS-DYNA material named *MAT_MUSCLE (*MAT_156) is used for modeling active muscles. This material is an advanced version of the previous model *MAT_SPRING_MUSCLE [9] for discrete elements, which is no longer supported. *MAT_156 represents a Hill-type muscle model which consists of three parallel elements: a contractile element (CE), a parallel elastic element (PEE) and a parallel damping element (PDE), see also Fig. 1a. The implementation was done by Dr. J. A. Weiss based on prior studies and reviews of different Hill-type model element configurations [10-12]. The implemented configuration was chosen due to its simplicity, the ease of parameter derivation from experiments and its computational efficiency. However, in the publication [12] it was pointed out that an element configuration with a better approximation of the physical reality should be used in simulations if possible. Such an extended Hill-type muscle model should have a clear separation between muscle fibres and tendon structures. For a correct representation of the MTC dynamics, an additional internal degree of freedom is required to decouple the active muscle fibre and the elastic tendon dynamics. Subsequent studies investigating the role of the serial elastic element have shown that such simplifications and assumptions can lead to instabilities produced by the force-velocity or force-length relation formulations [13], incorrect energy storage and release in the interaction with the environment [14,15], unrealistic high-frequency oscillations [16] and differences in muscle force magnitude [17]. All these effects directly influence the explicit integration scheme used in LS-DYNA, thus impacting the speed, accuracy and robustness of simulations with AHBMs.

Usually, muscles and tendons wrap around bones or joints, both in steady-state conditions and while performing movements; consequently, a physiological muscle path representation (muscle routing) is essential for FD simulations [18,19]. Slight changes in the muscle line of action will lead to inaccurate muscle forces and resulting moments due to incorrect lever arms and muscle lengths.
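To illustrate how sensitive joint moments are to the line of action, the following minimal Python sketch (not from the paper; all coordinates and values are invented for illustration) computes the moment arm of a straight-line muscle as the perpendicular distance from the joint centre to the origin-insertion line, and shows the effect of a small shift of that line:

```python
import numpy as np

def moment_arm(joint, origin, insertion):
    """Perpendicular distance from the joint centre to the
    muscle's straight line of action (origin -> insertion)."""
    d = insertion - origin                  # line-of-action direction
    d_hat = d / np.linalg.norm(d)
    r = joint - origin                      # joint centre relative to origin
    perp = r - np.dot(r, d_hat) * d_hat     # component of r normal to the line
    return np.linalg.norm(perp)

joint = np.array([0.0, 0.0, 0.0])           # elbow joint centre [m]
origin = np.array([-0.25, 0.03, 0.0])       # muscle origin [m]
insertion = np.array([0.04, 0.02, 0.0])     # muscle insertion [m]

F = 300.0                                   # muscle force [N]
r0 = moment_arm(joint, origin, insertion)
# shift the line of action by 5 mm, e.g. due to a missing via-point
r1 = moment_arm(joint, origin + np.array([0.0, 0.005, 0.0]), insertion)

print(f"moment arm: {1e3*r0:.1f} mm -> torque {F*r0:.2f} Nm")
print(f"shifted   : {1e3*r1:.1f} mm -> torque {F*r1:.2f} Nm")
```

Even a millimetre-scale shift changes the lever arm and hence the joint torque, which is one reason the via-point routing discussed below matters for forward dynamics simulations.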
To model physiological muscle paths in finite element HBMs, different muscle routing methods can be used. Fixed lever arms or the via-point method [20,21] are the simplest options, and the usage of contact detection [22] would be the most sophisticated method. According to [23], a via-point method should be preferred for *MAT_156, using the *ELEMENT_BEAM_PULLEY keyword in LS-DYNA. However, it is unclear if this method is applicable, as there exists so far no successful implementation of this method in AHBMs, to the authors' knowledge.

Additional disadvantages result from the way parameters are set for *MAT_156 in LS-DYNA. For a number of parameters, predefined curves are required, e.g. muscle activation level vs. time or stress vs. the stretch ratio. These curves need to be defined beforehand or might be calculated during the runtime through the *DEFINE_CURVE_FUNCTION keyword and PIDCTL [24] options. This approach is limited, cumbersome and error-prone. Instead, muscle parameters and constants found in the anatomico-physiological literature should be used directly. Also, some disadvantages exist for the muscle activation dynamics. Predefined muscle activation level vs. time curves cannot represent the activation dynamics correctly, and to model the dynamics more precisely the activation level has to depend on the muscle length. The complete activation dynamics can be included efficiently in the material model for the muscle itself.

This study addresses the problems mentioned above and presents an implementation of the extended Hill-type muscle model with serial damping and an eccentric force-velocity relation, proposed by Haeufle et al. [25], into the LS-DYNA code. The muscle model is additionally extended by activation dynamics and a method for physiological muscle routing. The complete description of the model, its implementation, verification and validation are given in the next sections.

Methods

In this section the complete model, its implementation in LS-DYNA, and the verification and validation set-up are described.

Muscle model

One of today's most popular and widely used macroscopic muscle models was proposed by Hill in 1938 [26] on the basis of experiments with frog muscles. Its most important feature is a direct relation between muscle force and contraction velocity. The model is also referred to as a first-order muscle model, which means that the muscle elements have neither mass nor inertia, and only an axial force is applied to the skeletal model between the origin and insertion point of the muscle. Over the past years, some disadvantages of the original Hill model were found, and there have been many publications with the aim of further developing and improving this model. In the publication of Haeufle et al. [25], a modified Hill-type muscle model was proposed with improved serial damping and an eccentric force-velocity relation. This model consists of four simple mechanical elements: an active contractile element (CE), which is controlled by the activation level q; parallel (PEE) and serial (SEE) nonlinear spring elements; and a serial damping element (SDE). The model's structure was based on a previous study by Günther et al. [16], which determined that the model with a force-dependent SDE provides the best results in a comparison with constant parallel, constant serial, and force-dependent parallel damping elements. The structure of the MTC of the Haeufle model is shown in Fig. 1b,
with two clearly separated parts modeling the active muscle fibres (CE + PEE) and the passive tendon and aponeurosis structures (SEE + SDE). The main equations of the muscle model are presented in the following. They were taken from [16,19,25,27], where more detailed explanations can be found if needed. Furthermore, a comprehensive study on the influence of the individual parts and their model formulation is given in [28]. As shown in Fig. 1b, the muscle model features an internal degree of freedom which is described by l_CE. The lengths of the passive elements are equal to

$$l_{PEE} = l_{CE}, \qquad l_{SEE} = l_{SDE}.$$

Then the total MTC length is

$$l_{MTC} = l_{CE} + l_{SEE}.$$

The force equilibrium at point P between the muscle fibre and the tendon part is described in [16] as:

$$F_{MTC} = F_{CE} + F_{PEE} = F_{SEE} + F_{SDE}. \qquad (4)$$

Contractile element (CE)

The contractile element represents the active fibre bundles of the MTC. The force of the contractile element F_CE is therefore dependent on the muscle activity q, the contraction velocity $\dot{l}_{CE}$ as well as the length-dependent isometric force F_isom(l_CE). It is expressed by the equation

$$F_{CE} = F_{max}\left(\frac{q\,F_{isom}(l_{CE}) + A_{rel}}{1 - \dot{l}_{CE}/(B_{rel}\,l_{CE,opt})} - A_{rel}\right). \qquad (5)$$

The factors A_rel and B_rel are so-called normalized Hill parameters, where A_rel is normalized with the maximum isometric force F_max and B_rel with the optimal fibre length l_CE,opt [16, p. 64]. The subscript 'rel', for relative, indicates the normalization. The optimal muscle fibre length, at which the isometric force reaches its maximum value, is l_CE,opt. The isometric force F_isom depends on the length of the contractile element and is calculated as follows:

$$F_{isom}(l_{CE}) = \exp\left(-\left|\frac{l_{CE}/l_{CE,opt} - 1}{W_{limb}}\right|^{\nu_{CE,limb}}\right).$$

This equation represents the bell-shaped force-length relationship of the CE element. The width of the normalized bell curve W_limb and the exponent ν_CE,limb may be chosen differently for the ascending and descending limb of the force-length curve. When calculating the Hill parameters, it is distinguished between an eccentric case, $\dot{l}_{CE} > 0$ (lengthening fibres), and a concentric case, $\dot{l}_{CE} \le 0$ (shortening fibres). Please note that in physiological muscle experiments, where the shortening work of muscle fibres is examined, the sign convention for the contraction velocity is usually the opposite (shortening fibres have a positive velocity) to ensure that the work of shortening muscles is positive. In the concentric case, the Hill parameters A_rel and B_rel are composed of length- and activation-dependent auxiliary factors; the explicit expressions for these factors are given in [16, p. 64]. The equations for the eccentric case can be derived from Eq. (5) as mentioned in [25], yielding the eccentric Hill parameters A_rel,e and B_rel,e used in the contraction dynamics below.

Parallel elastic element (PEE)

The PEE represents the passive properties of the muscle fibre and the collagenous connective tissue surrounding the muscle belly. As soon as the length of the contractile element exceeds the resting length of the parallel elastic element l_PEE,0, it also contributes to the force developed by the MTC. Mathematically this is expressed as:

$$F_{PEE}(l_{CE}) = \begin{cases} 0, & l_{CE} < l_{PEE,0} \\ K_{PEE}\,(l_{CE} - l_{PEE,0})^{\nu_{PEE}}, & l_{CE} \ge l_{PEE,0}. \end{cases}$$

The spring stiffness K_PEE is influenced by the optimal fibre length, the width of the bell curve and the maximum isometric force; the explicit expression is given in [16]. The resting length is defined as l_PEE,0 = L_PEE,0 · l_CE,opt, hence L_PEE,0 is the resting length normalized by l_CE,opt, and W_desc is the width of F_isom(l_CE) on the descending limb.

Serial elastic element (SEE)

Since structures similar to muscle tissue are also present in the tendon, their elastic properties are similar. The serial elastic element has a nonlinear or linear spring behaviour depending on the deflection l_SEE.
When l_SEE < l_SEE,0 the tendon is relaxed and does not generate any force. In the range l_SEE,0 < l_SEE < l_SEE,nll it has a nonlinear characteristic, and a linear characteristic for l_SEE ≥ l_SEE,nll:

$$F_{SEE}(l_{SEE}) = \begin{cases} 0, & l_{SEE} < l_{SEE,0} \\ K_{SEE,nl}\,(l_{SEE} - l_{SEE,0})^{\nu_{SEE}}, & l_{SEE,0} \le l_{SEE} < l_{SEE,nll} \\ K_{SEE,nl}\,(l_{SEE,nll} - l_{SEE,0})^{\nu_{SEE}} + K_{SEE,l}\,(l_{SEE} - l_{SEE,nll}), & l_{SEE} \ge l_{SEE,nll}. \end{cases}$$

The length l_SEE,nll of the SEE at the transition from the nonlinear to the linear characteristic, the exponent ν_SEE, and the nonlinear and linear stiffness factors K_SEE,nl and K_SEE,l are derived from a small set of independent parameters; the defining formulas and a complete description of these independent parameters can be found in [16, Fig. 4, p. 69].

Serial damping element (SDE)

The force-dependent serial damping element reduces unphysiological high-frequency oscillations in the tendon part of the muscle model. As a side effect, this also increases numerical efficiency [16]. The force-dependent damping coefficient of the serial damping element is calculated as

$$d_{SDE} = d_{SDE,max}\left((1 - R_{SDE})\,\frac{F_{CE} + F_{PEE}}{F_{max}} + R_{SDE}\right),$$

with the maximum damping value d_SDE,max defined through the dimensionless scaling factor D_SDE, and the minimum damping value R_SDE; the explicit expression for d_SDE,max is given in [16,25].

Contraction dynamics

Inserting Eq. (5) into Eq. (4) for the force equilibrium yields a quadratic equation of the form

$$C_2\,\dot{l}_{CE}^2 + C_1\,\dot{l}_{CE} + C_0 = 0.$$

This equation must be solved for the contraction velocity $\dot{l}_{CE}$ at each time step. Subsequent integration gives the solution for the internal muscle model degree of freedom, the length of the contractile element l_CE. Since the coefficients C_1 and C_0 are always less than zero for our configuration, the physically meaningful solution for the contraction dynamics is given as

$$\dot{l}_{CE} = \frac{-C_1 - \sqrt{C_1^2 - 4\,C_2\,C_0}}{2\,C_2},$$

where in the eccentric case the coefficients must be computed with the eccentric Hill parameters A_rel,e and B_rel,e. The explicit expressions for C_0, C_1 and C_2, together with an additional auxiliary coefficient, follow from Eqs. (4) and (5) and are given in [25].

Activation dynamics

In the application of muscle models, not only the muscle dynamics itself but also the muscle activation dynamics needs to be considered. Activation dynamics is the link between the stimulation input from the nervous system and the activity level of a muscle. For the proposed muscle model, two different muscle activation strategies are implemented: one depending only on the neural activation level (STIM), by Zajac [11], and another which takes into account the length-dependent sensitivity of the Ca2+ level change, by Hatze [29]. These two activation dynamics are outlined below.

The first implemented activation dynamics by Zajac [11] was extended by [16] by adding a minimum muscle activity level q_0 to represent the fact that in reality a muscle is never physiologically completely inactive (q ≠ 0). With the normalized activity $\tilde{q} = (q - q_0)/(1 - q_0)$, the differential equation for the activation dynamics is noted as

$$\dot{\tilde{q}} = \frac{1}{\tau_{act}}\left[STIM - \tilde{q}\left(\beta_q + (1 - \beta_q)\,STIM\right)\right].$$

In this equation, STIM is the input. It is the neural stimulation that emanates from the nervous system and varies from 0 to 1. The output is q, the CE element activation level, with a possible range of q_0 ≤ q ≤ 1. It represents the concentration of free Ca2+ ions in the muscle. τ_act is the activity time constant and β_q is the ratio between the time constants of activation and deactivation. Thus, for β_q > 1 the deactivation time constant is smaller than that of the activation.

The second activation dynamics implemented is a two-step approach introduced by Hatze [29]. In this approach, the activity level q depends on both the length of the contractile element l_CE and the free Ca2+ ion concentration. The activity level is calculated from the normalized free Ca2+ ion concentration γ_rel and a length-dependent factor ρ(l_CE); the explicit expression is given in [29]. The Ca2+ ion concentration is accounted for in the differential equation

$$\dot{\gamma}_{rel} = m\,(STIM - \gamma_{rel}),$$

and the relative CE length is included in ρ. Here m, c and η are constants and l_CE,rel is the ratio between the contractile element length l_CE and the optimal fibre length l_CE,opt.
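To show how these pieces interact numerically, the following Python sketch (illustrative only, not the released FORTRAN code; the coefficient assembly is left as a stub and the forward-Euler scheme, parameter defaults and function names are assumptions) performs one explicit time step: it integrates the Zajac activation ODE and then evaluates the closed-form root of the contraction-dynamics quadratic:

```python
import math

def zajac_step(q, stim, dt, tau_act=0.01, beta_q=1.0, q0=0.005):
    """Forward-Euler step of the (extended) Zajac activation ODE."""
    a = (q - q0) / (1.0 - q0)                       # normalized activity
    a_dot = (stim - a * (beta_q + (1.0 - beta_q) * stim)) / tau_act
    a = min(max(a + dt * a_dot, 0.0), 1.0)          # keep a in [0, 1]
    return q0 + (1.0 - q0) * a

def contraction_velocity(C2, C1, C0):
    """Physically meaningful root of C2*v^2 + C1*v + C0 = 0."""
    disc = C1 * C1 - 4.0 * C2 * C0
    return (-C1 - math.sqrt(disc)) / (2.0 * C2)

def mtc_step(l_ce, q, stim, l_mtc, dt, coeffs):
    """One explicit update of the internal degree of freedom l_CE.

    `coeffs(l_ce, q, l_mtc)` must return (C2, C1, C0) assembled from
    the CE, PEE, SEE and SDE force laws; see [25] for the formulas.
    A positive root means lengthening fibres, in which case the
    coefficients would have to be recomputed with the eccentric Hill
    parameters (omitted in this stub for brevity).
    """
    q = zajac_step(q, stim, dt)
    C2, C1, C0 = coeffs(l_ce, q, l_mtc)
    l_ce = l_ce + dt * contraction_velocity(C2, C1, C0)
    return l_ce, q
```

In the actual LS-DYNA user material, the equivalent update happens inside the umat routines, with the element strain increment providing the current MTC length and the returned axial stress computed from the resulting MTC force.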
In this way, the length-dependent Ca2+ ion sensitivity is taken into account, namely the relation that the longer the contractile element, the higher the Ca2+ sensitivity. In other words, stretched muscles produce a larger force at the same stimulation level compared to an already contracted muscle [30]. In addition, the Ca2+ sensitivity contributes to the low-frequency stiffness of the muscle, which is defined as the change in the equilibrium muscle force relative to a change in the equilibrium length at constant stimulation [31].

Muscle length offset and muscle routing

To enable a physiological muscle path representation for the extended Hill-type muscle model, several routing methods could be considered. In advanced modelling frameworks, the muscle path is usually redirected either by specific points, so-called via-points, or by the surfaces of geometrical objects (e.g. in OpenSim [32]). See [33] additionally for an in-depth review and comparison of routing methods in biomechanical models. It was decided to use the via-point approach as described in [20], because it is possible to implement this method with the standard routing elements available for seatbelts in LS-DYNA. This method has proven to be reliable, as it is used in almost every crash simulation involving occupant models. To implement it, it is necessary to divide the MTC into a muscle element and seatbelt elements, as only the latter can be routed. Therefore, an offset in length l_offset is introduced, defined as the difference between the actual length l_beam,mus of the muscle beam element in the model and the length of the entire MTC l_MTC:

$$l_{MTC} = l_{beam,mus} + l_{offset} = l_{beam,mus} + l_{offset,1} + l_{offset,2}. \qquad (26)$$

If necessary, an offset can be added on both ends of the muscle beam element to allow for two or more via-points, see Fig. 2. Standard seatbelt elements can be attached to the ends of the muscle beam element, and all the standard routing methods of LS-DYNA, e.g. sliprings, can be used. The seatbelt elements can move through a slipring node freely, while at the same time the muscle model internally works with the correct length and dynamics of the entire MTC. To preserve the muscle dynamics, it is required that the stiffness of the seatbelt elements is orders of magnitude higher than the stiffness of the muscle elements. For the example in "Application in the ViVA OpenHBM Arm with routing" a stiffness of 1 × 10^6 N/m was used for the seatbelt elements.

Fig. 2 Comparison of the length relations for the muscle routing approach with the via-point method. a A full beam element with Hill-type muscle material, where the beam element represents the entire MTC (l_MTC = l_beam,mus). b A shortened beam element with Hill-type muscle material extended by seatbelt elements (l_MTC = l_beam,mus + l_offset,1 + l_offset,2), and c the via-point routing method with two via-points using a slipring. For the latter two, the muscle force is still calculated based on the entire MTC length (muscle + seatbelt); however, it acts only in the beam element.
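The length bookkeeping of Eq. (26) is simple enough to state in a few lines of Python (an illustrative helper, not part of the released FORTRAN code; the example values are invented): the routed seatbelt segments contribute constant offsets, so the material law only ever sees the full MTC length.

```python
def mtc_length(l_beam_mus, l_offset_1=0.0, l_offset_2=0.0):
    """Eq. (26): reconstruct the full muscle-tendon-complex length
    from the shortened muscle beam element and its routing offsets."""
    return l_beam_mus + l_offset_1 + l_offset_2

# Example: a 120 mm muscle beam element routed over one via-point,
# with 40 mm of seatbelt elements on the proximal side.
l_mtc = mtc_length(0.120, l_offset_1=0.040)   # -> 0.160 m
```

Because the offsets are fixed at model set-up to match the anatomical MTC length, the muscle parameters can be taken directly from the literature, independent of the routing geometry.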
LS-DYNA implementation

It is possible to include self-written code in the LS-DYNA FE solver through so-called 'User Subroutines'. These subroutines have to be written in FORTRAN and can, among other options, be used to define user materials [24]. The muscle model described in "Muscle model" was implemented in LS-DYNA as a user material for truss elements. This approach allows the use of the slipring routing method of LS-DYNA and simulates the active contraction behaviour as well as the passive spring and damping effects of human muscles. The FORTRAN code is available at https://zenodo.org/record/826209.

The explicit integration scheme in LS-DYNA, shown in Fig. 3, updates the element strain Δε in each timestep based on the nodal displacements u. Material models translate the strain Δε to a stress σ, which yields the nodal forces f_i. These forces result in nodal accelerations ü, which are integrated to nodal velocities u̇ and displacements u for the next timestep. It should be pointed out that material subroutines require an element stress as a return value. In the concept of the muscle model, only forces are calculated. Since truss elements can only carry axial stress, the stress was calculated from the muscle force and the element cross-section area A via

$$\sigma = F_{MTC}/A.$$

If the material card for a user-defined material model is specified in an input deck, LS-DYNA internally calls the routine usrmat, which starts the corresponding element routine, depending on the element type. In the case of beam elements this is urmatb, and for truss elements urmatt. Finally, the actual material routine is called, which the user can program himself. It is possible to have up to ten user materials defined in the subroutines umat41 to umat50. The user can implement arbitrary material models in these routines and, among other things, access the material parameters specified in the material card. In addition, the programmer may call further subroutines, which then return, for example, nodal coordinates or various element properties.

Verification and validation set-up

For the identification of the Hill-type model parameters, a general test procedure requires three experimental set-ups: (a) concentric contraction, (b) isometric contraction and (c) quick release [12]. They are depicted in Fig. 4 and explained in detail below. A complete set of muscle parameters is almost never found in a single source, since it is hard to perform all three tests in a row with the same muscle specimen, so a short literature survey is always required. The validation conducted for the muscle model is based on mammalian muscle experiments. As there is no experimental data available for actual human muscle tissue, the model validation is based on piglet [16], cat [34] and rat [35] muscle experiments. The verification is done in comparison with an already existing implementation of the same muscle model in the Matlab-based multi-body code Neweul-M2 [36]. Additionally, a comparison with the *MAT_156 muscle material from LS-DYNA is shown for the concentric contraction experiment. Data sets from all three experimental set-ups are available for the piglet muscle. For the other two species, a specific set-up for an isometric contraction experiment is applied, which was shown to be sufficient to determine all necessary Hill-type parameters for simulations [37]. An overview of all experimental set-ups is presented in Table 1. Furthermore, all model parameters are given in tabular form for each validation case, including references.

Table 1 Overview of all set-ups used for validation

Set-up                   Piglet   Cat   Rat
Concentric contraction   X
Isometric contraction    X        X     X
Quick release            X

Concentric contraction experiment

In a concentric contraction, the muscle is shortened, which means that the distance between muscle origin and insertion point decreases, e.g. elbow flexion to lift a weight, see Fig. 4a. In the simulation set-up, the muscle carrying a mass is fully stimulated (STIM = 1) and starts to contract.
At first, no external motion is recorded, as only the internal length l_CE is decreasing. Once F_MTC > F_Gravity, the mass is lifted and the contraction velocity is recorded and compared to the experimental results from the piglet muscles.

Isometric contraction experiment

In an isometric contraction, the muscle force is increased while the length of the muscle is kept constant, see Fig. 4b. This contraction mode occurs, for example, when attempting to hold a heavy weight. In the simulation set-up, both nodes of the muscle element are fixed. At the beginning of the test there is no stimulation (STIM = 0), thus the muscle experiences only the minimal activity q_0. Starting from 0.1 s, the muscle is stimulated completely (STIM = 1) and relaxed again completely after 1.1 s (STIM = 0) in the piglet [16] and cat [34] experiments. In the rat experiments, the muscle is only activated for a shorter time period of 300 ms [35]. In the piglet muscle experiments, the isometric contraction was carried out for different fixed muscle lengths between 5.1 and 6.6 cm around the anatomical resting length l_MTC,0 = l_CE,opt + l_SEE,0. In the other experiments, the isometric contraction was only tested at the anatomical resting length. In the results, the force vs. time curves are compared and the differences resulting from the two implemented activation dynamics are analyzed.

Quick release experiment

The quick release is a combination of isometric and concentric contraction. In this set-up, the muscle is fixed at both ends at the beginning; it is then stimulated (STIM = 1) and isometric contraction occurs, Fig. 4c. After 1 s, the lower end of the muscle carrying a mass is released and is pulled up quickly due to the force built up during the isometric contraction. After a total time of 1.5 s, the stimulation is switched off again (STIM = 0). As in the concentric case, the influence of different masses is examined (m = 200, 400, 600, 800, 1000, and 1500 g), for the piglet muscles only [16].

Simulation results

The verification and validation simulation results are shown in the following sections for piglet, rat and cat muscles. Using the piglet data, an additional comparison to the muscle model *MAT_156 already existing in LS-DYNA is shown, and the differences resulting from the two distinct muscle activation dynamics implemented are illustrated. To demonstrate the application of the model in AHBMs, an example illustrating the routing capabilities is given using an elbow model extracted from the ViVA OpenHBM [3].

Piglet calf muscle

For the piglet calf muscles, results for concentric and isometric contraction and quick release experiments are available in [16]. As this is the most complete data set, it was also used for verification, for the comparison with *MAT_156 and for the comparison of the different activation dynamics available in the extended Hill-type muscle model. In Table 2 the parameters used for the piglet simulations are listed. The material card for LS-DYNA is found in Appendix "Material card for piglet simulations".

Concentric contraction

The numerical and experimental results are presented in Fig. 5. All curves are shifted in time so that the mass is pulled up, i.e. F_MTC = F_Gravity occurs at t = 0 s. As shown in the figure, the simulation results from LS-DYNA are very consistent with the experiments and with the simulations using the muscle model in Neweul-M2. A comparison with the simulation data from [16] would give even better results.
The differences between the simulation results from Neweul-M2 and LS-DYNA can presumably be attributed to different computational accuracies and integration methods for the differential equations. These simulations were both run with explicit integrators and a constant time step. Consequently, we can state that the muscle model implementation is correct for the concentric contraction case. In addition, a comparison with the muscle material model *MAT_156, already existing in LS-DYNA, is shown in Fig. 6. The initial contraction velocity provides similar results. Also, the maximum force value for low masses up to about 200 g is well approximated. The *MAT_156 material, however, shows significant weaknesses in the speed decay and in the correct representation of the damping properties. At this point, it should be noted that further optimization of the material parameters might achieve better results. For these simulations, the *MAT_156 parameters were derived from the previous work of [38]. This comparison shows the potential of the newly implemented muscle model and how it can help to deliver more realistic simulation results.

Isometric contraction

The isometric contraction results are shown in Fig. 7. The comparison of the LS-DYNA results with the experimental data shows that the muscle force for the inactive muscle in the time intervals t < 0.1 s and t > 1.1 s is underestimated if the muscle is lengthened considerably relative to the anatomical resting length (h > 1.05). Also, a clear deviation in the force increase for the stretch ratios h = 1.0 and h = 1.03 exists, while the final force at t = 1 s is met. Very similar differences are also present in the simulation results from [16]. According to this source, the proposed muscle model does not represent all internal dependencies of the Hill parameters correctly. Also, potential history effects visible in the experimental curves, namely non-steady force plateaus, are made responsible for the differences. Furthermore, possible deficits in the identification of the parameters for the activation dynamics and the rise of F_CE play a role. The comparison with Neweul-M2 shows high agreement, with only slight deviations in the muscle activation interval for the ratios h = 0.85, 0.88 and 0.91. As was the case in the concentric contraction, the differences in the results are larger for higher dynamics, which can probably be attributed to the different integration methods for solving the differential equations. The most important point illustrated by the isometric contraction in Fig. 7 is the strong dependence of the maximum isometric force on the muscle length. This finding is decisive for the application in AHBMs, since in this example deviations of approximately 15 mm in muscle length lead to differences in muscle force of more than 30 N.

Comparison of activation dynamics for isometric contraction

In the extended Hill-type muscle model, two different approaches to describe the activation dynamics are implemented. In Figs. 8 and 9, these methods by Zajac [11] and Hatze [29] are compared in an isometric contraction. The muscle force results with Hatze and Zajac activation differ mainly during muscle deactivation after t = 1.1 s, see Fig. 8. The forces of muscles with a high stretch ratio decrease significantly later than those of muscles that are shortened. The concentration of free Ca2+ ions γ_rel evolves similarly to the activity level described by the differential equation of Zajac; γ_rel increases slightly more slowly and takes about 0.1 s longer to decay.
The larger influence for the Hatze activation dynamics is the dependence of the activation on the CE length through ρ, see Fig. 9. By stretching the muscle, l_CE is significantly longer for high h-ratios. As a result, ρ increases and the activity rises faster for the stretched muscles. The stretch ratio, and thus ρ, also affects the maximum activity level reached in the simulations. It can be seen in Fig. 9 that the maximum activity of the muscle with h = 0.85 is only about 82%. For Zajac's activation dynamics, only one curve is found in Fig. 9; this is because Zajac's activation dynamics is length-independent and therefore all activation curves are identical. As the formulation of the activation dynamics by Hatze is the more biofidelic and superior option [37], the comparison for the rat and cat experiments is done only with this dynamics.

Quick release

The quick release experiments are a combination of the two experiments above. Here the force produced by the MTC is analyzed versus the contraction velocity. In Fig. 10, the numerical results from LS-DYNA and Neweul-M2 as well as the experimental data are shown. The isometric muscle force at zero velocity, i.e. before the muscle is released, is about 2 N lower than in the experiments. However, the results clearly approach the respective maximum contraction velocities. In [16] it is stated that this is due to history effects within the tendon in the experiments that are not represented in the muscle model. The best agreement is achieved for a mass of 1000 g, where according to [16] the history effects were absent. The difference to Neweul-M2 is once again negligibly small for the bigger masses and slightly larger for the high velocities, i.e. the smaller masses.

Rat gastrocnemius medialis muscle

The experiments were done by Siebert et al. [35] on the rat (Rattus norvegicus, Wistar) M. gastrocnemius medialis muscle. The parameters for the Hill-type muscle model are listed in [35]. For convenience, they are collected in Table 3, and the LS-DYNA material card is given in Appendix "Material card for rat simulations". As seen in Fig. 11, the simulation results are in good agreement with the experimental results, being a little faster in the muscle deactivation slope.

Cat soleus muscle

Mörl et al. [34] conducted the experiments on the cat soleus muscle. The parameters for the Hill-type muscle model are found in [34], with the activation dynamics parameters once more taken from [37]. They are also collected in Table 4, and a material card for LS-DYNA is provided in Appendix "Material card for cat simulations". The corresponding simulation results, depicted in Fig. 12, are in excellent agreement with the experimental results, this time being a little faster in the muscle activation slope.

Application in the ViVA OpenHBM Arm with routing

The extended Hill-type muscle model is applied in an arm model extracted from the ViVA OpenHBM [3]. The model includes bones, modelled as rigid bodies, and the flexible flesh and skin of the upper extremity. We added the main flexors (biceps long and short heads, brachialis, brachioradialis, pronator teres and extensor carpi radialis) and extensors (triceps long, lateral and medial heads) of the elbow joint, which we idealized as a revolute joint. Here, the via-point routing method is compared to simple direct line and lever approaches. A complete description of the set-up of the elbow model [39] and of the choice of parameters for all muscles at the elbow [32] is beyond the scope of this publication.
The via-point method allows the selection of anatomical origin and insertion nodes for the muscles. As a result, the modelled muscle length is almost identical to the anatomical muscle-tendon length. This enables the usage of anatomical data from the literature for the muscle parameters. The routing is done in LS-DYNA using sliprings fixed to the bones at certain positions in space. The routing parameters, i.e. the offset length of the muscle and the position of the via-point, can be chosen independently of the muscle parameters to match the anatomy. This approach makes it possible to model the muscle-tendon dynamics correctly while at the same time making the lever arm vs. joint angle curve, and thus the resulting elbow torque, more realistic. In Fig. 13, different strategies for modeling the triceps are shown: a direct line of action approach, lever arms of 10 and 20 mm, and the via-point routing method. In contrast to the other methods, the application of the via-point method can deliver correct lever arms for the complete range of motion of the elbow and fits the experimental corridor from [40] best, see Fig. 14. Additionally, the proposed via-point routing method improves the numerical stability of the model, as it provides correct force application directions. Most importantly, the muscle dynamics is independent of the actual length of the combined elements (muscle + seatbelt, Fig. 2) and their path complexity.

In comparison with the LS-DYNA *MAT_156 muscle model, a 10-times speed-up is achieved for the validation simulations with single muscle elements. As this set-up is clearly not very realistic, the ViVA arm simulations were repeated with *MAT_156 muscles to be comparable to the simulations with the extended Hill-type muscles. Here, no speed-up is achieved, since most of the CPU time is used for the processing of the volumetric elements, and the time needed to process truss elements is insignificant in comparison.

Conclusion and outlook

The upcoming challenges in the field of automotive safety, namely active safety systems and autonomous driving, will require and benefit greatly from AHBMs. The Hill-type muscle model already existing in LS-DYNA has limited accuracy because it lacks an internal degree of freedom and in addition is difficult to parameterize. Here, an extended Hill-type muscle model was implemented, verified and validated successfully. The source code, parameters and an example set-up for LS-DYNA are provided at https://zenodo.org/record/826209. The verification and validation were done in comparison with experimental data sets from piglet, cat and rat muscles. The results are in very good agreement with the experiments, and the new muscle model considerably improves the accuracy available for AHBMs in LS-DYNA. Moreover, the muscle model incorporates the activation dynamics, essential for correct simulations of dynamic active movements on small time horizons. Additionally, a convenient option for routing the muscle around joints was proposed. By introducing an offset to the length of the muscle element, it is possible to route the muscle using e.g. the via-point method, while at the same time the muscle will display the correct dynamics of the full MTC. This also means that the parameters for the muscle model can be set independently of the routing. Although the current model allows the prediction of the gross dynamic contraction characteristics of biological muscles, it has its limitations.
For one, it is a force element predicting a scalar force value which is then applied between origin and insertion, or redirected by via-points. Contact forces and the resulting shifts in the force direction, or their influence on the active muscle force [35], are neglected. Besides that, several physiological effects of muscle contraction are currently not considered, starting with muscle-morphology-specific parameters such as the pennation angle [41] or the fibre composition [42]. Also on the dynamic level, e.g., modelling the force-velocity relation for eccentric (lengthening) contractions is difficult, as little data is available. Some of the data suggest more complex relations than modelled here for extensive strains [43], which, however, are not reached in our simulations. Furthermore, the experimentally found history effects causing force enhancement and force depression after stretch and shortening are currently not considered, but may be included in more extended approaches [44,45]. Finally, the muscle model considers no mass or mass distribution, which, however, plays a role in dynamic contractions [46].

To utilize the full potential of AHBMs, a control strategy for the activation of the muscles is needed. As a controller realization is not in the scope of this work, the authors recommend the review by [47] as a reliable source of information on muscle activation schemes and strategies in AHBMs. In principle, controllers are required which either maintain a desired position against perturbations or allow for the generation of a desired movement. Such controllers can be implemented in the current framework and may easily be added to the code provided in the Appendix. With this, we provide a comprehensive and valid approach to implement an extended Hill-type muscle model in LS-DYNA, including muscle-tendon properties, biochemical activation dynamics, and muscle routing. By providing the code and the material cards, we hope that this will allow other researchers to work on more biofidelic AHBMs.

In the following sections, the material cards for the simulations of the piglet, cat and rat muscles are given in the SI unit system. The simulations were set up with single muscle elements with a length of l_MTC = l_CE,opt + l_SEE,0 where applicable. The parameters are taken from [14,16,25,29,34,35,37]. LS-DYNA uses the density, bulk modulus and shear modulus to determine the time step for the simulation. The density was set to 1 × 10^-6 kg m^-3, and the bulk modulus and shear modulus were adjusted to achieve a time step smaller than 1 × 10^-4 s.
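The time-step condition mentioned above follows from the stability criterion of explicit integration. The following Python sketch (illustrative only; it uses the one-dimensional wave-speed estimate for a truss element rather than LS-DYNA's exact internal formula, and the example values are invented) shows how density and stiffness jointly set the critical time step:

```python
import math

def critical_timestep(length, youngs_modulus, density):
    """CFL-type estimate for a truss element: dt_crit = L / c,
    with the 1D wave speed c = sqrt(E / rho)."""
    c = math.sqrt(youngs_modulus / density)
    return length / c

# A 0.15 m element with an artificial stiffness/density pair chosen
# so that the critical time step stays below the 1e-4 s target.
dt = critical_timestep(length=0.15, youngs_modulus=1.0e3, density=1.0e-6)
print(f"critical time step: {dt:.2e} s")   # ~4.7e-6 s
```

Because the muscle element carries no physical inertia in this model, the density and moduli act purely as numerical tuning knobs for the explicit scheme, as described above.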
Ranitidine: recent regulatory issues

Ranitidine is a histamine-2 receptor blocker which became commercial in 1981 as an antacid, marketed by GlaxoSmithKline Pharmaceuticals under the brand name Zantac in various formulations. The drug accelerated in the market to become one of the most commonly used drugs for peptic ulcer disease and acid reflux, and sooner rather than later it became available as an over-the-counter drug in 1996 for adults and children. Ranitidine's mechanism of action involves a competitive block of the histamine-2 receptor, leading to decreased cAMP formation, which reduces acid secretion from the parietal cells of the stomach, thereby healing the peptic ulcer. The drug came into the limelight when, in routine testing by Valisure Pharmacy, N-nitrosodimethylamine (NDMA) was discovered in ranitidine syrup, for which Valisure first notified the US FDA in June 2019. On September 13th, 2019, Valisure filed a detailed petition with the Food and Drug Administration asking the agency to recall all products containing ranitidine. 1 The ranitidine recall happened due to the presence of unacceptable levels of NDMA, i.e. >96 nanograms (0.32 ppm). The calculated acceptable intake for NDMA in drugs is based on methods described in the 2018 International Council for Harmonisation Guidance M7(R1), Assessment and Control of DNA Reactive (Mutagenic) Impurities in Pharmaceuticals to Limit Potential Carcinogenic Risk. 2 NDMA is one of the simplest members of the dialkylnitrosamines, with the chemical formula (CH3)2NNO. NDMA is known to be a by-product of the production of alkylamines, pesticides, dyes and rubber tyres. 3 NDMA is categorized as a probable carcinogen (group 2A) by the International Agency for Research on Cancer (IARC), implying that NDMA should be considered carcinogenic in humans for all practical purposes. 4 Today the only use of NDMA is to induce cancer in animals as part of laboratory experiments. The mechanism of cancer causation can be attributed to oxidative demethylation of NDMA and the formation of reactive intermediates. Alkylation of guanine and cytosine to 7-methylguanine and 3-methylcytosine, respectively, is thought to be the reason for NDMA-induced carcinogenesis. 5 The US FDA has been testing H2 blockers and PPIs, and NDMA has been detected in ranitidine and nizatidine. The FDA's tests of samples of alternatives like Pepcid (famotidine), Nexium (esomeprazole), Prevacid (lansoprazole) and Prilosec (omeprazole) show no NDMA impurities in these medicines. 2 Actions have been taken across the globe, including bans, suspended registrations and recalls of ranitidine products by many pharmaceutical companies, and physicians have been advised not to prescribe ranitidine and to seek alternative medications like famotidine, esomeprazole and omeprazole, in which no NDMA has been detected, as suggested by the US FDA. The US FDA is alerting patients and physicians to voluntary recalls of ranitidine tablets and capsules (150 mg, 300 mg) and syrup (15 mg/ml) by various pharmaceutical companies including Aurobindo Pharma, Amneal, Lannett, Novitium, Dr Reddy's, GlaxoSmithKline Pharmaceuticals and Perrigo. 6 In a previous related development, the FDA learnt about NDMA impurities in the angiotensin II receptor blockers valsartan and losartan in 2018. Elaborate testing revealed that the raw materials used in some batches of these medicines were contaminated, and the respective batches were withdrawn. But the scenario with ranitidine is different and appears to involve more than just contaminants. The presence of NDMA in ranitidine might be attributed to two plausible mechanisms, i.e.
a metabolic by-product in the human body and the molecular structure of ranitidine itself. In-vitro studies show that NDMA can be a metabolic by-product of ranitidine. The FDA developed simulated gastric fluid (SGF) and simulated intestinal fluid (SIF) assays to estimate the significance of these in-vitro findings and found no additional production of NDMA in the stomach. 7 A detailed chemical reaction mechanism for the formation of NDMA from ranitidine under gastric conditions was proposed in a Stanford University paper published in 2016 (Figure 1). 8 In addition to the gastric fluid mechanism, a possible enzymatic reaction via the dimethylarginine dimethylaminohydrolase (DDAH) enzyme, liberating the dimethylamine (DMA) group of ranitidine, can occur. This liberated DMA can form NDMA when combined with the nitrite present on the ranitidine molecule and free nitrite in the body. 9 The DDAH enzyme is present in every cell and degrades asymmetric dimethylarginine (ADMA), the endogenous inhibitor of nitric oxide (NO) synthase. 10 DDAH metabolizes endogenous ADMA, which is also a putative marker of cardiovascular disease. These results suggest that the enzyme DDAH-1 may increase the formation of NDMA in the human body when ranitidine is present. The DDAH-1 gene is expressed in many organs, mainly in the kidneys, small intestine and colon. 11 This offers a general mechanism for NDMA formation in the human body from ranitidine. The structure of the ranitidine molecule may also play an important role in NDMA production: ranitidine contains a nitrite group and a dimethylamine (DMA) group, which can combine to form NDMA (Figure 2). 12 Though the amount of NDMA in ranitidine is, according to the US FDA, almost similar to that found in smoked chicken, grilled food and cured meats, 7 tests are still ongoing to reach a final conclusion on the amount and source of NDMA production from ranitidine. We have tried to compile all the recent information regarding ranitidine and to identify the plausible mechanism of NDMA production from ranitidine, which may be related to its molecular structure. Patients are to be alerted to the known carcinogen in their medicine ranitidine and to the fact that any amount of carcinogen consumption is harmful. The voluntary recalls by pharmaceutical companies have been increasing day by day since September 13th, 2019, and many countries have stopped the sale of ranitidine until the drug regulatory bodies give the go-ahead. A few countries (Canada, Bangladesh and Egypt) have banned ranitidine. The US FDA has now determined that the impurity in some ranitidine products increases over time and when stored at higher than room temperature, and may result in consumer exposure to unacceptable levels of this impurity. Hence, on 01 April 2020, the US FDA requested manufacturers to withdraw prescription and over-the-counter ranitidine drugs from the market for patient safety. 13
Convergence and Performance Analysis of a Particle Swarm Optimization Algorithm for Optical Tuning of Gold Nanohole Arrays

Gold nanohole arrays, hybrid metal/dielectric metasurfaces composed of periodically arranged air holes in a thick gold film, exhibit versatile support for both localized and propagating surface plasmons. Leveraging their capabilities, particularly in surface plasmon resonance-oriented applications, demands precise optical tuning. In this study, a customized particle swarm optimization algorithm, implemented in Ansys Lumerical FDTD, was employed to optically tune gold nanohole arrays treated as bidimensional gratings following the Bragg condition. Both square and triangular array dispositions were considered. Convergence and evolution of the particle swarm optimization algorithm were studied, and a mathematical model was developed to interpret its outcomes.

Introduction

Surface plasmons (SPs) are collective charge density oscillations localized at the interface between two materials having dielectric functions of opposite sign, namely, a metal and a dielectric [1]. Due to their bound nature, they are not only extremely sensitive to refractive index changes at the interface but can also enhance different physical phenomena, such as fluorescence or the Raman effect, largely exploited for biosensing applications [2][3][4][5]. To increase the sensing performance associated with the excitation of SPs, the rational design of plasmonic nanostructures has attracted much interest in recent years [6,7]. It has been shown that the SPs' properties, and thus sensing performance, depend on various factors of the overall system, including both geometrical and morphological features [8]. For this reason, there is significant interest in the optimization of such parameters, allowing the design and tailoring of plasmonic nanomaterials with high performance. Uniform metal films supporting propagating SPs, namely surface plasmon polaritons (SPPs), were the first to be used in this field, conferring good sensing performance but generally requiring bulky optical systems to couple the electromagnetic radiation to SPPs [9]. For this reason, nanoparticle-based systems have largely been replacing them, owing to low fabrication costs and the easy coupling with SPs that are, in this case, localized surface plasmons (LSPRs) [9,10]. In recent years, technological development has provided new and competitive techniques for the nanofabrication of plasmonic nanostructures, such as plasmonic metasurfaces, offering the possibility of manipulating plasmonic features [1,11,12]. Among them, gold nanohole arrays (GNAs) have been widely studied due to the exhibited extraordinary optical transmission [13][14][15][16], but also for their peculiar potential to control light coupling with both SPPs and LSPRs. GNAs are hybrid metal/dielectric metasurfaces consisting of periodically arranged air holes in a thick gold layer. It has been shown that GNAs are good candidates for several biosensing applications, like surface plasmon resonance (SPR) [17][18][19][20] and plasmon-enhanced fluorescence (PEF) [4,[21][22][23][24][25][26][27]. Nevertheless, the GNAs' optical properties, and thus their sensing performance, are influenced by the geometrical surface parameters, namely, the array periodicity, the air hole shape, and the thickness of the gold layer [8].
For 20 years, we have been working on plasmonic-based biosensors with GNAs fabricated by colloidal lithography [18,28,29]. The main advantage of this technique resides in its low cost and easy implementation. Nevertheless, control over the nanofabrication parameters determining the GNA geometrical features is challenging, limiting the tunability of the plasmonic response for the desired application. Since our plasmonic nanostructure has been shown to be very promising for both SPR and PEF detection-based devices, we are currently looking for an alternative nanofabrication procedure based on UV lithography to provide reliability and tunability of the optical response together with large-scale fabrication compliance. Consequently, to achieve the desired response, a precise tailoring of all these features is needed. Therefore, Maxwell's equations have to be rigorously solved in both the time and space domains, which fortunately can be achieved with different types of computational methods. Moreover, it is not straightforward to manage the optimization of electromagnetic features through the evolution of the optical properties. An interesting solution is represented by evolutionary algorithms, such as the genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm [30][31][32][33][34][35], or, more recently, artificial intelligence-based approaches [36][37][38]. The PSO and GA are based on different philosophies: the first relies upon "social" swarm behavior, while the GA relies upon genetic encoding and natural selection [33]. In the literature, the PSO has been widely used to optimize RF antenna array properties and has been applied in photonics for the design and optimization of several dielectric devices, such as grating couplers [39]. Generally, due to the presence of a complex dielectric function, simulations of plasmonic systems are computationally demanding. The intuitive mathematical structure, combined with easy parameter manipulation and a high capacity to control convergence while preventing stagnation issues (common in genetic algorithms), has positioned the PSO as a promising tool for optimizing plasmonic systems in recent years, also in combination with other optimization techniques [33,[40][41][42][43]. For this reason, in this work, the optical tuning of a GNA was performed by a customized PSO algorithm implemented in the FDTD method [39,44]. The structure is made of periodically arranged cylindrical holes drilled in a gold layer deposited on a glass substrate. The resulting geometrical model is defined by the cylinder radius and the array periodicity, considering both square and triangular dispositions. The PSO dynamic evolution was studied both in terms of convergence and in terms of performance, through the tuning and efficiency of the GNA optical response.

Materials and Methods

Figure 1 schematically depicts the evolution of the PSO algorithm. The GNA structure is geometrically defined by two values: the pitch (p), accounting for the array periodicity, and the radius (r), accounting for the cylinder radius of the air hole.
Consequently, a properly limited bidimensional parameter space can be defined, where a vector of generic coordinates (r,p) is called an agent. At this point, the PSO (i) randomly generates a swarm of agents, (ii) builds the corresponding GNA structure inside Ansys Lumerical FDTD [45], and (iii) runs the simulations to extract the optical response of the main localized plasmonic mode sustained by each GNA structure. The final scope of the PSO is to seek the GNA structure that enables the maximum energy storage inside the localized plasmonic mode in the interval (770 ± 25) nm. The tuning wavelength was chosen as 770 nm for consistency with the sensing device used in [18]. For our sensing purposes, we target the main localized plasmonic mode's ability to detect refractive index variations close to the surface, whose fingerprint is the main minimum in reflectance (R). Practically, this is achieved by calculating, through an iterative process, the agents' trajectories (swarm evaluation), identified by stochastic vectors, called velocities (v_r, v_p), one for each agent. A proper fitness function (FF) is defined to scan the parameter space and is used at each iteration step to find the best global solution inside a feedback process. In our case, the FF is defined as

FF = 1 − (R + T),

where R and T represent the collected reflectance and transmittance spectra evaluated from the corresponding FDTD simulation. Consequently, the FF values identify the evolution of the dynamical system across the iteration steps. Eventually, the algorithm convergence is determined by two independent but mutually necessary requirements: (i) the velocities tend to the vector (0,0), meaning that the agents have collapsed onto the same position of the parameter space, and (ii) the FF reaches its highest value. These two conditions are excellent indicators of the PSO's capability of converging to the final better FDTD structure.
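The iterative update just described follows the standard PSO scheme. Below is a minimal, self-contained sketch of such a loop in Python. The fitness function here is a synthetic stand-in (in the actual workflow each evaluation launches an FDTD simulation and computes FF from the R and T spectra), and the inertia/acceleration constants and parameter bounds are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(agents):
    """Synthetic stand-in for the FDTD-derived fitness function. In the
    real workflow each row (r, p) defines a GNA structure, an FDTD run
    returns the R and T spectra, and FF is computed from them."""
    r, p = agents[:, 0], agents[:, 1]
    return -((r - 95.0) ** 2 + (p - 459.0) ** 2)  # peaked at an arbitrary optimum

# Parameter space (radius, pitch) in nm; limits are illustrative only.
bounds = np.array([[30.0, 170.0], [200.0, 700.0]])
n_agents, n_iters = 15, 20            # values reported for the square array
w, c1, c2 = 0.7, 1.5, 1.5             # inertia / acceleration constants (assumed)

pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_agents, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), fitness(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_agents, 1))
    # Velocity update: inertia term plus stochastic pulls toward the
    # personal and global best positions found so far.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
    val = fitness(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]
    # Convergence indicators from the text: the velocities shrink toward
    # (0, 0) while the best FF value plateaus at its maximum.

print("Better (r, p):", gbest)
```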
The FDTD structural model details are reported in Appendix A.1.

PSO Algorithm Convergence

Figure 2 displays the PSO convergence analysis for the particular case of a gold thickness of 100 nm, for both square and triangular arrays. The 3D scatter plot of the velocities against the iteration number is shown together with the scatter plot of the FF in the parameter space. In particular, the PSO convergence was studied considering four different gold thicknesses: 100 nm, 80 nm, 60 nm, and 40 nm. The 100 nm value is the standard in the sensing device used in [18]. On the other hand, the lower thickness of 40 nm was chosen as the closest to the skin depth of the material to guarantee simulation reliability. The inferior and superior limits of (r,p) were properly customized for each gold thickness. In consideration of the convergence as defined above, the number of iterations required to obtain the best balance between all the parameters at play in the evolution was observed to be 20 for the square array and 40 for the triangular array. In both cases, the number of agents was fixed at 15, based on algorithm stability considerations in the literature [19].
For both arrays, the velocity vectors are scattered around the (0,0) point. In the square array case, the scattering amplitude of the velocity vectors is larger and almost constant, while for the triangular one, it is smaller and tends to diminish along the algorithm evolution. The peculiar combination of the computational elements, i.e., FDTD box, source, and metasurface, can result in a symmetry that can be defined on a common basis. In fact, in the case of the square array, these three objects share the same basis, whereas for the triangular array, this set does not form an orthonormal combination shared with the FDTD box and source. This combination markedly influences the triangular array case, where the evolution clearly boosts towards higher FF, stochastically selecting agents step by step closer to the solution. The FF scatter plot in the parameter space is reported in panels (b) and (d) of Figure 2 for the square and triangular arrays, respectively.

Fine-Tuning Procedure

Within the number of iterations, the PSO algorithm, in combination with the FDTD method, provides a (Better r, Better p) pair and the better FF value that the optimization routine can provide. Due to the numerical nature of the process, the optical response of the better FDTD structure shows a mismatch of the tuning wavelength within the tuning interval. To save computational time and compensate for this difference, it was necessary to go beyond the PSO limits. To achieve this, an analytical fine-tuning procedure was developed. Operatively, the p values were swept in steps of 5 nm in both directions, starting from Better p. As an example, the main steps of the fine-tuning procedure for the gold thickness value of 100 nm are detailed in Figure 3. The tuning wavelengths corresponding to the p values of the sweep are reported in panels (a) and (d) of Figure 3 for the square and triangular arrays, respectively. Considering the disposition of the points within the tuning interval, the data were fitted by a third-order polynomial curve with χ2 values close to 1 (0.995 and 0.997 for the square and triangular array, respectively). The inflection point of the cubic curve systematically corresponds to the (Better r, Better p) pair found by the PSO algorithm. Considering the goodness of the χ2 values, the easiest way to shift the fit so as to properly tune the optical response while preserving the FF value is to apply a second-derivative analysis, as visible in panels (b) and (e) of Figure 3, keeping the ratio between Better r and Better p constant under a linear approximation for small variations. Eventually, a retuned FDTD structure is generated, and the consistency of the optical response is verified. Panels (c) and (f) show the R spectra for the retuned FDTD structure, where (Better r, Better p) is now identified as (Best r, Best p). With the developed procedure, it is possible to rigorously tune the R minimum of the FDTD structure resulting from the optimization routine to the desired wavelength. The (Better r, Better p) and (Best r, Best p) values are reported in Tables 1 and 2 for the square and triangular arrays, respectively.
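A minimal numerical sketch of this fine-tuning step is given below, assuming only that a table of (pitch, tuning wavelength) pairs is available from the sweep. The data values are made up for illustration (in practice each wavelength comes from an FDTD run), and the (Better r, Better p) pair is a placeholder taken from the square-array figures, not necessarily the tabulated value.

```python
import numpy as np

# Illustrative sweep data: array pitch p (nm) vs. wavelength of the R
# minimum (nm). In practice each wavelength comes from an FDTD run.
p_vals = np.arange(434.0, 486.0, 5.0)
lam = 770.0 + 0.9 * (p_vals - 459.0) + 2e-4 * (p_vals - 459.0) ** 3

# Third-order polynomial fit lambda(p), as described in the text.
c3, c2, c1, c0 = np.polyfit(p_vals, lam, 3)

# Inflection point: second derivative 6*c3*p + 2*c2 = 0.
p_inflection = -c2 / (3.0 * c3)

# Retune: solve lambda(p) = 770 nm for p, then rescale r so the r/p
# ratio stays constant (linear approximation for small variations).
roots = np.roots([c3, c2, c1, c0 - 770.0])
real_roots = roots[np.isreal(roots)].real
p_best = float(min(real_roots, key=lambda x: abs(x - p_inflection)))

better_r, better_p = 95.0, 459.0      # square-array pair (assumed placeholder)
r_best = better_r * p_best / better_p

print(f"inflection at p = {p_inflection:.1f} nm; "
      f"Best p = {p_best:.1f} nm, Best r = {r_best:.1f} nm")
```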
Discussion

To test the reliability of the fine-tuning procedure, we decided to perform a comparison using an alternative computational package, EMUstack, based on an open-source code [46]. EMUstack combines Bloch mode expansion in a scattering-matrix-like formalism with the finite-element method (FEM). In this respect, it is considered more rigorous than FDTD. On the other hand, FDTD allows one to better mimic the experimental configuration for the targeted application and thus improves the implementation and evaluation of real structures. Both methods, even if completely different, have proven to perform well in the computation of the optical responses of plasmonic systems.
Operatively, starting from the (Best r, Best p) pair, two separate sweeps on p and r were performed with both EMUstack and Ansys Lumerical FDTD [45].Operatively, starting from Best r (reported in Tables 1 and 2), the p values were swept below it to the limiting case of the p equal to the hole cylinder diameter and above it, until the R minimum spectral position fell within the range useful for Si-based detectors (400 ÷ 1100) nm.However, starting from Best p, the r was swept from the laser lithography resolution limit (30 nm) to the point where the diameter approached the value of the array pitch.The simulation parameters are reported in Appendix A.1.The results for the square and triangular arrays are reported in Figures 4 and 5, respectively. Regardless of the gold thicknesses, for both the sweep results on r and p, good consistency between FDTD and EMUstack is observed with discrepancies in the R minimum spectral positions of the order of 10 nm.Regarding the p dependence, as visible in panels (a) and (b) of Figure 4, the data are in agreement within the whole spanned p interval, from 200 nm to 700 nm.In the region below 250 nm, the cylinder hole diameter becomes comparable with p, and therefore, the computational method is no longer accurate, especially for EMUstack, due to the too-demanding computational requirements.A nice consistency can be observed in the r dependence too, for r values from 50 nm to 170 nm, with discrepancies of the order of 15 nm, as visible in panels (c) and (d) in Figure 4.For values outside the interval, either the computation is no more trustable or the r values are too low for real sensing applications and for competitive nanofabrication techniques. Also, for the triangular array, the sweeps show good agreement between the two computational methods as visible in Figure 5.Nevertheless, the discrepancies are slightly higher, up to 30 nm. Plasmonic Mode Dispersion The high flexibility of Ansys Lumerical FDTD software (release 2023 R2.3, v8.30.3578) allows for further analysis of the physics behind GNAs.In particular, we are interested in studying the dispersion of the GNA optical modes.Energy dispersion is an intrinsic property of SPPs which can be analytically accounted for from the dielectric function [1].When a periodic array is considered, the folding effect at the symmetry points must be also evaluated and a description in terms of the Brillouin zone has to be used for both SPPs as well as for the pure electromagnetic modes (Wood's anomalies) [13,16,[47][48][49].Moreover, the presence of holes with a finite size is driving the opening of band gaps at the symmetry points as well as the appearance of almost dispersionless localized modes.Experimentally, it is possible to follow the plasmonic modes' dispersion features by variable-angle reflectance (R) or transmittance (T) spectra [21,29,50]. At a computational level, the energy dispersion curves of photonic modes with respect to the in-plane (the x-y plane in Figure 1) wavevector component are carried out using wellestablished techniques [51][52][53] which are typically applicable to dielectric photonic crystals.Due to the inherently lossy nature of GNAs, FDTD simulations provide an effective tool for this investigation.For this reason, we developed a customized script to calculate the plasmonic mode dispersion exploiting Ansys Lumerical FDTD [54]. 
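For reference when reading the dispersion plots below, the quantities computed by the analytical code can be written explicitly. These are the standard textbook relations for SPP dispersion and its folding by a periodic lattice (see, e.g., [1,13]); they are quoted here as general background, not as formulas reproduced from this paper.

```latex
% SPP dispersion at a metal/dielectric interface:
k_{\mathrm{SPP}}(\omega) = \frac{\omega}{c}
  \sqrt{\frac{\varepsilon_m(\omega)\,\varepsilon_d}{\varepsilon_m(\omega)+\varepsilon_d}}

% Momentum matching (folding) by the array, with reciprocal lattice
% vectors G_x, G_y (|G| = 2\pi/p for a square lattice; i, j integers):
\mathbf{k}_{\mathrm{SPP}} = \mathbf{k}_{\parallel} + i\,\mathbf{G}_x + j\,\mathbf{G}_y
```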
Firstly, we considered the energy dispersion curves for the suspended gold metasurface in air and embedded in SiO2. The results are shown in the left columns of Figures 6 and 7 for the case of a gold thickness of 100 nm. Also in this case, we wanted to verify the validity of our customized script by comparing it with the analytical code reported in [55]. This code computes the folding of the Wood's anomalies (light lines) and the SPP dispersion inside the Brillouin zone. The outcomes are reported in the second and third columns of Figures 6 and 7 for the square and triangular array, respectively. The dispersion relationships are reported in Appendix A.3.

For both array dispositions, the main dispersion features can be clearly distinguished for both methods. Thus, FDTD can be used and trusted for the computation of plasmonic mode dispersion curves in GNAs. Moreover, analytical codes are needed to filter out the computational artifacts created by the FDTD and serve as a guide to improve and refine the energy dispersion curves. Nevertheless, it has to be pointed out that the FDTD results account for the real array and also for the plasmonic band gap opening resulting from the presence of the holes [15,16,49], which are not considered in the analytical approach.

For completeness, the energy dispersion curves were calculated for the actual GNA structure, and the results are shown in Figure 8. The complicated nature of the plasmonic mode dispersion is evident: both Wood's anomalies and SPPs at the Au/air and Au/SiO2 interfaces are present, generating a complex behavior due to their mutual interaction [28]. Nevertheless, the results suggest the possibility of developing a new, reliable approach to study the outcomes of the whole optimization procedure in terms of more complex GNA physical properties.
Conclusions

In this paper, we implemented a customized PSO algorithm in Ansys Lumerical FDTD. The PSO algorithm was able to successfully provide tuned GNA structures with a specific optical response. Consequently, we studied the convergence behavior for a set of optimized parameters (Better r, Better p) and FF. To verify the reliability of the results, we compared the optimization routine based on the FDTD method with a solution provided by a code based on the scattering-matrix formalism. The agreement between the results suggests the validity of our approach. In consideration of these outcomes, we decided to test the FDTD capability also in calculating the plasmonic mode dispersion. Again, the outcomes compared successfully with a specific code based on a direct analytical formalism. Therefore, FDTD proves to be a versatile and reliable method to perform a comprehensive study of GNAs. Experimental validation of the optimized GNAs has already been planned, providing a suitable alternative to the metasurfaces currently in use. Furthermore, validation tests of their sensing capabilities will be carried out, and full quantum method-based simulations [56][57][58] will be exploited to further probe the plasmonic properties.

…required on the i7 for the square array, while about 10 h were required for the hexagonal arrays on the i9.

Figure 1. Particle swarm optimization algorithm flow diagram (from upper left to lower right). At the end of the evolutionary process, the algorithm provides a global better solution, i.e., a (Better r, Better p) pair of geometrical parameters corresponding to the FDTD structure with the optimized optical response.
Figure 2. Three-dimensional scatter plots for the square and triangular arrays. Panels (a,c) show the iteration number against the velocity vectors. Panels (b,d) show the FF values against the agents.
Figure 3. Main steps of the fine-tuning procedure for the square (first row) and triangular (second row) arrays. Tuning wavelengths of the R minimum plotted versus the array pitch in panels (a,d). Second derivative of the fit (panels (b,e)). Reflectance spectra of the better and best FDTD structures (panels (c,f)).
Figure 4. Sweep on the array pitch, panels (a,b), and cylinder hole radius, panels (c,d), against the tuning wavelength for the square array.
Figure 5. Sweep on the array pitch, panels (a,b), and cylinder hole radius, panels (c,d), against the tuning wavelength for the triangular array.
Figure 6. Plasmonic mode dispersion in adimensional units for the square array. First column: results for the suspended metasurface in air and embedded in SiO2. Gold thickness set to 100 nm, Best p equal to 459 nm, and Best r equal to 95 nm. On the right: Wood's anomalies and SPP dispersion with folding.
Figure 7. Plasmonic mode dispersion in adimensional units for the triangular array. First column: results for the suspended metasurface in air and embedded in SiO2. Gold thickness set to 100 nm, Best p equal to 519 nm, and Best r equal to 97 nm. On the right: Wood's anomalies and SPP dispersion with folding.
Figure 8. Plasmonic mode dispersion in adimensional units for the square and triangular array considering the actual GNA structure with a gold thickness of 100 nm, with pitch and radius equal to 459 nm and 95 nm for the square array and 519 nm and 97 nm for the triangular array.
Table 1. (Better r, Better p) and (Best r, Best p) values resulting from the optimization procedure for the square array.
Table 2. (Better r, Better p) and (Best r, Best p) values resulting from the optimization procedure for the triangular array.
The Influence of Microbial Agent on the Mineralization Rate of Steel Slag

Bacteria-based mineralization is a new technique for using steel slag. In this article, an experimental examination was performed to investigate the improvement of steel slag by the addition of a microbial agent that has the potential to accelerate the mineralization ability of bacteria. It is observed that, under natural and CO2-pressure curing conditions, the carbonation rate is significantly raised when microorganisms are added to the steel slag. A higher ratio of microorganisms leads to a better carbonation rate. The reaction products formed by bacterial mineralization were analyzed with the scanning electron microscope (SEM) and X-ray diffraction (XRD), and the amount of reaction products was examined by thermogravimetric analysis. The results show that the compressive strength and carbonation speed rose with the increase in microorganism content. Bacteria could accelerate the rate of carbon sequestration in the mineralization process. The compressive strength of steel slag with 1.5% bacteria could reach up to 51.5 MPa. The micron-sized, rough mineralization product induced by microorganisms apparently resulted in a denser and more compact structure. The carbonation depth increased by 50%, and the content of calcite increased by 3 times. These mineralization products fill the pores of steel slag cementitious materials and form an integrated, denser structure, which produces more strength.

Introduction

Steel slag is rich in both silica and calcium and is a by-product of a high-temperature process [1]. It could be used as a value-added source of calcium silicate. Attempts have been made to use steel slag as a supplementary cementitious material in cements [2]. Different from ground-granulated blast-furnace (GGBF) slag, steel slag is neither hydraulic nor pozzolanic [3]. This is because it is deficient in tricalcium silicates and amorphous SiO2 components. In contrast, steel slag has demonstrated high reactivity to CO2. This makes it an appropriate slag for mineral sequestration, one of the methods suggested for reducing carbon emissions [4]. Steel slag can be stimulated by carbon dioxide to make a strength-contributing carbonate bond matrix [5]. C3S and C2S in steel slag can react with carbon dioxide, encouraging a faster strength increase through the generation of calcium silicate hydrates (CSH) and carbonates (CaCO3) [6].

Investigations of carbonated steel slag for production are still limited. The compressive strength and bulk density of porous slag blocks generated by carbonated steel slag curing for about twelve days are 18.4 MPa and 2.4 g/cm3, respectively [7]. The unconfined compressive strength of a carbonated compact of hard-pressed ground slag in contact with CO2 at a pressure of three bars was 9 MPa [8]. Electric arc furnace slag can also be stimulated by CO2; its compressive strength could reach 17 MPa, and its carbon uptake was 11% after 2 h of carbonation [9]. When slaked lime is added to the steel slag, the carbonated steel slag and slaked lime mixed specimen under a 0.3 MPa CO2 curing condition acquires 22.7 MPa compressive strength and 5.3 MPa flexural strength, respectively [10].
Various microbes in nature can accomplish the interconversion between CO2 and CaCO3 [11]. The conversion from CO2 to CaCO3 can be used to capture CO2 from the atmosphere. Carbonic anhydrase (CA) can accelerate the interconversion of CO2 and HCO3− and thus promote the absorption of CO2 [12]. Thus, CA can help in the capture of CO2 and the precipitation of CaCO3. Qian et al. studied the properties of steel slag, mainly strength and carbon uptake; the results of their research show that biomineralization could significantly improve the strength and carbon uptake.

In this investigation, one type of bacteria which can generate carbonic anhydrase was included in steel slag-based materials. Compared to other carbonated steel slags, the bacteria can encourage the interconversion between carbon dioxide from the curing environment and CaCO3. This means that CO2 might be transformed into minerals precipitated in the pores when it reacts quickly with soluble Ca2+. The bacteria illustrated an exceptional capability to increase the speed of mineralization. The particle size of the steel slag was less than 200 μm. The XRD pattern of the steel slag is shown in Figure 1. The main mineral phases in the steel slag are C2S, C3S, C3A, and a large amount of Fe3O4 and RO phase, which has no hydration activity, so the hydraulic activity of the steel slag is rather low.

Experiment

Raw Material. The density of the steel slag is 3100 kg/m3, and the specific surface area is 510 m2/kg. The steel slag powder is from Baoye Slag Comprehensive Development Co. Ltd., and the Ca(OH)2 is from Shanghai Ling Feng chemical agent Ltd. The chemical components of the steel slag and slaked lime are shown in Table 1. The modulus of the river sand, which was chosen as the aggregate, was 2.34, and its bulk density was 1490 kg/m3. The CO2 used in the experiment to create a carbonation environment is of 99% concentration and was produced by the Nanjing Shangyuan industrial gas plant.

Microbial Preparation. The Bacillus mucilaginosus used in this experiment was acquired from the China Center of Industrial Culture Collection (CCIC). Bacillus mucilaginosus was cultured in sucrose culture media. The composition of the purified medium of the strain is shown in Table 2; the pH value, adjusted with NaOH, is about 8.0. The purified medium was placed in a conical flask and sterilized at 121 °C for 25 min; the sterilized flask was then removed and placed in an oven to dry. The purified carbonic anhydrase-producing bacterial strain was inoculated into the culture medium and cultured in an oscillatory incubator at 30 °C for 24 h; the oscillation frequency of the incubator was 170 r/min. The culture had an OD600 value of 1.2, the number of bacteria was about 2.62 × 10^8 cfu·ml−1, and the enzyme activity value was 0.9 U·mmol·L−1. Then, the harvested microorganisms were kept in a refrigerator at 4 °C as a stock culture until the microbial agent was used.

Sample Preparation. The five mix proportions obtained by varying the bacterial ratio are given in Table 3. The water/binder (SS + SL) ratio and sand/binder ratio were fixed at 0.5 and 2.0, respectively. The raw materials were first mixed with water for 4 min and then cast into moulds of 40 mm × 40 mm × 160 mm. All specimens were demolded after 24 h of curing at 70% RH and 20 °C. After completing the 1 d hydration curing, the samples were cured under different conditions. The standard group samples were cured at 70% RH and 20 °C for 72 h. The carbonation group samples were cured under 0.3 MPa CO2 for 4 h and then cured at 70% RH and 20 °C for 68 h. The CO2 concentration in the environment used for curing the standard samples is 450 ppm. A schematic graphic representation of the mineralization setup is shown in Figure 2.
The setup includes a compressed cylinder with 99% purity CO2 gas, a carbonation chamber, a pressure transducer, and a pressure regulator. The pressure transducer monitors the gas pressure, and the regulator keeps the chamber pressure at the set value throughout the mineralization process.

Test Methods. The compressive strength and flexural strength were measured according to the Chinese test methods for Portland cement (GB175-2007) [13]. Each property was tested on six samples after curing, and the experimental data were averaged. The micromorphology parameters were measured using an FEI Sirion scanning electron microscope (SEM), the mineralogical composition analysis was conducted using a Bruker D8-Discover X-ray powder diffractometer (XRD), the pore structure analysis was conducted using a Micromeritics Autopore IV 9510 mercury intrusion porosimeter (MIP), and the thermal analysis of the specimens was conducted with a NETZSCH STA449F3 simultaneous thermal analyzer (DTA/TG). The Ca2+ concentration in the supernatant was determined according to the study of Stocks-Fischer et al. [14]. The OD400 value was measured with a spectrophotometer (UV-6000PC) every 2 h to characterize microbial growth and reproduction efficiency.

Microbial Activity. The OD400 value was measured with a spectrophotometer (UV-6000PC) to characterize microbial growth and reproduction efficiency in the steel slag. The simulated pore solution of the steel slag materials is shown in Table 4, and its pH was adjusted to 13. The OD400 value of the bacterial liquid in the simulated pore solution was tested over time, and the test results are shown in Figure 3. The change of the OD400 value directly reflects the amount of bacteria. The initial OD400 value is 1.22 in the simulated pore solution of the steel slag materials. From 0 to 60 h, the OD400 value decreases slowly, and from 60 to 100 h, the decline accelerates. Even at 100 h, however, the OD400 value was still around 1.0; the value dropped by less than 20%. Bacillus has good adaptability to highly alkaline environments and can survive in the alkaline steel slag environment with an internal pH of more than 13.

The change of the calcium concentration in solution directly reflects the formation rate of carbonate ions and the rate of carbonate mineralization (Figure 4). The initial concentration of Ca2+ was 50 mmol/L in the simulated pore solution of the steel slag materials. In the control test, the calcium ion concentration remained basically the same: no carbonate ions formed in the simulated pore solution, the carbon dioxide hydration reaction rate was low, and it is hard to generate enough carbonate ions in a short period of time to form mineral deposits. The microbial secretion was obtained by centrifugation of the microbial culture medium, with the microbial cells removed. The calcium ion concentration decreased significantly under the effect of the microorganisms, and the calcium ions had essentially completely reacted at about 90 h. The carbon dioxide hydration reaction rate increased significantly, more carbonate was generated in a relatively short time, and carbonate mineral deposits formed on reaction with calcium ions. The rate of decrease of the calcium ion concentration in the culture medium was slightly higher than that in the secretion: the bacteria, carrying a certain negative charge, first coupled with free Ca2+ in the solution and acted as nucleation sites, promoting mineral deposition [15].

Acceleration of Calcium Ion Deposition.
Figure 5 shows the calcium carbonate deposition induced by bacteria in the simulated pore solution of steel slag. The surface and interior of the calcium carbonate accumulate a large number of bacteria. The bacteria in the steel slag carry a certain negative charge and can provide nucleation sites for mineralized deposition, so spheroidal and crystalline calcite forms in the steel slag. Meanwhile, the bacteria generate a particular enzyme which can accelerate the generation of carbonate anions through enzymatic action, so the total amount of CO3^2− and the generation speed of the internal mineralization products were higher. Bacteria clearly accelerate calcium ion deposition.

Strength. The effects of the bacterial powder ratio on the strength of carbonated and uncarbonated steel slag are shown in Figure 6. The strength of the uncarbonated samples increases with the bacterial powder ratio; when the added quantity reached its maximum of 1.5%, the compressive strength of S4 reached 6.5 MPa, whereas with no addition, the compressive strength of S1 was only 1.2 MPa. When 1.5% bacterial powder is added, the compressive strength improves by about 5 times. The same tendency is observed in the case of carbonated steel slag. When the added quantity reached 1.5%, the compressive strength of SP1 reached 51.5 MPa, while that of the carbonated steel slag without bacterial powder was 33.8 MPa; the compressive strength improved by about 50%.

Carbonated Area. Some areas of the mortar cross-section turned purple-red after spraying with phenolphthalein solution. Cross-section staining experiments were done immediately after curing under the different conditions. The mineralization area of the steel slag cementing materials is shown in Figure 7. The experimental results proved the promotion of mineralization by microorganisms: bacteria have an obvious effect on the mineralization of steel slag cementing materials. The mineralization area of the steel slag with bacteria was larger than that of the samples subjected to the same mineralization process alone. Bacteria obviously accelerate the rate of carbon sequestration of steel slag and the mineralization process. Meanwhile, the mineralization process accelerated with increasing curing pressure, particularly in terms of speed.

The effects of the bacteria on the carbonated area are shown in Figure 8. The carbonation depth increases with the added amount of bacteria. Under natural curing conditions, when the added quantity reached 1.5%, the carbonation depth of S4 reached 6.1 mm, whereas that of S1 was only 2.3 mm; the carbonation depth increased by about 2 times. The same tendency is observed under the carbonation curing condition. When the added quantity is 1.5%, the carbonation depth of SP1 reaches 18.3 mm, while that of the carbonated steel slag without bacteria was 11.8 mm; the carbonation depth improved by about 50%. There was a positive correlation between carbonation depth and strength, and the relative increase in carbonation depth is approximately equal to the relative increase in strength.

Mineralogical Composition. The XRD patterns of microbially mineralized steel slag and carbonated steel slag without microorganisms can be observed in Figure 9.
Constituent minerals CaCO3, C2S, gehlenite, and Fe3O4 were found in the XRD patterns. Carbon dioxide converted the activated part (C2S and Ca(OH)2) into calcium carbonate. The major difference between the XRD patterns of the microbially mineralized samples and those mineralized without microorganisms is the intensity of the calcite peaks. The calcite diffraction peaks of the microbially mineralized steel slag cementing materials were stronger, indicating that the degree of crystallization of calcium carbonate and magnesium carbonate increased; this is why the strength of the microbially mineralized steel slag cementing materials is far higher than that of the steel slag cementing materials mineralized without microorganisms.

Thermal Analysis. As shown in Figures 10(a) and 10(b), the TG curves display the total weight loss of the mixture of Ca(OH)2 and CaCO3. They demonstrate that a number of characteristic peaks and the overall weight loss increased with the bacterial content. The figures show the TG curves of the microbially mineralized steel slag and the carbonated steel slag. From the TG curves, two weight-loss peaks are observed at approximately 400-550 °C and 600-800 °C, which represent the dehydration of Ca(OH)2 and the decarbonation of CaCO3, respectively, with weight losses of 4.87% and 2.70% for the carbonated sample and 3.28% and 8.50% for the microbially mineralized one. The corresponding contents of calcium carbonate are 6.1% and 19.3%, i.e., the content of calcium carbonate increased by about 300%. The principal strength of microbially mineralized steel slag derives from the mineralization of Ca(OH)2 as well as of the calcium silicates under the mineralization conditions. With the increase of the CO2 curing pressure, the calcium utilization of the mineralized steel slag also increased. The addition of microorganisms can significantly accelerate the mineralization reaction. The chemical process of the mineralization sequestration reaction can be illustrated by the following equations:

CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3− (accelerated by carbonic anhydrase)
HCO3− ⇌ H+ + CO3^2−
Ca(OH)2 → Ca^2+ + 2OH−
Ca^2+ + CO3^2− → CaCO3↓

Pore Structure Analysis. In the three-dimensional CT scan, different colors represent different pore sizes and the pore distribution. The pore size distribution results in Figure 11 show the total porosity for pore diameters between 0.1 mm and 1 mm; the values for the natural curing group, the carbonated group, and the microbial mineralization group were 1.25%, 1.39%, and 1.80%, respectively. Since the molar volume of calcium carbonate is 11.8% larger than that of calcium hydroxide, the calcium carbonate crystals gradually fill the pores in the steel slag. The porosity inside the microbial mineralization group is lower, the pore size is relatively small, and the distribution is more uniform; microbial mineralization optimized the pore structure. It can be seen from the figures that the porosity of the edge zone in all three groups is smaller than that of the internal part. In the microbial mineralization group, the pore size distribution is the most uniform between the edge and internal regions. The carbonated samples showed an uneven distribution and larger porosity.
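Returning to the thermogravimetric step above, the reported calcite contents follow directly from the CO2 weight losses via the molar masses; the short check below verifies the arithmetic (a back-of-the-envelope verification, not a calculation given in the paper).

```python
# Convert the TG weight loss in the CaCO3 decomposition step
# (CaCO3 -> CaO + CO2, 600-800 C) into calcium carbonate content,
# using the molar masses M(CO2) = 44 g/mol and M(CaCO3) = 100 g/mol.
for co2_loss_pct in (2.70, 8.50):
    caco3_pct = co2_loss_pct * 100.0 / 44.0
    print(f"CO2 loss {co2_loss_pct:.2f}% -> CaCO3 content {caco3_pct:.1f}%")
# Output: 6.1% and 19.3%, matching the values reported in the text.
```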
Microstructure Analysis. SEM and EDS analyses of the samples cured under the carbonation condition are shown in Figures 12 and 13 and Table 5. As the SEM images show, the overall morphology of the samples indicates a suitable and uniform distribution of calcite (CaCO3) crystal composites; the internal structure of the carbonated steel slag with microbes is compact, the calcite crystals formed under high pressure are closely arranged, the crystallinity is better, and the crystal size is greater, which contributes to high strength. EDS showed that, in the internal structure of the carbonated steel slag, calcite is distributed uniformly in the binding material, which obviously contributes to the strength. Meanwhile, hexagonal prism-shaped crystals formed in the samples; EDS showed that calcium hydroferrocarbonate (C3A·FeCO3·11H2O) formed during the hydration of the carbonated steel slag, and it may contribute to the increase of the compressive strength at later ages [16].

The surface morphology parameters of the calcite particles were observed with an atomic force microscope. The surfaces of calcite formed by the microbiological method and by the carbonation method are shown in Figure 14. The microbial mineralization product presents various textures with many grain boundaries and dislocations, and its surface is relatively rough. The enhancement of the strength of the steel slag cementitious materials by biomineralization is thus due not only to the improved mineralization rate but also to the different morphology and surface state of the biomineralized products.

CO3^2− would react with the free Ca2+ inside the samples and generate CaCO3. After mineralization, these reaction products fill the pores, and the pore size becomes relatively smaller. Under the natural curing condition, the concentration of CO2 gas in the steel slag is low, and the formation rate of CO3^2− is slower. The addition of microorganisms to the steel slag introduces a particular enzyme [17] which can accelerate the generation of carbonate anions through enzymatic action [18], so the total amount of CO3^2− and the generation speed of the internal mineralization products were higher than in the specimens without bacteria, and the strength of the mineralized specimens was therefore much higher [19].

Conclusion

The compressive strength of steel slag with 1.5% bacteria could reach up to 51.5 MPa.
The micron-sized, rough mineralization product induced by microorganisms apparently resulted in a denser and more compact structure, and the porosity was reduced by 50%. The carbonation depth increased by 50%, and the content of calcite increased by 3 times. There was a positive correlation between carbonation depth and strength, and the relative increase in carbonation depth is approximately equal to the relative increase in strength. These mineralization products fill the pores of the steel slag cementitious materials and form an integrated, denser structure, which produces more strength. The role of the microorganisms is to act as carriers and catalysts: they can transport a continuous stream of carbon dioxide into the body of the material during the carbonation reaction; meanwhile, inside the steel slag, Ca2+ has a higher probability of combining with CO3^2− to generate CaCO3, and the compressive strength and carbonation speed of the steel slag rose with the increase in microbes. Bacteria obviously accelerate the rate of carbonation of steel slag in the mineralization process. Microbes can efficiently induce the deposition of CaCO3 from the dissolved Ca2+ in the steel slag-slaked lime mixture. The calcium silicates in the steel slag and the Ca(OH)2 mineralize into CaCO3 crystals, and the mineralization products fill the gaps, optimizing the pore structure and increasing the strength.

Figure 3: Survival capacity of bacteria in the simulated pore solution of steel slag materials.
Figure 4: Change of Ca2+ concentration with reaction time during the bacterially induced calcium carbonate deposition.
Figure 15: Formation process of the biomineralization product in steel slag.
Table 2: Composition of the purified medium of the strain.
Table 4: Simulated pore solution of steel slag materials.
Mutual interactors as a principle for phenotype discovery in molecular interaction networks

Biological networks are powerful representations for the discovery of molecular phenotypes. Fundamental to network analysis is the principle, rooted in social networks, that nodes that interact in the network tend to have similar properties. While this long-standing principle underlies powerful methods in biology that associate molecules with phenotypes on the basis of network proximity, interacting molecules are not necessarily similar, and molecules with similar properties do not necessarily interact. Here, we show that molecules are more likely to have similar phenotypes, not if they directly interact in a molecular network, but if they interact with the same molecules. We call this the mutual interactor principle and show that it holds for several kinds of molecular networks, including protein-protein interaction, genetic interaction, and signaling networks. We then develop a machine learning framework for predicting molecular phenotypes on the basis of mutual interactors. Strikingly, the framework can predict drug targets, disease proteins, and protein functions in different species, and it performs better than much more complex algorithms. The framework is robust to incomplete biological data and is capable of generalizing to phenotypes it has not seen during training. Our work represents a network-based predictive platform for phenotypic characterization of biological molecules.

Introduction

Molecules in and across living cells are constantly interacting, giving rise to complex biological networks. These networks serve as a powerful resource for the study of human disease, molecular function and drug-target interactions. 1,2 For instance, evidence from multiple sources suggests that causative genes from the same or similar diseases tend to reside in the same neighborhood of protein-protein interaction networks. [3][4][5][6] Similarly, proteins associated with the same molecular functions form highly connected modules within protein-protein interaction networks. 7 These observations have motivated the development of bioinformatics methods that use molecular networks to infer associations between proteins and molecular phenotypes, including diseases, molecular functions, and drug targets. [8][9][10][11] Many of these methods assume that molecular networks obey the organizing principle of homophily: the idea that similarity breeds connection (see Figure 1b). 12 However, while this principle has been well-documented in social networks of many types (e.g. friendship, work, co-membership), it is unclear whether it captures the dynamics of biological networks. If not, existing bioinformatics methods that assume homophily may not realize the full potential of biological networks for scientific discovery.

To better understand the place for homophily in bioinformatics, we consider groups of phenotypically similar molecules (e.g. molecules associated with the same disease, involved in the same function, or targeted by the same drug) and study their interactions in large-scale biological networks. We find that most molecules associated with similar phenotypes do not interact directly in molecular networks, a result which calls into question the assumption of homophily, an assumption that is taken for granted by so many bioinformatics methods. In fact, a different principle better explains how phenotypic similarity relates to network structure in biology.
On average, two molecules that interact directly with one another will have less in common than two molecules that share many mutual interactors, just as two people in a social network may be more alike if they share many mutual friends. We call this the mutual interactor principle and validate it empirically on a diverse set of biological networks (see Figure 1c). Motivated by our findings, we develop a machine learning framework, Mutual Interactors, that can predict a molecule's phenotype based on the mutual interactors it shares with other molecules. We demonstrate the power, robustness, and scalability of Mutual Interactors on three key prediction tasks: disease protein prediction, drug target identification, and protein function prediction. With experiments across three different kinds of molecular networks (protein-protein interaction, signaling and genetic interaction) and four species (H. sapiens, S. cerevisiae, A. thaliana, M. musculus), we find that Mutual Interactors substantially outperforms existing methods, with gains in recall of up to 61%. Additionally, we show that the weights learned by our method provide insight into the functional properties and druggability of mutual interactors. Mutual Interactors is an approach based on a different network principle than existing bioinformatics methods. That it can outperform state-of-the-art approaches suggests a need to rethink the fundamental assumptions underlying machine learning methods for network biology.

Network connectivity of molecular phenotypes

One way we measure phenotypic similarity between two molecules is by comparing the set of phenotypes (e.g., diseases or functions) associated with each molecule and quantifying their similarity with the Jaccard index. We find that the average Jaccard index of the 62,084 molecule pairs that interact in the human reference interactome (HuRI) is significantly smaller than the average Jaccard index of the 62,084 molecule pairs with the most degree-normalized mutual interactors (p = 2.00 × 10⁻⁵⁹, dependent t-test). 13 We replicate this finding on three other large-scale interactomes: a PPI network derived from the BioGRID database 14 (p = 3.56 × 10⁻²⁶), another derived from the STRING database 15 (p = 1.29 × 10⁻¹⁰), and the PPI network compiled by Menche et al. (p = 1.02 × 10⁻⁴). 16 To further evaluate these two principles (i.e., homophily and the mutual interactor principle), we collect 75,744 disease-protein associations 17 and analyze their interactions in the protein-protein interaction network (see Figure 1d-f and Figure D4). For each disease-protein association we compute the fraction of the protein's direct interactors that are also associated with the disease. In only 17.8% of disease-protein associations is this fraction statistically significant (P < 0.05, permutation test). Moreover, in 46.5% of disease-protein associations, the protein does not interact directly with any other proteins associated with the same disease. For each disease-protein association, we also compute the degree-normalized count of mutual interactors between the protein and the other proteins associated with the disease. We call this the association's mutual interactor score (see Section B.3). In 31.0% of disease-protein associations, this score is significant (P < 0.05, permutation test). For other molecular phenotypes, we get similar results: proteins targeted by the same drug have a significant direct interactor score 35.1% of the time and a significant mutual interactor score 67.5% of the time (see Figure 3b). 18
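As a concrete illustration of how such a score can be computed, the sketch below implements a degree-normalized mutual interactor count in Python. The exact normalization of the paper's Section B.3 is not reproduced in this text, so the 1/√degree weighting here is an assumption, as are the toy network and all names.

```python
# A minimal sketch (not the paper's exact Section B.3 definition) of a
# degree-normalized mutual interactor score: for a candidate node u and a set
# of phenotype-associated nodes, count mutual interactors shared with each
# associated node, down-weighting high-degree interactors.
import networkx as nx

def mutual_interactor_score(graph: nx.Graph, candidate, associated) -> float:
    """Degree-normalized count of mutual interactors between `candidate`
    and the nodes in `associated` (hypothetical normalization: 1/sqrt(deg))."""
    neighbors_u = set(graph.neighbors(candidate))
    score = 0.0
    for v in associated:
        if v == candidate:
            continue
        mutual = neighbors_u & set(graph.neighbors(v))  # shared one-hop neighbors
        score += sum(1.0 / (graph.degree(z) ** 0.5) for z in mutual)
    d_u = graph.degree(candidate)
    return score / (d_u ** 0.5) if d_u else 0.0

# Toy usage: two "disease proteins" A and B never interact directly
# but share two mutual interactors, Z and H.
G = nx.Graph([("A", "Z"), ("B", "Z"), ("Z", "C"), ("A", "H"), ("B", "H"), ("H", "D")])
print(mutual_interactor_score(G, "A", {"B"}))
```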
In only 31.0% of the protein-function associations in the Gene Ontology is the direct interactor score significant, compared with 56.7% for the mutual interactor score (see Figure D1a). 19 For biological processes in the Gene Ontology, these fractions are 26.7% and 46.3% for the direct and mutual interactor scores, respectively (see Figure D1b). These results suggest that, in biological networks, there is more empirical evidence for the mutual interactor principle than there is for the principle of homophily.

Mutual Interactors as a machine learning method for predicting molecular phenotypes

Based on the mutual interactor principle, we develop a machine learning method for inferring associations between molecules and phenotypes. Below, we describe how our method can predict disease-protein associations using the protein-protein interaction network. In network-based disease protein prediction, the objective is to discover new disease-protein associations by leveraging the network properties of proteins we already know to be involved in the disease. Our method, Mutual Interactors, scores candidate disease-protein associations by evaluating the mutual interactors between the candidate protein and other proteins already known to be associated with the disease. Rather than score candidate disease-protein associations according to the raw count of these mutual interactors, our method learns to weight each mutual interactor differently. Intuitively, this makes sense: the significance of a mutual interactor depends on its profile. For example, that two proteins both interact with the same hub protein is probably less significant than two proteins both interacting with a low-degree signaling receptor. Rather than hard-code which mutual interactors we deem significant, through training on a large set of disease pathways, Mutual Interactors learns which proteins often interact with multiple proteins in the same disease pathway. Mutual Interactors maintains a weight w_z for every protein z in the interactome. This allows Mutual Interactors to down-weight spurious mutual interactors when evaluating a candidate association. To further ground our method, we consider its application to a specific disease pathway. Ketonemia is a condition wherein the concentration of ketone bodies in the blood is abnormally high. 20,21 In Figure 1a, we show the Ketonemia pathway in the human protein-protein interaction network. In red are the proteins known to be associated with Ketonemia, including MLYCD and BCKDHA. 22,23 We see that Ketonemia-associated proteins rarely interact with one another. In Figure 1g, we show the same network and disease pathway, but now we've highlighted in blue the mutual interactors between Ketonemia-associated proteins. Of all 21,557 proteins in the human protein-protein interaction network, Mutual Interactors predicts that PCCA, shown in orange, is the most likely to be associated with Ketonemia. PCCA is a protein involved in the breakdown of fatty acids, a process which produces ketone bodies as a byproduct. PCCA shares mutual interactors with four proteins known to be associated with Ketonemia: BCKDHA, DBT, FBP1, and MLYCD. Further, two of these mutual interactors, MCEE and PCCB, are of very low degree (with 7 and 21 interactions respectively) and are weighted highly by Mutual Interactors.
Problem Formulation

Though Mutual Interactors was motivated by the molecular phenotype prediction problem, it is a general model that can be applied in any setting where we'd like to group nodes on a graph. Suppose we have a graph G = (V, E) and a set of node sets S = {S_1, S_2, ..., S_k}, where each set S_i is a subset of the full node set, S_i ⊆ V. Note that the node sets need not be disjoint. For example, G could be a PPI network and each S_i could be the set of proteins associated with a different phenotype. We can split each node set S_i into a set of training nodes S̃_i ⊂ S_i and a set of test nodes S_i \ S̃_i. Given S̃_i and the network G, we're interested in uncovering the full set of nodes S_i. Formally, this means computing a probability Pr(u ∈ S_i | S̃_i) for each node u ∈ V.

The Mutual Interactors model

The mutual interactors of two nodes u and v are given by the set M(u, v) = N(u) ∩ N(v), where N(u) is the set of u's one-hop neighbors. For each node z ∈ V, Mutual Interactors maintains a weight w_z. As we discussed above, these weights are meant to capture the degree to which each node in the graph acts as a mutual interactor in the node sets of S. With a weight w_z for every possible mutual interactor in the network, we model the probability that a query node u is in a full node set S, given the training set S̃ ⊆ S, as a sigmoid of a degree-normalized, weighted count of mutual interactors, where d_u is the degree of node u, σ(x) = 1/(1 + e^{-x}) is the sigmoid function, a is a scale parameter, b is a bias parameter, and w_z is the learned weight for node z. With sparse matrix multiplication we can efficiently compute the probability for every node in the network with respect to a batch of k training sets {S̃_1, ..., S̃_k}: we encode the training sets as a binary matrix and express the batched scores in terms of the adjacency matrix A, the diagonal degree matrix D, and a diagonal matrix W with the weights w_z on the diagonal. Note that this formulation ignores any edge weights in the graph; future work should explore simple extensions of this formulation that incorporate edge weights.

Training the Mutual Interactors model

Given a meta-training set of k node sets S = {S_1, ..., S_k}, we can learn the model's weights W, a, and b that maximize the likelihood of observing the node sets in the meta-training set. During meta-training we simulate node set expansion by splitting each set S_i into a training set S̃_i, encoded by X ∈ {0, 1}^{m×n}, and a target set S_i \ S̃_i, encoded by Y ∈ {0, 1}^{m×n}. For each epoch, we randomly sample 90% of associations for the training set and use the remaining 10% for the test set. The input associations X are fed through our model to produce association probabilities P. We update model weights by minimizing a binary cross-entropy loss in which terms are weighted by the number of positive examples. We can minimize the loss using a gradient-based optimizer: first, we compute the gradient of the loss with respect to the model parameters via backpropagation; then, we use ADAM with a learning rate of 1.0. We train Mutual Interactors with weight decay 10⁻⁵ and a batch size of 200. 24 We train for five epochs and use 1/9 of the training labels as a validation set for early stopping.
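The published form of the scoring equation and its matrix expression did not survive extraction above. As a minimal sketch of one plausible reading, the code below computes batched scores of the form σ(a · X A W A D^{-1/2} + b) with dense NumPy arrays; the exact placement of the degree normalization, and all names here, are assumptions, not the authors' published formula or code.

```python
# A minimal sketch of one plausible reading of the Mutual Interactors forward
# pass: weighted mutual-interactor counts between each training set and each
# query node, degree-normalized, then passed through a scaled sigmoid. The
# exact normalization is an assumption (the published equation is elided).
import numpy as np

def mutual_interactors_forward(A, X, w, a, b):
    """A: (n, n) binary adjacency matrix; X: (k, n) binary training-set matrix;
    w: (n,) learned mutual-interactor weights; a, b: scale and bias scalars.
    Returns a (k, n) matrix of membership probabilities."""
    d = A.sum(axis=1)                            # node degrees
    # For set i and query u: sum_v X[i,v] * sum_z A[v,z] * w[z] * A[z,u]
    scores = X @ A @ np.diag(w) @ A              # weighted mutual-interactor counts
    scores = scores / np.sqrt(np.maximum(d, 1))  # degree-normalize each query node
    return 1.0 / (1.0 + np.exp(-(a * scores + b)))

# Toy usage: 5-node graph; the training set contains only node 0.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
X = np.array([[1, 0, 0, 0, 0]], dtype=float)
P = mutual_interactors_forward(A, X, w=np.ones(5), a=1.0, b=-2.0)
print(np.round(P, 3))  # node 3 shares two mutual interactors (1 and 2) with node 0
```

In a real interactome the adjacency matrix would be stored as a scipy.sparse matrix so that the two matrix products remain efficient; the dense arrays above are only for readability.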
Predicting disease-associated proteins with Mutual Interactors

We systematically evaluate our method by simulating disease protein discovery on 1,811 different disease pathways. In ten-fold cross-validation, we find that Mutual Interactors recovers a larger fraction of held-out proteins than do existing disease protein discovery methods. Specifically, for 10.2% of disease-protein associations our method ranks the held-out protein within the first 25 proteins in the network (recall-at-25 = 0.102). Mutual Interactors's performance represents an improvement of 60.9% in recall-at-25 over the next best performing method, random walks. Other network-based methods of disease protein discovery, including DIAMOnD 10 (recall-at-25 = 0.059), random walks 26 (recall-at-25 = 0.063), and graph convolutional neural networks 25 (recall-at-25 = 0.057), recover considerably fewer disease-protein associations (see Figure 2a,c-d). Moreover, Mutual Interactors maintains its advantage over existing methods across disease categories: in all seventeen categories that we considered, Mutual Interactors's mean recall-at-100 exceeds that of random walks (see Section C.3 and Figure C3). We also study whether Mutual Interactors can generalize to a new disease that is unrelated to the diseases it was trained on. To do so, we train Mutual Interactors in the more challenging setting where similar diseases are kept from straddling the train-test divide (see Section C.2 and Figure C2). In this setting, Mutual Interactors achieves a recall-at-25 of 0.096, a 50.7% increase in performance over the next best method, random walks. Mutual Interactors can naturally be extended to incorporate other sources of protein data. 27 In Section C.4, we describe a parametric Mutual Interactors model that incorporates functional profiles from the Gene Ontology when evaluating mutual interactors. Instead of learning a weight w_z for every protein z, this model learns one scalar-valued function mapping Gene Ontology embeddings to mutual interactor weights. We show that parametric Mutual Interactors performs on par with the original Mutual Interactors model, outperforming baseline methods by 45.5% in recall-at-25 (see Figure C4). The experimental data we use to construct molecular interaction networks is often incomplete or noisy: it is estimated that state-of-the-art interactomes are missing 80% of all the interactions in human cells. 16 In light of this, we test whether our method is tolerant of incomplete network data. We find that Mutual Interactors exhibits stable performance up to the removal of 50% of known PPI interactions. Mutual Interactors's performance with only half of all known interactions exceeds the performance of existing methods that use all known interactions (Figure 2b). We perform an ablation study to assess the benefits of meta-learning the mutual interactor weights w_z (see Figure D8). In the study, we compare our model with Constant Mutual Interactors, where w_z = 1 for all z. On tasks for which we have a large dataset of phenotypes (i.e. disease protein prediction and molecular function prediction in humans), meta-learning w_z improves performance by up to 16.6% in recall-at-25. However, on tasks for which data is scarce (i.e. drug-target prediction and non-human molecular function prediction), learning w_z does not provide a significant benefit. For these tasks, we report performance on Constant Mutual Interactors, where w_z = 1 for all z.
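Recall-at-k is the metric reported throughout these experiments. The sketch below shows one straightforward way such a metric can be computed for a single simulated discovery task; the function and variable names are illustrative, not taken from the authors' code.

```python
# A minimal sketch of a recall-at-k style evaluation: for each held-out
# association, check whether the held-out protein ranks within the top k of
# the model's scores over all candidate proteins.
import numpy as np

def recall_at_k(scores, held_out, known, k=25):
    """scores: model score per protein; held_out: indices of held-out true
    proteins; known: training proteins to exclude from the ranking."""
    masked = np.asarray(scores, dtype=float).copy()
    masked[known] = -np.inf                   # never rank training proteins
    top_k = set(np.argsort(-masked)[:k])      # indices of the k highest scores
    hits = sum(1 for p in held_out if p in top_k)
    return hits / len(held_out)

scores = np.array([0.9, 0.1, 0.8, 0.3, 0.7])
print(recall_at_k(scores, held_out=[2, 3], known=[0], k=2))  # -> 0.5
```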
Learned weights provide insight into the function and druggability of mutual interactors

Next we analyze the mutual interactor weights learned by our method. Recall that Mutual Interactors learns a weight w_z for every protein z in the interactome, which allows it to down-weight spurious mutual interactors when evaluating a candidate disease-protein association. Here, we study what insights into biological mechanisms these weights reveal. We find that the normalized Mutual Interactors weight w_z/√d_z is correlated with neither degree (r = 0.0359) nor triangle clustering coefficient (r = 0.0127) (see Figure D9). However, we do find that proteins with high weight are often involved in cell-cell signaling. We perform a functional enrichment analysis on the 75 proteins with the highest normalized weight w_z/√d_z. Of the fifteen functional classes most enriched in these proteins, ten, including signaling receptor activity and cell surface receptor signaling pathway, are directly related to transmembrane signaling, and the other five, including plasma membrane part, are tangentially related to signaling (see Figure D6). Further, we find that highly weighted proteins are often targeted by drugs. Among the 500 proteins with the highest degree-normalized weight, 33.6% are targeted by a drug in the DrugBank database. 18 By contrast, only 10.9% of proteins in the wider protein-protein interaction network are targeted by those drugs. This represents a significant increase (p ≤ 6.43 × 10⁻²⁴, Kolmogorov-Smirnov test). Although no drug-target interaction data was used, training our method to predict disease proteins gives us insights into which proteins are druggable.

Identifying drug targets with Mutual Interactors

The development of methods that can identify drug targets is an important area of research. 30-33 In this section we show how our method can also be used for this task. Recall that mutual interactors between proteins targeted by the same drug are statistically overrepresented in the protein-protein interaction network (see Figure 3a). As with disease-protein associations, Mutual Interactors can score candidate drug-target interactions by evaluating the mutual interactors between the candidate target protein and other proteins already known to be targeted by the drug (see Section 3.1 for a technical description of the approach). When we simulate drug-target identification with ten-fold cross-validation on the drugs and targets in the DrugBank database, 18 we find that our method outperforms existing network-based methods of drug-target identification (recall-at-25 = 0.374), including graph neural networks (recall-at-25 = 0.329) and random walks (recall-at-25 = 0.166). We also compare Mutual Interactors with probabilistic non-negative matrix factorization (NMF). 30-32 On aggregate, our method's performance is comparable to NMF's. However, on the hardest examples, drugs that share few targets with the drugs in the training set, our method (recall-at-25 = 0.381) significantly outperforms NMF (recall-at-25 = 0.006) (see Figure 3e). Further, our method provides insight into the side effects caused by off-target binding. For each drug in DrugBank, we use Mutual Interactors to identify potential protein targets that are not already known targets of the drug. Pairs of drugs for which our method makes similar target predictions tend to have similar side effects 34-37 (Figure 3c).
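The Figure 3c analysis compares drugs by the similarity of their predicted target sets. A minimal sketch of one such comparison, using the Jaccard index over the top-k predictions of each drug, is given below; the scoring dictionaries and the choice of Jaccard over top-k sets are illustrative assumptions, not the authors' exact procedure.

```python
# A minimal sketch of a Figure 3c-style comparison: two drugs are similar if
# their top-k predicted target sets overlap strongly (Jaccard index).
def top_k_targets(scores: dict, k: int = 25) -> set:
    """Return the k proteins with the highest predicted target scores."""
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def prediction_similarity(scores_a: dict, scores_b: dict, k: int = 25) -> float:
    a, b = top_k_targets(scores_a, k), top_k_targets(scores_b, k)
    return len(a & b) / len(a | b)

# Placeholder scores for two hypothetical drugs over four proteins.
drug_a = {"P1": 0.9, "P2": 0.8, "P3": 0.10, "P4": 0.05}
drug_b = {"P1": 0.7, "P2": 0.6, "P4": 0.40, "P3": 0.01}
print(prediction_similarity(drug_a, drug_b, k=2))  # shared top-2 targets -> 1.0
```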
Fig. 4 (caption): Predicting protein functions across species and molecular networks using mutual interactors. Overall protein function prediction performance across four species and six molecular networks: Molecular Function Ontology 38 terms are predicted using PPI, signaling, and genetic interaction networks for human, yeast S. cerevisiae, mouse M. musculus, and thale cress A. thaliana. Shown is the average maximum F-measure; 39 a perfect predictor would be characterized by Fmax = 1. Confidence intervals (95%) were determined using bootstrapping with n = 1,000 iterations. N = number of nodes, M = number of edges, ⟨k⟩ = average node degree.

Predicting molecular function across species and molecular networks

Molecules associated with the same molecular function (e.g., RNA polymerase I activity) or involved in the same biological process (e.g., nucleosome mobilization) tend to share mutual interactors in molecular networks of various types and species (see Figure D1a-b). For example, the eleven proteins involved in the formation of the secondary messenger cAMP (cyclase activity, GO:0009975) do not interact directly with one another in the protein-protein interaction network, but almost all of them interact with the same group of twenty-five mutual interactors (see Figure D3). Using the mutual interactor principle, we can predict the molecular functions and biological processes of molecules. Via ten-fold cross-validation, we compare Mutual Interactors to existing methods of molecular function prediction, including graph neural networks 40 and random walks. 26 Across all four species and in three different molecular networks (protein-protein interaction, signaling, and genetic interaction), we find that Mutual Interactors is the strongest predictor of both molecular function (see Figure 4) and biological process (see Figure D2).

Conclusion

This work demonstrates the importance of rooting biomedical network science methods in principles that are empirically validated on biological data, rather than borrowed from other domains. This need for more domain-specific methodology in biomedical network science is also demonstrated by Kovács et al., who find that social network principles do not apply for link prediction in PPI networks. 41 This study complements those findings: with experiments across three different kinds of molecular networks (protein-protein interaction, signaling and genetic interaction) and four species (H. sapiens, S. cerevisiae, A. thaliana, M. musculus), we show that a method designed specifically for biological data can better predict disease-protein associations, drug-target interactions and molecular function than can general methods of greater complexity. The power of Mutual Interactors to predict molecular phenotypes lies not in its algorithmic complexity (it outperforms far more involved methods) but rather in the simple, yet fundamental, principle that underpins it. Motivated by our findings that molecules with similar phenotypes tend to share mutual interactors, we formalize the mutual interactor principle mathematically with machine learning. Mutual Interactors is fast, easy to implement, and robust to incomplete network data; its foundational formulation makes it ripe for extension to new domains and problems.
Dynamical Analysis of Transmission Model of the Cattle Foot-and-Mouth Disease

The epidemic of foot-and-mouth disease (FMD) in cattle remains a particular concern in many countries and areas. The epidemic can spread by direct contact with carrier and symptomatic animals, as well as by indirect contact with the contaminated environment. FMD outbreaks indicate that the infection initially spreads through a farm before spreading between farms. In this paper, considering the cattle population, we establish a dynamical model of FMD with two patches, within-farm and outside-farm, and give the formula of the basic reproduction number R0. By constructing Lyapunov functions, we prove that the disease-free equilibrium is globally asymptotically stable when R0 < 1, and that the unique endemic equilibrium is globally asymptotically stable when R0 > 1. By numerical simulations, we confirm the global stability of the equilibria. In addition, by carrying out a sensitivity analysis of the basic reproduction number with respect to some parameters, we conclude that vaccination, quarantining or removal of carriers, and disinfection are useful control measures for FMD at the large-scale cattle farm. AMS subject classifications: 34D05, 34D20

Introduction

Foot-and-mouth disease (FMD) was the first disease for which the World Organization for Animal Health (OIE) established official status recognition. It is a highly contagious and economically devastating viral disease of cloven-hoofed animals, such as cattle, pigs, sheep, goats and deer. The typical clinical sign is the occurrence of blisters (or vesicles) on the muzzle, tongue, lips, mouth, between the toes, above the hooves, on the teats and at potential pressure points on the skin. The earliest written record of FMD dates to 1546, but the pathogenic agent was not discovered until the late nineteenth century, by two former pupils of Robert Koch. FMD has been notorious as a perennial threat to ruminants for centuries. FMD outbreaks have occurred in most countries with animals susceptible to the FMD virus (FMDV). Australia, New Zealand and Indonesia, Central and North America, and Western Europe are currently free of FMD. However, FMD is still prevalent in Africa, the Middle East, Asia, and South America. Depending on the epidemiological situation of FMD, the control strategy implemented varies from country to country. FMD-free countries or areas prefer to control epidemics by slaughtering infected animals rather than using vaccination as a control strategy. In many FMD-infected countries or areas, such as China, Mongolia, Korea and India, vaccination remains an alternative part of an effective control strategy; but the decision on whether or not to use vaccination lies with national authorities [25]. FMD is transmitted by multiple routes. Susceptible animals may become infected by contact with infected animals or contaminated objects, and by indirect contact with an infected environment [20,29]. In addition, animals are considered to be carriers of FMDV if the virus or viral genome can still be isolated from the esophageal-pharyngeal fluid more than 28 days after infection. The carrier state in cattle usually does not persist for more than 6 months, although in a small proportion it may last up to 3 years. Carrier animals hold a high level of neutralizing antibody within their sera but retain live virus and excrete low levels of FMDV [16,30]. A study has quantified the transmission rate of FMDV infection from carriers to susceptible animals [33].
Carrier animals are one possible source of infection and may be an occasional cause of new outbreaks [32,34]. Therefore a model of FMD transmission should take the carriers into account, especially in the many countries that use vaccination against FMD. Dynamic models play a significant role in estimating transmission size and evaluating transmission intensity as well as control measures. Numerous models have studied the mechanisms of disease transmission between farms, with the individual farm as the epidemiological unit of analysis. These models made predictions under different types of control measures taken to prevent the epidemic from spreading [2,5,6,8,10,11,14,21-23,28]. Because these models were based on the farm as a unit, all the animals at a farm were considered infected when one case was found, and as a consequence many uninfected animals were culled. In addition, the infection will initially spread through the farm before spreading between farms [15,26]. Within-farm transmission of FMD has been modeled as a module simulating a farm outbreak together with local control measures, with parameter values for animals estimated from the literature and vaccination experiments [12,23]. The qualitative analysis of a dynamical model can give insight into the mechanism of infection and highlight a 'threshold' effect which signals a radical change in the behavior of the epidemic, and the quantitative analysis can estimate the numbers of individuals likely to be affected by the disease. Therefore, in this paper we establish a dynamic model to describe propagation both on and off the farm and carry out a dynamical analysis of the model. The model consists of two patches: within-farm and outside-farm. It can provide an understanding of the spreading mechanism of FMD at a single farm as well as outside the farm, and gives us threshold values and other constants with which we describe the behavior of the disease and the control of its spread. The rest of the paper is organized as follows: In Section 2, we introduce the model in detail, give the basic reproduction number of the model, and carry out the dynamical analysis of the model. In Section 3, we take a large-scale cattle farm in China as a numerical example and illustrate the effectiveness of the proposed results. We give a brief discussion of the main conclusions in Section 4.

Model and basic reproduction number

We divide the herds of cattle into two patches: within-farm and outside-farm. Cattle are homogeneously mixed in each patch. The within-farm patch is a scale cattle farm, which has its own staff, feed mills and slaughterhouses. Farm staff, visitors and vehicles visit the farm regularly. The outside-farm patch refers to the herds that are mainly composed of rural households' scattered breeding within a certain radius of the farm. Here, we ignore the migration of animals between the patches, since artificial insemination (AI), in addition to captive breeding, has been extensively applied at the scale cattle farm. The transfer diagram of the model is depicted in Fig. 1. We interpret this model in three aspects: at the within-farm patch, at the outside-farm patch, and in the environment. At the within-farm patch, the total population (of size N) is divided into six states: susceptible (S), latent (E), carrier (asymptomatic, I1), symptomatic (I2), vaccinated (V) and recovered (R), with the total population N = S + V + E + I1 + I2 + R. An animal in state E is infected but not infectious, has no symptoms and does not excrete virus.
However, the FMDV is replicating in its body, which can be detected by nonstructural protein antibody (e.g. 3A, 3B, 2B, 2C, 3ABC) tests (NSPS) [3,7], and a positive animal would be quarantined or removed. An animal in state I1 is infectious and excretes virus but does not show symptoms; afterwards the animal recovers to state R. A proportion of the carrier animals is quarantined or removed on the basis of NSPS. Recovered animals are assumed to have permanent immunity. An animal in state I2 shows clinical signs or tests positive in etiologic diagnosis and is immediately culled. An animal in state V cannot be infected and does not transfer to state I1, but re-enters S when its immunity fades away. At the outside-farm patch, Sd, Ed, Id and Vd denote the numbers of susceptible, latent, symptomatic and vaccinated animals, with the total population satisfying Nd = Sd + Ed + Id + Vd. There is no carrier state, as FMDV is exclusively localized to the nasopharyngeal mucosa in carriers [31] and is not normally shed [24]. Meanwhile, at the outside-farm patch, the cattle have little chance of contact with each other owing to the low density of animals. In addition, given limited economic resources, these animals are not tested by NSPS. In the environments of the within-farm and outside-farm patches, the quantities of FMDV are denoted by W and Wd, respectively. In short, the susceptible class has an input of new susceptibles and removal (referred to as natural death and out-migration in this paper), while the remaining classes have removal only. We make the following hypotheses about the routes of infection of FMD. At the within-farm patch, susceptible animals can be infected in two ways: by direct contact with carrier and symptomatic animals, and by indirect contact with the contaminated environment, described by the incidence rates β1SI1, β2SI2, and ηSW/(θ+W), respectively. At the outside-farm patch, susceptible animals are infected by direct contact with symptomatic animals and by indirect contact with the contaminated environment, given by βdSdId and ηdSdWd/(θ+Wd), respectively. Since the farm is a confined space in which animals mix uniformly, an increase in population size proportionally increases population density, so the number of direct contacts is proportional to the total population size. In this sense, the bilinear incidences β1SI1, β2SI2, and βdSdId are suitable for direct transmission. For FMDV in the environment, the higher the amount of FMDV, the greater the probability that an individual is infected; but when the amount of FMDV increases beyond a certain degree, the probability of infection tends to saturation and no longer increases. Bilinear and standard incidences are then no longer valid, and the saturation incidence is more appropriate, given by ηSW/(θ+W) and ηdSdWd/(θ+Wd). In addition, farm staff, visitors and vehicles may carry FMDV from the infected environment when they move between the within-farm and outside-farm patches, which may induce transmission of FMD across patches.
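To make the contrast between the two incidence forms concrete, the sketch below evaluates both; the parameter values are illustrative only and are not the calibrated values of Section 3.

```python
# A small sketch contrasting the bilinear incidence used for direct contact
# with the saturating incidence used for environmental transmission.
def bilinear_incidence(beta, S, I):
    """Direct-contact new infections per unit time: beta * S * I."""
    return beta * S * I

def saturating_incidence(eta, S, W, theta):
    """Environmental new infections per unit time: eta * S * W / (theta + W);
    approaches the ceiling eta * S as the viral load W grows large."""
    return eta * S * W / (theta + W)

S = 1e4  # illustrative susceptible population
for W in (1e2, 1e4, 1e6, 1e8):
    print(W, round(saturating_incidence(eta=5e-5, S=S, W=W, theta=1e4), 3))
# The printed values approach eta * S = 0.5, illustrating the saturation.
```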
Based on the transmission mechanisms and the previous assumptions, we express the time evolution of the population states in a deterministic system of ordinary differential equations, denoted system (2.1). All the parameters of system (2.1) are described in Table 1 and assumed nonnegative. The total population N = S + I1 + I2 + E + V + R satisfies a bounded growth inequality, which implies that the feasible region Ω is positively invariant for system (2.1). It is easy to see that system (2.1) has a unique disease-free equilibrium P0. The threshold quantity known as the basic reproduction number R0 estimates the average number of secondary cases generated by one average primary case in an entirely susceptible population during the mean infectious period. Clearly, when R0 < 1 each successive 'infection generation' is smaller than its predecessor, and the infection cannot persist over time. Conversely, when R0 > 1 successive 'infection generations' are larger than their predecessors, and an initial case will lead to an outbreak. Using the next-generation matrix approach formulated by van den Driessche and Watmough [36], we give the basic reproduction number of system (2.1), with E, Ed, I1, I2, Id, W and Wd taken as the disease compartments. With the shorthand a = λ1 + λ2 + ω + µ, b = γ + ω + µ, c = p2 + µ, and e = λd + φ, the next-generation matrix yields the basic reproduction number R0, where R01 and R02 are the basic reproduction numbers induced by the within-farm and outside-farm transmission, respectively. It can be shown that R0 = max{R01, R02} when κ = 0 or κd = 0, and otherwise R0 > max{R01, R02}.

The global stability of the disease-free equilibrium

In this section, we investigate the global stability of the disease-free equilibrium. By van den Driessche and Watmough [36, Theorem 2], the disease-free equilibrium P0 of system (2.1) is locally asymptotically stable when R0 < 1, and unstable when R0 > 1.

Theorem 2.1. The disease-free equilibrium P0 of system (2.1) is globally asymptotically stable in Ω when R0 < 1.

Proof. We prove global stability of the disease-free equilibrium by using a Lyapunov function L constructed on the disease compartments of system (2.1). Since R0 < 1 implies R01 < 1 and R02 < 1, the derivative of L along trajectories satisfies L′ < 0; by LaSalle's invariance principle, the disease-free equilibrium P0 is globally asymptotically stable in Ω [17]. FMD can therefore be eliminated from the herds if the basic reproduction number of the model is less than unity.

Existence and stability of the endemic equilibrium

System (2.1) is said to be uniformly persistent if the disease compartments remain bounded away from zero [9,35]. Using a method similar to the proofs of [19, Proposition 3.3] and [27, Theorem D.3], the system is uniformly persistent, and the positive endemic equilibrium P* = (S*, E*, ...) exists, if and only if R0 > 1. We now prove the global property of the endemic equilibrium of system (2.1), which implies that the endemic equilibrium is unique.

Theorem 2.2. The unique endemic equilibrium P* of system (2.1) is globally asymptotically stable when R0 > 1.

Proof. We define a Lyapunov function L for the endemic equilibrium and show that its derivative along trajectories is negative when R0 > 1. In the next section, we take a large-scale cattle farm in China as a numerical example; FMD outbreaks due to FMD type O occurred at the farm.
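Since system (2.1) itself is not reproduced in this text, the sketch below integrates a simplified within-farm patch assembled from the compartments and incidence terms described above; the equations and parameter values are illustrative assumptions, not the paper's calibrated model.

```python
# A simplified, illustrative within-farm patch (S, E, I1, I2, V, R, W) built
# from the compartments and incidence terms described in the text; the exact
# system (2.1) and these parameter values are assumptions, not the paper's.
import numpy as np
from scipy.integrate import odeint

def within_farm(y, t, p):
    S, E, I1, I2, V, R, W = y
    force = p["b1"]*S*I1 + p["b2"]*S*I2 + p["eta"]*S*W/(p["theta"] + W)
    dS  = p["A"] - force - p["delta"]*S + p["tau"]*V - p["mu"]*S
    dE  = force - (p["lam1"] + p["lam2"] + p["omega"] + p["mu"])*E
    dI1 = p["lam1"]*E - (p["gamma"] + p["omega"] + p["mu"])*I1   # carriers
    dI2 = p["lam2"]*E - (p["p2"] + p["mu"])*I2                   # symptomatic, culled at rate p2
    dV  = p["delta"]*S - p["tau"]*V - p["mu"]*V                  # vaccination and waning
    dR  = p["gamma"]*I1 - p["mu"]*R
    dW  = p["eps"]*(I1 + I2) - p["d"]*W                          # shedding and viral decay
    return [dS, dE, dI1, dI2, dV, dR, dW]

params = dict(A=420, b1=5e-5, b2=5e-5, eta=5e-5, theta=1e4, delta=0.3, tau=0.05,
              lam1=3, lam2=3, omega=0.005, gamma=0.1076, p2=2, mu=0.041,
              eps=6e4, d=1)
t = np.linspace(0, 36, 361)            # 36 months
y0 = [1e4, 1, 0, 0, 0, 0, 0]           # one latent animal introduced
sol = odeint(within_farm, y0, t, args=(params,))
print("final infectious (I1 + I2): %.2f" % (sol[-1, 2] + sol[-1, 3]))
```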
We then give a sensitivity analysis of the basic reproduction number with respect to the model parameters, and show that the basic reproduction number is a global threshold parameter for the extinction and persistence of FMD.

Parameter evaluation

On the basis of field investigation and data collection, expert consultation and feedback, official publications and the literature, and estimation methods, we obtain the values of the parameters of model (2.1). Since the number of FMD outbreak points in China is reported monthly, the time unit of the model is taken to be the month. The parameters A = 0.5 × N/12, Ad = 0.5 × Nd/12, and µ = φ = 0.41 are estimated by fitting the cattle production data in the China Statistical Yearbook. The latent period may vary from 2 to 14 days in cattle, with a mean of 5 days, giving λ1 = λ2 = λd = 6 per month [37]. The recovery rate is γ = 0.1076 [33]. In China, household scattered breeding is decreasing, while scale farming is increasing rapidly at present. The number of cattle at a large-scale cattle farm ranges from one thousand to ten thousand, so we adopt an average of about 10^4 cattle at the farm and assume N = 10^4. For simplicity, we set the transmission coefficients equal, β1 = β2 = η = ζC(N)/N, in which ζ = 0.026 is the probability that a contact with an infectious animal produces an infection [33], and the effective contact number C(N) = 20 is the number of cattle living in a byre. Farm staff, visitors and vehicles visit the farm more than twice a month and may each time carry about 10% of the virus from the environment, so κ = κd = 2 × 0.1. A random sample of 3 percent of the cattle at the farm is tested by NSPS every 6 months, and the positive cattle are quarantined or removed, so we assume ω = 0.03/6. We take one month as the average survival time of FMDV in the environment, so the natural decay rate of FMDV is 1. The average peak amount of virus discharged by a heifer per day can reach 10^4.3 TCID50 (tissue culture 50% infective doses), and the median duration of shedding is 3 days, so the shedding rate is ε1 = ε2 = εd = 3 × 10^4.3 [1,24]. We apply the least-squares estimation method to obtain the values of βd, ηd, τd, δd and pd, fitting the model to the cattle cases of FMD type O from 2010 to 2016, which are reported in monthly or yearly units in the China Yearbook of Animal Husbandry and Veterinary Medicine. These parameters are estimated by fitting the outside-farm patch of model (2.1) to the actual monthly cumulative number of cases. The data fitting result is shown in Fig. 2.

Sensitivity analysis of the basic reproduction number R0

Based on model (2.1) and the parameter values in Table 2, we give the phase diagram of S(t) and E(t) for different initial conditions when R0 < 1 and R0 > 1, respectively (see Fig. 3). It shows that the disease-free equilibrium is globally asymptotically stable when R0 < 1, so an epidemic will not occur, while the endemic equilibrium exists uniquely and is globally asymptotically stable when R0 > 1, meaning that an initial case will lead to an outbreak. We then give the sensitivity analysis of R0 with respect to the control parameters δ, r and ω in Fig. 4(a), (b), (c). The effects of δ, ω and r on R0 are quantified by the normalized sensitivity indices (∂R0/∂δ) · (δ/R0) = −0.39, (∂R0/∂ω) · (ω/R0) = −0.07, and (∂R0/∂r) · (r/R0) = −0.04.
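A normalized sensitivity index of this form can be approximated numerically by finite differences, as in the sketch below; the closed-form R0 of the model is not reproduced in this text, so the r0() expression used here is an illustrative stand-in.

```python
# A sketch of the normalized sensitivity index (dR0/dp) * (p/R0), approximated
# by central finite differences for a generic R0 function. The r0() expression
# below is an illustrative stand-in, not the paper's closed-form R0.
def normalized_sensitivity(r0, params, name, h=1e-6):
    base = r0(params)
    p = params[name]
    up = dict(params, **{name: p * (1 + h)})
    down = dict(params, **{name: p * (1 - h)})
    dr0_dp = (r0(up) - r0(down)) / (2 * p * h)   # central difference
    return dr0_dp * p / base

def r0(p):  # stand-in: transmission over effective removal, reduced by vaccination
    return p["beta"] * p["C"] / (p["gamma"] + p["omega"]) * (1 - p["delta"])

params = dict(beta=0.026, C=20, gamma=0.1076, omega=0.005, delta=0.3)
for name in ("delta", "omega"):
    print(name, round(normalized_sensitivity(r0, params, name), 3))
```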
We can see that vaccination is the most effective measure for decreasing R0; it remains the main strategy for controlling the spread of FMD type O at the large-scale cattle farm. Fig. 4(d) shows that R0 increases at higher κd, so farm staff, visitors and vehicles must be stringently disinfected before they enter and leave the farm. Comparing Fig. 5 with Fig. 6, we can see that the joint effects of δ and ω, and of r and ω, on R0 are rather pronounced: R0 < 1 when δ = 0.57, ω = 0, r = 1, and R0 < 0.8 when δ = 0.57, ω = 0.06, r = 4. Thus the more vaccination, quarantining or removal of positive cattle by NSPS, and disinfection, the less infection will be caused.

Conclusion

Foot-and-mouth disease (FMD) is a highly contagious viral disease of cloven-hoofed animals with a significant economic impact. Of the OIE Member Countries and Territories, 65 are recognized as free from FMD without vaccination, and 1 country is recognized as free with vaccination. Several other countries have zones that are recognized as free with or without vaccination. Over 100 countries are still not considered FMD-free. FMD remains endemic in parts of Asia, Africa, South America and the Middle East, where some countries use vaccination. Considering the transmission mechanism of cattle FMD both on and off the farm, and the existence of the carrier state, we have constructed a dynamic model to study the transmission dynamics of cattle FMD. The model consists of two patches: within-farm and outside-farm. First, we have given the formula of the basic reproduction number R0, which determines whether the disease dies out or persists in the population. By constructing Lyapunov functions, we proved that the disease-free equilibrium is globally asymptotically stable if R0 < 1, while the endemic equilibrium exists uniquely and is globally asymptotically stable if R0 > 1. As a numerical example, we applied the dynamical model to assess the spread of FMD type O at a large-scale cattle farm in China. The global asymptotic stability of the equilibria was confirmed by numerical simulation. By the sensitivity analysis of the basic reproduction number R0 with respect to the parameters, we find that vaccination is the most effective measure for decreasing R0. To control the spread of FMD type O at the large-scale cattle farm, the condition R0 < 1 must be satisfied. We suggest that the vaccination success rate be higher than 0.57, that farm staff, visitors and vehicles be stringently disinfected before they enter and leave the farm, and that quarantining or removal of positive cattle by NSPS and disinfection be carried out four times a month; by doing so, FMD can be eliminated from the large-scale cattle farm.
Effects of Swallowing Training and Follow-up on the Problems Associated with Dysphagia in Patients with Stroke

AIM This study aimed to determine the effect of poststroke swallowing training and follow-up on swallowing function, nutritional status, and the development of problems associated with dysphagia. METHOD This study was designed as a single-group, pretest-posttest, quasi-experimental study and was conducted with 32 patients who met the inclusion criteria for the study and were hospitalized with a diagnosis of acute stroke in the neurology clinic of a training and research hospital between June 2010 and September 2011. The patients were provided with swallowing training, followed up during meals, and given a training brochure. The Structured Information Form, the Standardized Mini Mental Test, the Barthel Index, and the Bedside Water Drinking Assessment Test were used to collect the data. Data were analyzed with the SPSS 16.0 program using descriptive and comparative statistical methods. The TREND statement was followed for reporting. RESULTS There was a statistically highly significant difference (p < .01) in the mean total score of the bedside water drinking assessment test after the swallowing training compared with before it; the duration of eating shortened (p < .01) and the amount of food consumed increased (p < .01) relative to the first follow-up. The patients stayed in the hospital for an average of 9.75 ± 3.44 days, and aspiration occurred in 9.4% of them during this period. The patients who developed aspiration had prior lung problems. CONCLUSION Swallowing training decreased the duration of eating and increased the amount of food consumed in patients with stroke and resulting dysphagia. The implementation of the training and the follow-up of swallowing function could be useful in preventing the development of problems.

Introduction

Dysphagia is one of the common problems after a stroke (Bonilha et al., 2014; Boyraz, 2015; Çiyiltepe, 2004; Karaca Umay et al., 2010; Özdemir & Çekin, 2011; Palli et al., 2017). Depending on factors such as the location and size of the lesion and the evaluation method used, the reported incidence of dysphagia after a stroke varies between 29% and 81% (Huang et al., 2014; Smithard et al., 2007; Türkmen, 2005; Westergren, 2006). Paciaroni et al. (2004) evaluated 406 patients who were diagnosed with a stroke within the first 24 hours and observed that 34.7% of them developed dysphagia. Smithard et al. (2007) conducted a long-term follow-up in a sample of 1288 people who had a stroke for the first time and detected dysphagia in 567 (44%) patients in the initial evaluation. Poststroke dysphagia is associated with decreased oral, pharyngeal, and esophageal function; it can lead to social isolation as well as to serious problems such as airway obstruction, aspiration, aspiration pneumonia, dehydration, malnutrition, sepsis and death; and it affects the quality of life and may cause a delay in rehabilitation and an increase in health spending (Huang et al., 2014; Karaca Umay et al., 2010; McFarlane et al., 2014; Mourao et al., 2016; Özdemir & Çekin, 2011; Türkmen, 2005; Westergren, 2006).
For the treatment and rehabilitation of dysphagia, it is universally recommended that swallowing function be diagnosed and continuously monitored in all patients with a diagnosis of acute stroke within the first 24 hours, using validated, reliable, and sensitive bedside assessment tools (American Heart Association/American Stroke Association, 2013, Class I; Level of Evidence B) (Boyraz, 2015; Jauch et al., 2013; Kang et al., 2012), and it is emphasized that the nurse has an important role in swallowing training (Palli et al., 2017). Direct and indirect methods are used in the rehabilitation and training program for dysphagia. The aim of direct methods is to improve voluntary motor activities; they include sensory stimulation of motor activity and speed with electrical, chemical, and thermal applications. The aim of indirect methods is to increase swallowing safety and prevent aspiration into the respiratory tract by providing compensation through individualized interventions, without correcting the underlying neuromuscular insufficiency. Indirect methods that are also applied by nurses include postural arrangements (chin-to-chest, chin tuck posture, tilting the head backward, turning the head to the strong/weak side), diet modification (viscous liquids, homogenized semi-solid foods), and compensatory maneuvers (supraglottic swallowing technique, super-supraglottic swallowing technique, Mendelsohn maneuver, and hard swallowing) (Arsava et al., 2018; McCullough et al., 2012; Perry, 2001; Vural et al., 2004). Ensuring and maintaining safe nutrition helps prevent the development of problems owing to swallowing difficulties, reduces the duration of the patient's hospital stay, and contributes to an increase in life expectancy and quality of life (Perry, 2001; Selçuk, 2006). In their study, Elfetoh and Karaly (2018) concluded that a swallowing training program given to stroke patients with dysphagia was effective in improving swallowing. A study by Lin et al. (2003) determined that swallowing training in stroke patients with dysphagia increased the amount of food consumed and that there was a significant increase in the arm circumference and body weight of these patients. However, in our country, no study was found in which swallowing function was evaluated in patients with stroke, swallowing training was given to the patient and the person who would care for the patient, and the results were evaluated. Therefore, this study aimed to determine the effect of poststroke swallowing training and follow-up on swallowing function, nutritional status, and the development of problems associated with dysphagia. The following hypotheses (H) were tested in the study: H1: Swallowing training and follow-up in stroke patients with dysphagia strengthen the patient's swallowing function. H2: Swallowing training and follow-up in stroke patients with dysphagia shorten the duration of eating. H3: Swallowing training and follow-up in stroke patients with dysphagia increase the amount of food consumed. H4: Swallowing training and follow-up in stroke patients with dysphagia prevent the development of problems associated with dysphagia.

Study Design

This study was conducted as a single-group, pretest-posttest, quasi-experimental study.

Sample

A total of 180 patients diagnosed with acute stroke and hospitalized in the neurology clinic of a training and research hospital between June 2010 and September 2011 constituted the population of the study.
Among these patients, 37 who developed dysphagia within the first 24-48 hours and met the inclusion criteria constituted the sample of the study. However, 5 patients were excluded because of changes in their clinical and cognitive condition after 48 hours. Thus, the study was conducted with a total of 32 patients. The TREND statement was followed for reporting. Figure 1 illustrates the inclusion process in a flow chart. Inclusion criteria: patients with a Standardized Mini Mental Test (SMMT) score ≥24, a first assessment of swallowing function within 24-48 hours, voluntary coughing, the ability to swallow secretions, symmetrical facial movements, a bedside water drinking assessment test score ≥3, and who were willing and volunteered to participate in the study were included.

Data Collection Tools

The data of the study were collected using the SMMT, the structured information form, the Barthel Index, and the bedside water drinking assessment test. A training brochure was given to the patient/family/caregiver. The scores of the SMMT and the bedside water drinking assessment test, which were among the data collection tools, were also used as inclusion criteria for the study.

Standardized Mini Mental Test

The SMMT, which was developed by Folstein et al. in 1975, is a short, useful, and standardized method that can be used to evaluate the cognitive level globally. It includes 11 items and 5 sub-dimensions, namely orientation (10 points), registration (3 points), attention and calculation (5 points), recall (3 points), and language (9 points), and is evaluated over a total of 30 points. Scores between 24 and 30 indicate a normal cognitive level, 21-23 indicate mild cognitive impairment, and ≤20 indicates moderate to severe cognitive impairment (Folstein et al., 1975; Güngen et al., 2002). The SMMT was adapted to Turkish society for literate individuals by Güngen et al. in 2002, and for illiterate individuals by Ertan et al. in 1999. Patients with a score of ≥24 on the SMMT were included in the study.

Structured Information Form

This form was developed after reviewing the literature and included 3 parts. The first part of the form included individual characteristics (age, sex, and educational status) and disease-related characteristics of the patient (diagnosis and total length of stay in the hospital); the second part included the variables affecting nutrition and swallowing function (the Barthel Index and the bedside water drinking assessment test to determine nutrition and swallowing function and the variables affecting them); and the third part included observational data collected during swallowing training and nutrition (the amount of food consumed, the duration of eating, the development of problems associated with dysphagia, and so on) and the data obtained by measurements (body mass index [BMI], body temperature follow-up, blood sodium, blood urea nitrogen [BUN] and albumin values). The third part of the information form allowed the recording of repeated measurement data for each patient. Dehydration and malnutrition are among the common poststroke complications owing to dysphagia. On the basis of a literature review, this study evaluated sodium and BUN values in the blood as indicators of the hydration status of the patients (Crary et al., 2013), and BMI and serum albumin levels as indicators of nutritional status (Boyraz, 2015; Horasan, 2012).
Barthel Index

The 10-item index was developed by Mahoney and Barthel in 1965 to measure the level of disability experienced during life activities; its sensitivity was modified and increased by Shah et al. in 1989. This index, adapted for Turkish society by Küçükdeveci et al. in 2000, was demonstrated to be valid and reliable for groups of patients with stroke and spinal cord injury. The index includes 10 subheadings: eating, bathing, self-care, dressing, bladder and bowel control, toilet use, chair/bed transfer, mobility, and staircase use. The total score ranges from 0 to 100, where 0 indicates full dependence and 100 indicates full independence. The Cronbach's alpha internal consistency coefficient for stroke patients was found to be 0.93 (Küçükdeveci et al., 2000; Shah et al., 1989). In this study, the Cronbach's alpha internal consistency coefficient of the scale was found to be 0.83 in the first follow-up and 0.89 in the second and third follow-ups. In this study, the Barthel Index was used to evaluate the physical dependence of patients in their life activities during follow-up.

Bedside Water Drinking Assessment Test

This is a commonly used method to evaluate swallowing function by giving patients small amounts of water (Arsava et al., 2018; Osawa et al., 2013). Patients with a score of 0-2 are considered to have normal swallowing function, whereas those with a score of 3-6 have dysphagia (Türkmen, 2005). In other studies, it was observed that the evaluation of this test together with oxygen saturation led to more precise results (Smith et al., 2000; Tippett, 2011; Türkmen, 2005). Therefore, in our study, the bedside water drinking assessment test was performed along with pulse oximetry. The patient was seated in an upright position and asked to drink 10 mL of water from a plastic glass without using a straw and without pausing (those who coughed or seemed to choke and those whose voice quality changed after drinking were considered unsuccessful in the water swallowing test). During this time and for the next 10 minutes, arterial O2 saturation was evaluated from the unaffected arm and the second (index) finger with a battery-operated console-type pulse oximeter. Pulse oximetry is an easy, fast, non-invasive method that is reliable in determining aspiration (Tippett, 2011). In this study, the bedside water drinking assessment test was performed along with pulse oximetry to evaluate the swallowing function of the patients during follow-up.

Figure 1. Flow Diagram of the Study (TREND Statement)

Training brochure

The training brochure was prepared using evidence-based practice guidelines (American Heart Association/American Stroke Association, 2013, Class I; Evidence Level B) (Jauch et al., 2013), meta-analysis studies (Hines et al., 2010; Westergren, 2006), and a literature review (Akın & Durna, 2016; Çiyiltepe, 2004; Gulanick & Myers, 2007; Horasan, 2012; Özdemir & Çekin, 2011; Potter & Perry, 2008; Vural et al., 2004). The brochure included information on the issues related to swallowing and nutrition that should be considered before, during, and after feeding to ensure and maintain safe nutrition. In this study, the training brochure was used to ensure and maintain the safe nutrition of patients with stroke whose swallowing function was affected.

Data Collection

The study was conducted in 3 follow-ups. The first follow-up was performed at patient admission, the second at discharge, and the third at the first clinical control in the hospital 30-45 days after discharge.
The applications in the first and second follow-ups were maintained until the patients were discharged. The SMMT, the first part of the structured information form, the bedside water drinking assessment test, the second part of the form, and the Barthel Index were applied at patient admission (within the first 24-48 hours after stroke diagnosis) to obtain baseline data from the patients who met the inclusion criteria for the study. Body temperature and anthropometric measurements (BMI) were taken, and the information in the patient file, such as laboratory and radiology test results performed upon the physician's request (blood sodium, BUN, and albumin values and the chest radiography report), was recorded in the third part of the information form (n = 37) (first follow-up). The measurements and evaluations, except for the SMMT and the first part of the structured information form covering individual and disease characteristics, were repeated at the patient's discharge and at the first check-up in the hospital approximately 30-45 days after discharge (second and third follow-ups). Swallowing training and follow-up were performed in accordance with the information included in the training brochure. When deciding on the method to be used in the swallowing training, a physiotherapist was consulted when necessary, and the chin-to-chest, supraglottic swallowing, and Mendelsohn maneuvers were used. Chin-to-chest is the process of swallowing with the head tilted forward and the patient's chin resting on the chest; aspiration is less likely to occur when the head is slightly tilted forward. Supraglottic swallowing refers to taking food into the mouth and chewing it, taking a deep breath before swallowing and holding it, passing the bolus to the pharynx as a whole by pushing the head back slightly at the same time, and swallowing the bolus while holding the breath. The patient coughs immediately after swallowing and before breathing again; thus, the bolus passes through the pharynx without any problem. Voluntarily extending the time the airway is kept closed before and during swallowing is a technique to protect the airway and prevent aspiration. The Mendelsohn maneuver is a swallowing technique used to improve the width and duration of the cricopharyngeal opening during swallowing: the patient is asked to swallow, hold the larynx elevated for 2-3 seconds after swallowing, and then swallow again (Boyraz, 2015; Çiyiltepe, 2004; Vural et al., 2004). For patients who were functionally dependent in maintaining their life activities, families/caregivers were included in the training. The swallowing training, which continued until the patient was discharged, was performed by the researcher and repeated in cases where the family members/caregivers involved in the care changed. In the first 2 meals, the act of swallowing was guided by the researcher and observed by the patient and family/caregiver. During the subsequent meals, the patient and family/caregiver managed swallowing and nutrition, and the researcher participated as an observer. The researcher intervened when necessary, repeated the swallowing training at the required stages, and supported the maintenance of safe nutrition.

Swallowing Training

• Patients who used glasses, hearing aids, or dentures were fitted with them before feeding. • Clothes tightening the neck, torso, or abdomen, if any, were loosened. • The patients were provided with oral care before feeding to prevent aspiration pneumonia and to stimulate the flow of saliva and the sense of taste.
• In accordance with the physician's recommendation, a diet consisting of viscous liquids and homogenized semi-solid food was prepared for bolus control in collaboration with the dietician and the patient's family/caregiver.
• A suitable body and head position was provided for safe swallowing. The head of patients fed in the sitting position was raised to 90 degrees, and the head of patients fed in the lying position was raised to 60 degrees. The weak parts of the body, the hips, and the back were supported with pillows.
• The chin-to-chest, supraglottic swallowing, and Mendelsohn maneuvers were used to ensure safe swallowing.
• During feeding, the patient was reminded not to speak while eating. The spoon was held above mouth level, and touching the teeth or placing the food too far back in the mouth was avoided. Only 1 teaspoon of homogenized semi-solid food or 10-15 mL of viscous liquid food was given at a time; the amount was increased once the patients managed to swallow this amount successfully. Before each spoonful, the patient was encouraged to swallow through verbal guidance ("Take the food in your mouth. Keep the food in your mouth. Close your lips. Lift your tongue to your palate. Think about your swallowing. Close your mouth and swallow. Swallow again. Cough to clear the airway."). The patient was observed for delayed cough, change in voice quality, and change in lung sounds during feeding; if any of these were present, the physician was informed, and the feeding was interrupted or stopped.
• Sufficient time was allocated for feeding. A patient-specific nutrition plan was prepared with small meals, 6 meals a day.
• The increased amount of food was measured with a 50 cc glass-tip feeding syringe. The amount of food given to the patients at 1 meal included 300 mL of viscous liquid food. The amount of food was increased when swallowing function started to improve at the end of a week.
• Patient responses during feeding, the amount of food consumed, and the duration of eating were recorded.
• After feeding, the patient's mouth was checked for any remaining food, oral care was provided, and the patient was kept seated in an upright position for approximately 30-45 minutes.

Daily excretion efficiency and dryness of the skin were monitored, and BMI was calculated through weight monitoring. The body weight of the patients was measured every day at the same time, in the same clothes, using a scale with a sensitivity of 100 g and a maximum measuring capacity of 150 kg. Body temperature was measured with a tympanic thermometer twice a day until discharge. Patients with a body temperature above 37.5°C were evaluated for lung infection. Chest radiography was evaluated for lung infection once in patients with normal body temperature and twice in patients with a body temperature above 37.5°C. The results of the chest radiography and laboratory tests (sodium, BUN, albumin) from the files were evaluated by the patient's physician and recorded in the third part of the structured information form.

Statistical Analysis
The data were analyzed using descriptive statistics. Written informed consent was obtained from the patients voluntarily after the aim and duration of the study were explained to them.
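Purely as an illustration, the numeric decision rules described in the methods above (the water-test score bands, the 37.5°C threshold for an infection workup, and BMI derived from daily weighing) can be restated as code. The sketch below is not part of the study protocol; all function names and the sample values are invented for this example.

```typescript
// Illustrative restatement of the study's monitoring rules; not from the paper.

type SwallowResult = "normal" | "dysphagia";

// Bedside water drinking test: 0-2 normal, 3-6 dysphagia (Türkmen, 2005).
function classifySwallow(score: number): SwallowResult {
  if (score < 0 || score > 6) throw new Error("score must be 0-6");
  return score <= 2 ? "normal" : "dysphagia";
}

// BMI from daily weight monitoring; weight in kg, height in meters.
function bmi(weightKg: number, heightM: number): number {
  return weightKg / (heightM * heightM);
}

// Body temperature above 37.5 °C triggered evaluation for lung infection.
function needsInfectionWorkup(tempCelsius: number): boolean {
  return tempCelsius > 37.5;
}

// Example: a hypothetical patient scoring 4 on the water test, weighing
// 58 kg at 1.65 m, with an evening temperature of 37.7 °C.
console.log(classifySwallow(4));          // "dysphagia"
console.log(bmi(58, 1.65).toFixed(1));    // "21.3"
console.log(needsInfectionWorkup(37.7));  // true
```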
Results
The ages of the patients included in the study ranged from 46 to 80 years, with a mean age of 66.28 ± 9.73 years; 56.3% were women, 62.5% were literate or primary school graduates, 59.4% were married, and 78.1% were living with their spouses and children. The SMMT scores of the patients varied between 24 and 28, with a mean score of 25.19 ± 1.35 (Table 1). When the distribution of disease characteristics was examined, 93.7% of the patients had been diagnosed with acute ischemic stroke, and 21.9% had previous lung problems. The patients' length of stay in the hospital varied between 5 and 21 days, with an average hospitalization of 9.75 ± 3.44 days (Table 1).

When the mean total score of the Barthel Index, applied to determine the physical dependence of the patients, was examined, the level of physical dependence, which was highest in the first follow-up, decreased in the second follow-up (p < .01) and decreased further in the third follow-up (p < .01); that is, the patients' independence increased. This difference was statistically highly significant (p < .01, Table 2). When dependence in the nutrition dimension of the Barthel Index was examined, 43.8% of the patients in the first and second follow-ups and 31.3% in the third follow-up were functionally dependent, a difference that was also statistically significant (p < .05, Table 2).

(Table 1 fragment: hemorrhagic stroke, n = 2, 6.3%; Standardized Mini Mental Test score, min-max 24-28, mean ± SD 25.19 ± 1.35, median 25; total length of stay in the hospital (d), min-max 5-21, mean ± SD 9.75 ± 3.44. Note. Min = minimum; Max = maximum; SD = standard deviation.)

When the mean score of the bedside water drinking assessment test, performed to evaluate the swallowing function of the patients, was examined, dysphagia, which was high in the first follow-up, decreased but persisted in the second follow-up, and swallowing function almost returned to normal in the third follow-up; this difference was statistically highly significant (p < .01) (Table 3).

Each patient included in the study received swallowing training at an average of 19.50 ± 6.89 meals, according to the needs of the patient/family/caregiver, until discharge, in addition to the swallowing training given for the first 2 meals. The family/caregiver participated in all the trainings along with the patient (Table 4). When the results of swallowing training and nutrition observed during meals over the patients' hospitalization were examined, the mean duration of eating decreased by 4.78 minutes in the second follow-up compared with the first, and the mean amount of food consumed increased by 56.87 mL in the second follow-up compared with the first. This difference was statistically significant (p < .01) (Table 4).

When the sodium, BUN, albumin, and BMI values monitored for dehydration and malnutrition were examined, sodium and BUN values gradually increased after the first follow-up, whereas albumin and BMI values decreased. These differences in the repeated measurements were also statistically significant (p < .05, Table 5).
During the hospitalization period of the patients, mean body temperature values did not differ significantly between morning and evening measurements (p < .01), body temperatures were within normal limits, and aspiration developed in 9.4% of the patients. When the patients who developed aspiration were examined, it was observed that these patients had previous lung problems. At the third follow-up examination during the post-discharge control, none of the patients experienced signs or symptoms of aspiration during meals (Table 6).

Discussion
In patients with stroke, it is universally recommended to evaluate and monitor the swallowing function within the first 24 hours after admission (McFarlane et al., 2014; Perry, 2001). For swallowing problems that do not resolve spontaneously in the first few weeks after stroke, the planning and implementation of individualized care, including early measures with a multidisciplinary approach, can prevent or minimize the development of problems (McFarlane et al., 2014; Perry, 2001; Selçuk, 2006). Palli et al. (2017) found that monitoring swallowing in patients with stroke reduced the rate of aspiration-associated pneumonia and the length of stay in the hospital. Accordingly, within the scope of this study, conducted to determine the effect of poststroke swallowing training and follow-up on swallowing function, nutritional status, and the development of problems associated with dysphagia, the individual characteristics of the patients were consistent with studies conducted on patients with stroke and with the literature (Benbir & İnce, 2013; Boyraz, 2015; Crary et al., 2014; Smithard et al., 2007; Türkmen, 2005).

The mean SMMT score of the patients included in the study was 25.19 ± 1.35 (Table 1). This result, which reflected one of the inclusion criteria for the study, indicated that the cognitive level of the patients was adequate for perceiving and implementing the swallowing training to be provided. The diagnosis was acute ischemic stroke in 90.6% of the patients (Table 1). This result is consistent with the literature and with study results emphasizing that ischemic strokes constitute 89% of all strokes and that hemorrhagic strokes are observed less frequently (Boyraz, 2015; Mourao et al., 2016). The patients' length of stay in the hospital varied between 5 and 21 days, with an average hospitalization of 9.75 ± 3.44 days (Table 1). In the study conducted by Demirci Şahin et al., the median hospital stay of patients with stroke varied between 8 and 11 days, which supports the present result.

The patients' levels of physical dependence decreased across repeated measurements, and the difference between the measurements was significant (p < .01) (Table 2). Deficiencies in motor, sensory, and cognitive functions in the poststroke period cause dependence in patients and prevent them from performing life activities. Although the improvement in physical functions depends on the intensity of the motor deficit at the onset of the stroke, it progresses most rapidly within the first 3 months, the recovery rate slows down after 6 months, and recovery may continue thereafter (Öztürk et al., 2014).
Considering the period between the patients' total length of stay in the hospital (5-21 days) and the first clinical control (approximately 30-45 days after discharge), the results of the study were consistent with the literature. In this study, it was found that the swallowing functions of the patients gradually improved across repeated measurements and that the difference between the measurements was significant (p < .01) (Table 3). In the literature, it is reported that dysphagia is observed in approximately 29%-81% of patients with stroke (Huang et al., 2014); 70% of patients recover by the end of the first week, but dysphagia persists for a long time (>6 months) in 11%-19% of them (Boyraz, 2015). Considering that the mean total hospital stay of the participating patients was 9.75 ± 3.44 days, the finding that the incidence of dysphagia in the second follow-up was 37.5%, and that this rate decreased in the third follow-up, is consistent with the literature.

As members of the healthcare team, nurses play a key role in diagnosing, evaluating, and managing dysphagia and in preventing the development of associated problems. A study by Tülek et al. (2018) on current practices in the nursing care of patients with stroke in 11 European countries (Sweden, Belgium, Denmark, England, Norway, Turkey, Malta, the Netherlands, Switzerland, Iceland, and Serbia), in which a total of 92 nurses participated, indicated that 75 nurses performed bedside swallowing assessment, alone or together with other healthcare team members, within the first 24 hours. Each patient included in the present study was accompanied at meals from admission until discharge and received an average of 19.50 ± 6.89 swallowing and nutrition training sessions according to need. The patient and family/caregiver participated in all trainings (Table 4).

There is limited scientific evidence demonstrating the positive effects of improving nutritional status and providing appropriate energy intake on clinical improvement in patients with stroke and resulting dysphagia (Arsava, 2018; Nii et al., 2016). Nevertheless, considering the negative effects of malnutrition on prognosis in patients with stroke, it becomes necessary to reach protein and calorie targets as early as possible (Arsava et al., 2018). Although the literature emphasizes the importance of guiding the patient's swallowing with verbal prompts to ensure safe swallowing and nutrition during meals in patients with dysphagia (Werner, 2005), no study evaluating this practice was found. In the present study, the mean duration of eating decreased by 4.78 minutes, and the mean amount of food consumed increased by 56.87 mL, in the second follow-up compared with the first; this difference was statistically significant (p < .01) (Table 4). Although a gradual improvement in swallowing function was observed across the 3 follow-ups, the functional dependence of the patients did not change during their hospital stay (Table 2), and the affected extremity functions therefore remained the same according to the nutrition dimension of the Barthel Index; this suggests that the swallowing training provided to the patients had an effect in shortening the duration of eating and increasing the amount of food consumed during meals. Krajczy et al.
(2019) applied comprehensive swallowing therapy, including training of the patient and family about safe food and fluid intake, in the early period in patients with stroke and dysphagia, and found a significant decrease in swallowing time after therapy in the experimental group. In the same study, it was emphasized that comprehensive therapy could reduce the complications that may develop owing to dysphagia.

In the present study, sodium and BUN values increased after the first follow-up, albumin and BMI decreased, and these differences in the measurement results were statistically significant (p < .05) (Table 5). Because there is no accepted standard for determining dehydration, fewer studies have examined the relationship between dysphagia and hydration than between dysphagia and nutritional status. Dehydration is a complication associated with poststroke dysphagia and is associated with 3-month mortality after a stroke. The BUN value is considered one of the best indicators in the evaluation of dehydration. There is evidence indicating that BUN/creatinine (Cr) levels may increase in patients with stroke and dysphagia, which may lead to further dehydration. In general, a BUN value of 6-20 mg/dL is a common reference range for hydration status (Crary et al., 2013). In the studies, a BMI below 18.5 kg/m² and a serum albumin level below 3.5 g/dL were considered to indicate malnutrition (Boyraz, 2015; Horasan, 2012).

In their study, Crary et al. (2013) evaluated the nutritional (prealbumin) and hydration status (BUN/Cr) of patients with stroke and dysphagia at hospital admission and 7 days after admission (at discharge), and found that patients with stroke and dysphagia had higher BUN/Cr levels at admission and at discharge than patients with stroke but without dysphagia. However, they could not find a significant relationship between dysphagia and malnutrition within the first 7 days after admission. Finestone et al. (1995) evaluated albumin and transferrin levels, total lymphocyte count, BMI, skinfold thickness, and arm muscle circumference in patients with stroke and dysphagia; they considered low values in at least 2 of the 6 parameters as malnutrition and reported a malnutrition rate of 65%. Davalos et al. (1996) evaluated the albumin level, skinfold thickness, and arm circumference in patients with stroke and dysphagia and found malnutrition in 51% of them. According to these results, patients with stroke and dysphagia are at risk for dehydration and malnutrition during their hospital stay, and it is recommended that they be followed up for adequate nutrition and hydration. Moreover, further studies are needed to determine other patient and/or healthcare factors that contribute to poor nutrition and hydration status in patients with stroke.

During the hospitalization period of the patients, mean body temperature values did not differ significantly between morning and evening measurements (p < .01), and body temperatures were within normal limits except for 1 patient (Table 6). The body temperature of this patient reached 37.7°C only once, in the evening measurement performed immediately after visiting hours.
Indeed, when the chest radiography of this patient was evaluated by his physician, no evidence of infection was observed. This result on the mean body temperature values of the patients was similar to the results of Karabacak et al. (2012), who evaluated the vital signs of patients before, during, and after visits and found a statistically significant difference in body temperature.

As a complication of dysphagia, 9.4% of the patients developed aspiration during their stay in the hospital (Table 6). When the patients who developed aspiration were examined, it was observed that they had previous lung problems. In the literature, it is reported that aspiration occurs mostly in patients with dysphagia and a low level of consciousness (Palli et al., 2017; Westergren, 2006) and that the risk of developing pneumonia in patients with stroke and aspiration is 7.6 times higher than in patients without aspiration (Smith & Connolly, 2003). It is also indicated in the literature that approximately 35% of deaths observed after an acute stroke are related to pneumonia developing after aspiration owing to dysphagia (Turner-Lawrence et al., 2009). Early detection of dysphagia in patients with stroke is, therefore, important for the prevention of complications, especially pneumonia. The incidence of pneumonia has been found to be higher in patients with stroke who were not screened for swallowing than in patients who were screened (Arsava et al., 2018). In particular, the finding that none of the patients experienced signs or symptoms of aspiration at the third follow-up suggested that, as a result of the swallowing training and follow-up applied after the evaluation of swallowing function within the first 24-48 hours, the patients continued to swallow and feed safely at home; thus, the training applied may be effective in preventing the development of problems associated with dysphagia.

When the results obtained from the study were evaluated, it was observed that swallowing function could be strengthened in patients whose swallowing function was evaluated and who were provided with swallowing training, that the time needed to finish a portion at meals was reduced, that the amount of food consumed during meals increased, and that the development of complications owing to dysphagia could be prevented. The results of the study supported all hypotheses.

Study Limitations
This study had a few limitations: it was conducted in a single institution and with a single group because of the low clinical bed capacity (15 beds); no control group was created; and the sample size was not determined by power analysis. In addition, the results can only be generalized to patients who meet the sample selection criteria, and structured forms prepared according to the literature were used owing to the lack of a valid and reliable scale for evaluating dysphagia in Turkey.

Conclusion and Recommendations
It was determined that swallowing training decreased the duration of eating and increased the amount of food consumed in patients with stroke and resulting dysphagia. The evaluation of swallowing function and the implementation of training supported by effective materials (a brochure and the like) for the patient or caregiver could be effective in preventing the development of problems and in supporting the strengthening of swallowing function.
This study recommends that measurement tools providing more concrete results in the evaluation and follow-up of swallowing function be developed and tested for reliability and validity, that swallowing training guides for ensuring safe swallowing and nutrition be included in the care planning of patients with stroke and dysphagia, and that the study be repeated as a randomized controlled experimental study.

Ethics Committee Approval: Ethics committee approval was received for this study from the İstanbul University-Cerrahpaşa Medical Faculty Clinical Research Ethics Committee (date and number: 10.11.2009/B-015).
Informed Consent: Written informed consent was obtained from the patients who participated in this study.
Conflict of Interest: The authors have no conflicts of interest to declare.
Financial Disclosure: The authors declared that this study received no financial support.
2021-07-16T06:16:32.601Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "20a7a899364e6233763efc688dca9c31e7c1dda9", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5152/fnjn.2021.19007", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "96759ef722ea073e73057f23deb11c24ca1f7d7a", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
14946362
pes2o/s2orc
v3-fos-license
Current Usage of Component Based Principles for Developing Web Applications with Frameworks: a Literature Review

Component based software development has become a very popular paradigm in many software engineering branches. In the early phase of Web 2.0's appearance, it was also popular for web application development. Judging from the analyzed papers, between that period and today the use of component based techniques for web application development slowed down somewhat; however, recent developments indicate a comeback, most apparent in the W3C's web component working group. In this article we investigate the current state of web application development with the component approach. Above all, we are interested in the ways components are used, which web development frameworks are being used, and in which domains component based web development is most popular and successful. How many current web development frameworks explicitly refer to the component-based approach? To answer this question, we performed a literature review.

INTRODUCTION
Creating complex software architectures by (re)using smaller, more manageable software elements is the main goal of component based development (CBD). In many cases it has proven to simplify software design and to have a positive impact on the extra-functional properties of software products, e.g., better maintainability, scalability, reliability, usability, etc. Since research on World Wide Web related development is constantly growing, we find it interesting to verify in which way these two research areas are related. To satisfy our curiosity, we conducted a literature review in which we analyzed how web researchers and practitioners apply existing component based development techniques to create the architecture of their web applications. Since most web applications are currently developed using different web development frameworks, we are also interested in how many frameworks are based on CBD. Therefore, the main research question is: "How many of the current web application development frameworks explicitly refer to application of the component-based approach?" Based on this question we derive several more specific questions:
• Q1: In which way is CBD used for web application development?
• Q2: What is the relation between CBD and web application development?
• Q3: Which component models are used for web application development?
• Q4: In which web application development domains is CBD used?
The rest of the article is organized as follows: Section 2 describes the review protocol and the related methods used to perform the literature review. Section 3 provides an overview of the paper selection process. In Section 4 we present a detailed analysis of the results and discuss them. Finally, Section 5 concludes the article.

REVIEW PROTOCOL
The review protocol of this study is based on the work of Breivold et al. [1] and suggestions by Kitchenham [2]. It consists of the following steps (Figure 1): a) motivation statement, b) research goal statement, c) defining the search terms, d) providing information on restrictions and selection criteria, e) database selection, f) paper search process, g) paper quality assessment, h) paper data extraction, i) data synthesis. Since steps a) and b) are given in the previous section, here we proceed with steps c)-e), which are given in Table 1.
SEARCH TERMS, SELECTION CRITERIA AND DATABASE SELECTION
For more effective selection of relevant papers we used Mendeley, a reference management tool [3]. Every database listed in Table 1 provides a way to extract and import the search results into such tools. Since ACM is an exception, we imported those search results manually. Duplicates were automatically excluded or merged.
To additionally ensure that the quality of the papers is satisfactory, some additional criteria apply [1]: a) the paper must provide evidence for claims and theoretical reasoning in data analysis, b) the paper must describe the context in which the study was conducted, c) the research method must be described or easily inferred, and finally d) the goals of the study must be described or easily inferred.

DATA EXTRACTION
Once the final list of papers was obtained, a content analysis and review was performed considering [1]: (a) general paper information (title, authors, publication year, source, publication type, citation information, research methodology, and data analysis type) and (b) content related information (research focus, area of CBD application, applied CBD models, software application domain (what kind of software, who uses it, etc.), programming languages used, problems with applying CBD, area of future work). The results of reading the papers are synthesized in the following sections.

SEARCH OVERVIEW
DOMAIN SPECIFIC REVIEW CRITERIA
Based on the research area, we decided to exclude papers which refer to service oriented architecture (SOA), papers which refer to the semantic web and ontology, and finally papers which refer to service oriented computing (SOC). The search terms resulted in 166 papers related to these topics. As stated by Bano and Ikram, SOA is a shift of paradigm in software development, as can be seen in the application of web services instead of commercial off-the-shelf software [4]. Having this in mind, we decided to exclude SOA related papers as a whole, since a separate literature review could be performed with SOA as the main topic.
Semantic web and ontology related papers were also removed because these principles are used for all kinds of applications, not necessarily web applications. Although some of these papers corresponded to the initial criteria, we decided to leave out the ones which relate to the semantic web, as this research area is growing and very specific.
Finally, papers which refer to service oriented development, service oriented computing, web-service based applications and the development of web services were left out, as they are mostly related to development techniques and SOA. We conclude that, similarly to SOA, this research area would deserve its own separate literature review.
Since some papers were excluded in this phase because they are out of scope, they will not be addressed further. However, we would like to refer interested readers to the sources: [5-8].

DATABASE QUERIES AND SEARCH RESULTS OVERVIEW
Table 2 shows the overview of the search results. By applying the exclusion criteria, from the initial result of 1132 papers only 29 were selected for full reading.
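The selection step just described (merging exports from several databases, discarding duplicates, then applying the exclusion criteria) is mechanical enough to sketch in code. The fragment below is a hypothetical illustration of such a screening pipeline, not the authors' actual tooling; in the study, Mendeley performed the deduplication automatically, and the names, patterns, and sample records here are all invented.

```typescript
// Hypothetical screening sketch; the review used Mendeley for deduplication.

interface Paper {
  title: string;
  abstract: string;
}

// Normalize titles so trivially different database exports compare equal.
const key = (p: Paper): string =>
  p.title.toLowerCase().replace(/\s+/g, " ").trim();

function deduplicate(papers: Paper[]): Paper[] {
  const seen = new Map<string, Paper>();
  for (const p of papers) {
    if (!seen.has(key(p))) seen.set(key(p), p);
  }
  return [...seen.values()];
}

// Exclusion patterns mirroring the review's out-of-scope topics.
const outOfScope = [/service[ -]oriented/i, /semantic web/i, /ontolog/i];

const inScope = (p: Paper): boolean =>
  !outOfScope.some((re) => re.test(p.title + " " + p.abstract));

// Invented sample records standing in for merged database exports.
const merged: Paper[] = [
  { title: "Component based web engineering", abstract: "..." },
  { title: "Component  Based Web Engineering", abstract: "..." }, // duplicate
  { title: "A service-oriented architecture survey", abstract: "..." },
];

console.log(deduplicate(merged).filter(inScope).length); // 1
```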
At this point we also verified the validity of the research queries. We performed a search without the "CBD" term, as presented in Table 2. One can notice that there is no significant difference when this term is left out and "component-based" is used. Finally, by using the terms "component" or "components" alone, 7696 results were found in Scopus only. One can easily conclude that these terms cannot be used by themselves; considering only the keywords, the following query was used:

TITLE-ABS-KEY((web AND development) OR (web AND architecture)) AND TITLE-ABS-KEY("component based" OR "component-based" OR "component" OR "components")

(Table 2 fragment: after the second complete reading of all papers, 27; number of papers which are accessible: the full text of 3 papers was not found in any database.)

The exact search queries with all the restrictions are available in Appendix A of this paper. Appendix B contains the list of the final 29 papers, labeled [A1]-[A29].
From the final 29 papers selected for full reading, 3 could not be accessed: papers [A1]-[A3]. Because they were unavailable, they are not present in all parts of this review. Also, after all the papers were fully read, two additional ones were excluded, which left us with the final 24 papers [A4]-[A27]. The two excluded papers are out of scope: paper [A28] is related to quality assessment and not web development, while paper [A29] is business oriented but merely mentions frameworks.

RESULT ANALYSIS
DATABASES AND YEAR DISPERSION
As can be seen from the right data bar in Figure 2, considering the number of papers found, Scopus is the most inclusive database with 22 of the final 24 papers (7 of which were found only in Scopus), while Science Direct was the most exclusive with only 1 paper. Of the two remaining papers not found in Scopus, the first was found only in IEEE [A23], while the second [A20] was found in ACM and Web of Science. The number of papers included in each database which were selected for final reading (the original 27 papers, including the 3 inaccessible ones) is shown on the left data bar. Here, Scopus was also the most inclusive database with 25 of 27 papers. The two missing papers are the same as previously, and both of them are found in IEEE Xplore. Therefore, combining Scopus and IEEE Xplore gives most of the relevant papers in this research area. The database search was performed in mid-January of 2014 and was set to include papers from the previous 10 years, i.e., 2004-2014. Figure 3 shows the dispersion of papers across years. As can be seen, most of the papers, around 50% of them, were written between 2005 and 2006. This should be a consequence of Web 2.0, which was popularized by Tim O'Reilly in late 2004 at the O'Reilly Media Web 2.0 conference. At that time there was an increase in web application development, and since web applications started growing more complex, it was necessary to ease the development process and perform research in this direction. The solution was found in web development frameworks and component based techniques (as one of the options). Earlier papers focus on theoretical aspects, while later ones became more practical, providing different benchmarks and case studies.
PUBLICATION TYPES
Figure 4 shows the publication types. Two-thirds of the selected papers are conference proceedings, while the remaining one-third were published in journals. The lists of conference proceedings and journals are presented in Table 3 and Table 4. It is interesting to notice that every selected journal paper comes from a different journal; therefore no conclusion can be drawn on which journals to follow for this particular topic. However, the conferences are a bit more conclusive. Table 3 shows that two papers come from the 13th International WWW Conference Proceedings 2005, while 6 papers are published in Springer's Lecture Notes. This indicates that the Springer database should also be included in future investigations of this area.

RESEARCH METHODS AND TYPES
Figure 5 presents information about the most common research methods used. In 5 papers the authors explicitly state that they are performing a case study, while in the other papers the applied research method was inferred from the context. It turns out that a case study is used by 70% of the papers, followed by theoretical reasoning, experiment, and action research.
Considering the type of study presented in the papers, Figure 6 shows that qualitative reasoning is the most popular one: 20 papers use qualitative reasoning, while of the remaining 4, 2 use a quantitative study and 2 use a mixed approach, i.e., qualitative and quantitative. Table 5 provides an insight into the relation between the type of study and the research method used. As can be seen in Table 5, most papers use a qualitative study type performed on a case study.

CITATION
The citation count of the selected papers (including the ones not accessible) is shown in Table 6. Citation information was taken from Google Scholar in late April of 2014. The most cited paper is [A26], which has 56 citations. Several of the following papers have relatively good citation records, but on average there are 8 citations per paper. If we exclude the papers published in the last four years (because they are fairly recent), the average number of citations per paper is 11. Therefore, we can conclude that research in component based web applications needs further investigation, given the relatively low publication and citation counts. However, due to the exclusion of papers related to SOA, which is becoming a hugely popular research area, it is likely that this affected the number of publications related to CBD and web development frameworks. Another limiting factor is the strong search criterion which states that the framework must be explicitly mentioned.

USED CBD MODELS AND PROGRAMMING LANGUAGES
In Table 7 and Figure 8 one can notice that 11 out of 24 papers have not defined a component model. The most used component model is some variation of JavaBeans, consequently making Java the most popular development language (usually J2EE). Both JavaBeans and EJB are used in 4 papers, and in 3 of them JavaBeans and EJB are used simultaneously. Two papers use the well-known COM and CORBA component models, and the remaining ones use custom component models.
Considering programming languages (Figure 9), Java is the most widely used, with 12 out of 21 papers using it: 10 of these use Java explicitly, and in 2 papers Java is an option (papers [A21] and [A12]). Paper [A15] also uses Java, but only to build XML which is then used to develop web applications; it is therefore not counted, since this XML can be generated in many programming languages.
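To make the component-model terminology concrete: whatever the concrete model (JavaBeans, EJB, COM, CORBA, or a custom one), a component is essentially a unit with an explicit contract that a container can configure and compose without knowing its implementation. The sketch below is a deliberately generic, hypothetical illustration of such a contract; the interface and class names are invented, and it does not reproduce any of the models from the reviewed papers.

```typescript
// Generic sketch of a component contract; illustrative only, not any
// specific model (JavaBeans, EJB, COM, CORBA) from the reviewed papers.

interface Component {
  readonly name: string;
  configure(params: Map<string, string>): void; // container injects configuration
  start(): void;                                // lifecycle hooks invoked by the container
  stop(): void;
}

// A container composes components through their contract alone,
// never through their concrete classes.
class Container {
  private components: Component[] = [];

  register(c: Component): void {
    this.components.push(c);
  }

  startAll(): void {
    for (const c of this.components) c.start();
  }
}
```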
CBD BEST PRACTICES
Most papers do not explicitly report any major problems with the component approach; rather, they report suggestions for future researchers and practitioners concerned with CBD. All the suggestions are aggregated and presented in Table 8:
• Components should capture domain knowledge of web application development and hide complexities from the end user. [A11]
• Components should not capture application domain specific knowledge; rather, those specific application needs should be abstracted, and generic components (tools and engines) should be created that can be used across many application domains. [A11]
• Components should be easy to use by end users, yet they need to be complete so that they aid the full capture of all the necessary component parts of the application, such as front end pages, back end processing logic, and database information. [A16]
• It is not an easy task to develop an in-house component framework or to integrate available preexisting COTS in enterprise applications; it actually needs far more effort and investment than foreseen in the beginning (approximately 50% more work than expected). [A16]
• It is not a one-time effort but a continuous process, which needs considerable investment in time and resources. [A16]
• The percentage of reusability changes from application to application and often requires component modification and reconfiguration. [A16]
• The major benefit of in-house component framework development is surprisingly not the project cost and time reduction based on business logic and business function reusability, but the company knowledge sharing and the creation of business function components. [A14]
• Component composition: each component is designed to achieve some special task; several components can be composed together in a dependent series to achieve a larger task. [A14]
• A problem in distributed systems is distributed component management. [A18]
• A further problem is the redesign of components to be more generic, with a simple and fast integration procedure for arbitrary web applications.

OVERVIEW OF FRAMEWORK USAGE
The authors of all the selected papers use some type of web development framework, which can be divided into two groups: general and specific. General frameworks are used to develop any kind of web application, i.e., they can be used in many domains, while specific frameworks are specialized (or limited) to only a certain type of web application, i.e., a certain domain. As can be seen in Figure 10, authors tend to use general frameworks; however, the number of specific ones is fairly significant. Table 9 and Table 10 present a detailed overview of the frameworks used in the selected papers.
In Figure 11 the time dispersion of framework types is given. As can be seen, in the last four years general frameworks have been preferred. Although there is one exception, one can notice that the research domain seems to be stabilizing: initially there were many specific frameworks, but due to the growing complexity of web applications, researchers now seem to use existing and already proven frameworks.

Table 9/10 fragment: framework descriptions by paper.
[A4] The new openMVC model, a framework which enables building web applications whose style information, layout, and validation constraints can be updated without coding; an eCommerce shopping cart application was built as a prototype.
[A6] A framework for developing and testing web applications; the model used (CBTOWADM) reduces the difficulty of web application testing. The authors focus on functional testing based on a UML model, and the model can also be used for developing web applications.
[A11] A framework which end users use to develop web applications from components developed by web developers; it targets SMEs' (small and medium enterprises') web applications.
[A12] Plux, a plug-in framework for integrating components into web applications.
[A14] A web application framework for end-user development, allowing end users to quickly implement simple sites with backend logic, such as using a database; the framework is meant for SME web applications.
[A15] XVM, an XML virtual machine used as a framework for developing and deploying XML-based applications (it is not a programming language); an XML application container is built on top of XVM.
[A16] A component framework into which different components (containing specific business logic) are plugged and then used to build web applications; finance and crediting web applications have been built so far, and some components are domain specific, like portfolio management.
[A17] A framework for the loose integration of COTS tools for Real Time Distributed Control Systems (RTDCS). The idea is the integration of domain specific COTS tools, in the sense of automatic interchange of formally expressed information through standard and free software middleware; a prototype was built which integrates several COTS tools aimed at developing RTDCS.
[A18] A framework that can easily be configured to work with and integrate into an arbitrary application; by configuring the framework, all the components created with it are configured and made accessible to the host application.
[A19] WebComfort, a dynamic component-based CMS platform which allows users to manage and operate complex web applications in a dynamic and integrated fashion.
[A21] A framework for developing component based open hypermedia systems (CB-OHS).
[A23] A framework that makes the modeling, implementation, and maintenance of wireless mobile online applications intuitive and easy, especially for students and beginners; it decomposes a complex online application into modules, each module being a plug-and-play unit, and the components in the libraries can be directly called and used.
[A25] A framework for developing web applications using COTS components.
[A27] A framework for developing any kind of web application used in typical industrial enterprises; it gives a repository of components that can be used and customized to build new web applications, and is especially suitable for small and mid-sized enterprises with low IT expertise.

Table 11 fragment: usage of CBD and connection to architecture, by paper.
[A11] Usage: two types of components: "(a) Tools that allow End Users to create and assemble applications and (b) Engines that could be used to run these applications." Architecture: not particularly described; the framework is built so that end users can use existing components to build new applications, or developers can create new components and include them in the framework so that they can again be used by end users.
[A16] Usage: components contain business logic or presentation logic, with connectors to HTTP, JDBC, JNDI, CORBA, and RMI. Architecture: multi-tier (client, web (presentation), business, and database); components are used at the web and business tiers.
[A17] Usage: components are generated by different tools and then used to build new applications; Enterprise JavaBeans are a group of classes responsible for achieving the tasks implied in the business logic (the implementation of the services offered over RTDCS data). Architecture: the Model Collaboration Engine (MCE) architecture is based on the View Model approach. "Each of the Domain-Specific Models shows only the information about the system relevant to a specialist (or tool)." Four models or views (more can be added) are identified as domain-specific: Control System (architecture independent system functionality), Distribution System (network topology and services), Real Time System (software architecture and temporal issues), and Software Engineering (code and documentation generation).
[A21] Usage: "Component/Structure server: Reifies the domain specific abstractions, providing the domain specific services to clients. They are semi-autonomous components, since they rely on the infrastructure services for common functionality. They establish a well-defined interface for communication with client applications." Architecture: a layered architecture is used in CB-OHS, with the tiers client, structure server (components), and infrastructure.
[A9] Architecture: a layered architecture is used (database, core, third layer, application layer). "The core package from iKDD models the component abstraction, implements a graph-based processing module, and covers basic components e.g. XML import / export."
[A13] Usage: components offer specialized functionalities to client modules that require server-based functionality (e.g., data analysis or computation of visualizations that require large data sets). Architecture: client-server; components are used on the server side.
[A22] Usage: "Using component-based programming, we developed a highly maintainable system, which contains three components packages: Monitoring Controls Package based on ActiveX, Analysis Controls Package based on ActiveX, and Diagnosis Algorithms Package based on COM." Architecture: "A four-tier model based on the Microsoft's tier concept is adopted in the WRMFDS, which consists of the Presentation Services Tier, the Application Services Tier, the Data Access Services Tier and the Database Services Tier."
[A23] Usage: a layered structure of components is used; the top layer consists of components which are used to build final applications (e.g., a welcome component, login components, etc.). Architecture: a three-tiered architecture (MVC pattern) with the tiers server, client, and databases; components are used on the server side. "A request-response pair contains three parts (Model, View, controller) and forms a unit. Each unit is implemented by reusing component libraries in the layered component structure and each unit can be plug-and-play into the system."
[A24] Usage: "There are three major roles in the 3CoFramework: component implements or wraps the domain-specific computational logic or data access; a connector implements the component interaction; a coordinator implements the distributed components and connectors management." Architecture: a layered software architecture is used (Data, Information, Knowledge, Presentation), and components are used in each of these layers.
[A19] Usage: components are different platform features. Architecture: the paper states that a component-based architecture is used, consisting of (1) modules, (2) toolkits, (3) extenders, (4) data repository access, (5) module actions, and (6) the WebComfort API, with as few connections between components as possible. Paramount to some of these aspects was the usage of the Provider pattern, which is a mix of the Abstract Factory, Strategy, and Singleton patterns.
[A20] Usage: components are content created from three layers. "A Web application can usually be described in three layers: presentation layer, business logic layer, and database layer. Each layer can be partitioned and distributed among the CDN's replica servers; in such replication approaches, content elements drawn from the three layers are structured into components that are replicated. The components are then dynamically assembled and delivered from the replica servers when they are requested." Architecture: a hierarchical component-based content architecture is used, where components are at the lowest layer; the hierarchy, top down, is application, site view-web page, components.
[A5] Usage: a component is every tier in the JEE multi-tiered architecture, wrapped with the JADE framework. "In a JEE multi-tiered architecture, the Web server is classically divided in several tiers: the HTTP daemon (Apache), the servlet engine (Tomcat), the EJB business server (JOnAS), and the database tier (MySQL for e.g.)." Architecture: a three-tiered architecture (client, application server, DB server) with the MVC pattern; components are used in the application server.
[A12] Usage: a component can be anything, and every user can add his own components. Components can be server-side (installed and executed on the server), client-side (installed and executed on the client, using local resources), or sandbox components (installed on the server, downloaded to the client on demand, and executed in a sandbox on the client). Architecture: a component based plug-in architecture; different components are combined by end users and web applications are created.
[A18] Usage: the framework itself is a component that can be integrated into other web applications, but it also consists of components, which in turn consist of components ("nested components"). Architecture: MVC; components are used on all MVC layers.
[A4] Usage: components are used in all parts of the architecture for implementing all kinds of functionalities, such as a styling component, a validation constraint component, etc. Architecture: a five-layer architecture (client, presentation logic, business logic, data abstraction, database); each layer has components for specific functionalities.
[A25] Usage: there are 4 kinds of components (domain, common business, base business), each with their own subcomponents, which are used in the new web applications built by end users.
Architecture: each component type is a specific module in the overall architecture. "We have identified web components and layered on ABCD architecture" [A25].
[A26] Usage: components are used to implement an algorithm for creative web searching. "More advanced students could also develop their own components to test out theories and improve their understanding of the base concepts of not just search engines but the various fields that play a role in information retrieval systems." Architecture: "... component-based software architecture has been proposed which will allow for a range of different style systems to be developed with little overhead, thereby improving the chance of creative outcomes occurring in a different way."
[A27] Usage: components are used for building new web applications; they are used by end users when building their applications, and users can customize the components with parameters depending on their needs.

In all the above cases the architectural decision is made solely by the end user, and all papers report only on developing prototypes (whether of a framework or of a web application). While most of the authors use the component approach on the server side to implement various services rather than on the client side, 11 papers report using an n-tier architecture, thus making it the most common.

SUGGESTIONS FOR FUTURE RESEARCH
For those interested in the component approach and web development frameworks, the most relevant scientific databases are Scopus, Springer and IEEE, which together cover most of the related publications. Currently, the most relevant publications (2/3 of them being conference proceedings) were published between 2005 and 2006, most likely due to the popularization of Web 2.0.
According to the selected papers, here are some interesting research directions for the future:
• Graphical tools for creating application models which are then exported to XML schemas and automatically translated into component templates for creating web applications [A11].
• Enhancement of security, creating security models, and developing a complete XML virtual machine (XVM, XMLVM) development process model (analysis, deployment evaluation, performance evaluation, etc.) [A15], [A12].
• Refining components to reduce end-user effort in developing web applications, minimize faults, and handle exceptions [A11].
• Research into the component approach and mixed-media web applications [A8].
• Implementation of unified conceptual models and component libraries [A22].
• A model driven development approach for component web applications [A19].
• Research into verification models and tools for building component based web applications [A6].
It is apparent that the component based approach is becoming a serious architectural direction, and there are very recent working groups focused solely on component based development for the web, including the one from the W3C [9,10].

CONCLUSION
In this paper we presented a literature review on the relation of component based development to web application frameworks. The original pool of related publications contained 1132 papers, which were, by a strict set of rules, filtered down to 27 papers. Since three of them were inaccessible, 24 made it through to the full analysis, which resulted in answers to the research questions Q1-Q4.
Q1: In which way is CBD used for web application development? There are three main approaches: a) the component approach is used for creating component based frameworks which are then used for creating web applications (not necessarily component oriented), b) the component approach is used for building components which are the building blocks of web applications, and c) a mix of the two previous approaches. In approaches a) and c) the end user decides whether to use the component approach for web application development, while in b) the component approach is imposed on the end users.
Q2: What is the relation between CBD and web application development? The component approach is used mostly for server side applications. Using it on the client side is less common, but there are such cases, and end users are not constrained from using it. The most widely used architecture is n-tiered, with components used inside the different layers. Future researchers and practitioners are strongly advised to plan the component approach right from the start of the application design process. Although it requires more time, the true benefits (separation of concerns, better maintainability, scalability, replaceability, single point of edit, etc.) become apparent later.
Q3: Which component models are used for web application development? Based on the reviewed papers it is obvious that EJB and JavaBeans are the preferred component models, hence Java is also the most popular programming language for this purpose. It should be noted, though, that there are also many custom models. Since Java/J2EE is a leader in this field, future researchers and practitioners have a choice to make: whether to expand the existing Java based component models or create new ones, which requires more time but offers new possibilities independent of a single technology.
Q4: In which web application development domains is CBD used? It is hard to recognize distinct domains; however, there are two types of web application development frameworks presented in the selected publications: a) general, used for any kind of web application, and b) specific, for developing special purpose web applications (e.g., eLearning, 3D graphics, monitoring, etc.). The majority of the selected papers (14) describe a general framework.
Finally, the answer to the overall research question (i.e., how many of the current web application development frameworks explicitly refer to application of the component-based approach) is hardly intuitive. It is apparent that most of the selected papers explicitly refer to the component based approach; however, this is the result of the selection process. While drawing conclusions, one should keep in mind that the answers arise from a small number of processed papers (after the filtering process). Nevertheless, the answers are interesting and give an indication of what is happening in the presented field. A review including more papers should be performed to obtain a broader overview of the field.
Many publications related to SOA were dismissed, and it would be very useful to perform an additional literature review with the same focus but centered on SOA.
Considering the number of papers published, it is apparent that in the past there was a minor setback in this research area; however, there is a growing trend. Although the component based approach is more popular in other software engineering domains, with the appearance of SOA and Web 2.0 the number of component models is growing, and we envision it will keep growing, especially with W3C involvement. Therefore we reckon that any future web framework researchers and practitioners should become acquainted with component based development techniques, as they will become more popular in the near future.

APPENDIX A. EXTRACT OF SEARCH QUERIES
Here are the search queries for each database:

Figure 7. Citation level of papers by years. In Figure 7, which presents the number of cited papers by year, it can be noticed that most of the citations, more than 50%, are from between 2005 and 2006. Surprisingly, however, the most cited paper is from 2007, and a more recent one, from 2014, is cited 9 times. It will be interesting to see whether the rising trend seen from 2011 up to 2014 will continue.
Figure 11. Type of frameworks per year.
Table 3. List of journals.
Table 4. List of conference proceedings (fragment: 13, 2008 34th Euromicro Conference Software Engineering and Advanced Applications, IEEE; 14, Lecture Notes in Computer Science, Springer; 15, 5th IEEE/ACIS International Conference on Computer and Information Science and 1st IEEE/ACIS International Workshop on Component-Based Software Engineering, Software Architecture and Reuse (ICIS-COMSAR'06), IEEE; conference proceedings of papers that were not read in full: 16, Proceedings of the IASTED International Conference on Internet and Multimedia Systems and Applications, IASTED; 17, ICEIS 2005 - Proceedings of the 7th International Conference on Enterprise Information Systems, ICEIS).
Table 5. Type of study and research method by paper (fragment: [A10], qualitative, theoretical reasoning, validation: AMACONT project used for developing component-based adaptive web presentations, theoretical descriptions; [A13], qualitative, case study, validation: GOWARN concept from 2003 extended to new locations (AIS (atlas information system) for Campi Flegrei, a volcanic area near Naples, Italy)).
Table 6. Papers sorted by citation level (fragment: "Nested web application components framework: A comparison to competing software component models").
Table 8. Suggestions for future component-based web framework development.
Table 11. Usage of CBD and connection to architecture.
Table 11 shows for which purposes the authors used component-based development and how it affected the software architecture of their web applications. Three ways of using the component approach can be distinguished:
- Components are used for creating web development frameworks: in this approach the authors create component-based frameworks, which are then used to create web applications that can, but need not, be component based.
- Components are used as application building blocks: in this approach components are used to create component-oriented web applications without an underlying framework (a minimal sketch of this style follows below).
- Mixed approach: both the framework and the web applications developed with it are component oriented.
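To make the second style concrete, here is a minimal, hypothetical Python sketch of components used as application building blocks. The names (Component, Header, LinkList, Page) are our own illustration and are not taken from any of the surveyed frameworks or component models.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """A self-contained building block with a single point of edit."""
    @abstractmethod
    def render(self) -> str:
        ...

class Header(Component):
    def __init__(self, title: str):
        self.title = title

    def render(self) -> str:
        return f"<header><h1>{self.title}</h1></header>"

class LinkList(Component):
    def __init__(self, items: list):
        self.items = items

    def render(self) -> str:
        lis = "".join(f"<li>{item}</li>" for item in self.items)
        return f"<ul>{lis}</ul>"

class Page(Component):
    """A page is itself a component composed of child components."""
    def __init__(self, *children: Component):
        self.children = children

    def render(self) -> str:
        body = "".join(child.render() for child in self.children)
        return f"<body>{body}</body>"

if __name__ == "__main__":
    page = Page(Header("Component-based site"), LinkList(["Home", "About"]))
    print(page.render())
```

Because each component owns its own markup and pages are themselves components, replacing or editing one building block does not touch the rest of the application, which is exactly the maintainability and replaceability benefit noted under Q2.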
Gain-of-function mutation in the voltage-gated potassium channel gene KCNQ1 and glucose-stimulated hypoinsulinemia - case report

Background: The voltage-gated potassium channel Kv7.1, encoded by KCNQ1, is located in both cardiac myocytes and insulin-producing beta cells. Loss-of-function mutations in KCNQ1 cause long QT syndrome along with glucose-stimulated hyperinsulinemia, increased C-peptide and postprandial hypoglycemia. The KCNE1 protein modulates Kv7.1 in cardiac myocytes, but is not expressed in beta cells. Gain-of-function mutations in KCNQ1 and KCNE1 shorten the action potential duration in cardiac myocytes, but their effect on beta cells and insulin secretion is unknown. Case presentation: Two patients with atrial fibrillation due to gain-of-function mutations in KCNQ1 (R670K) and KCNE1 (G60D) were BMI-, age-, and sex-matched to six control participants and underwent a 6-h oral glucose tolerance test (OGTT). During the OGTT, the KCNQ1 gain-of-function mutation carrier had an 86% lower C-peptide response after glucose stimulation compared with matched control participants (iAUC 360 min = 34 pmol/L·min vs. 246 ± 71 pmol/L·min). The KCNE1 gain-of-function mutation carrier had normal C-peptide levels. Conclusions: This case report presents a patient with the gain-of-function mutation KCNQ1 R670K and low glucose-stimulated C-peptide secretion, additionally suggesting involvement of the voltage-gated potassium channel KCNQ1 in glucose-stimulated insulin regulation.

Background
Impaired function of the voltage-gated potassium channels Kv7.1 (encoded by KCNQ1) and Kv11.1 (encoded by KCNH2), caused by inheritable mutations or drugs, leads to long QT syndrome (LQTS), characterised by malignant cardiac arrhythmias [1]. Moreover, inhibition of Kv7.1 and Kv11.1 increases glucose-stimulated insulin and C-peptide secretion from the pancreatic beta cells and increases glucagon-like peptide (GLP)-1 in mice [2,3], and we have previously shown that patients with LQT1 (due to loss-of-function mutations in KCNQ1) and LQT2 (due to loss-of-function mutations in KCNH2) have glucose-stimulated hyperinsulinemia and postprandial hypoglycaemia [2,4]. KCNQ1 is expressed in human beta cells [5]; blockage of the channel increases glucose-stimulated insulin secretion [3], while its overexpression impairs glucose-stimulated insulin secretion [6]. KCNE1 encodes a human potassium channel accessory (β) subunit and modulates Kv7.1 in cardiomyocytes, but does not seem to be expressed in beta cells [7]. Gain-of-function mutations in either the KCNQ1 or the KCNE1 gene shorten the action potential duration and effective refractory period in cardiomyocytes, increasing the risk of atrial fibrillation (AF) [8,9]. In this case study, we investigated glucose-stimulated hormone secretion in two patients with AF due to the confirmed gain-of-function mutations KCNQ1 R670K and KCNE1 G60D, respectively. Expression in Xenopus laevis oocytes of KCNQ1 R670K, or of Kv7.1 co-expressed with KCNE1 G60D, resulted in larger current amplitudes compared with wild type, confirming a gain-of-function phenotype of the mutations [8,9]. We hypothesized that patients with a KCNQ1 gain-of-function mutation would have decreased glucose-induced insulin and C-peptide secretion, whereas patients with gain-of-function mutations in KCNE1 would be expected to have normal insulin and C-peptide secretion upon glucose stimulation.
Case presentation
We present two patients with AF who are confirmed heterozygous gain-of-function mutation carriers, recruited from the outpatient clinic at the Department of Cardiology, Rigshospitalet, Denmark. One patient had persistent AF and carried the KCNQ1 R670K mutation, while the other patient had paroxysmal AF and carried the KCNE1 G60D mutation. Neither patient had echocardiographic abnormalities. For comparison of normal glucose metabolism and ECG profiles, six control participants, BMI-, age- and sex-matched with the AF patients, were recruited from the Danish population studies Inter99, Health 2006, Health 2010 and DanFund. The methods used for the investigations and sample analyses have previously been described in detail in [2]; a condensed version follows. The patients and control participants each underwent a 6-h oral glucose tolerance test (OGTT) after overnight fasting. The patients did not take medication on the morning before the examination. In a resting state, baseline ECG and blood samples were taken 15, 10 and 0 min before ingestion of a standard 75 g glucose solution. During the following 6 h, ECG and blood samples were taken every 15 min for the first hour and then every 30 min for the remaining 5 h. For continuous glucose monitoring (CGM), the participants agreed to wear an iPro2 CGM (Medtronic, Watford, U.K.) for between 3 and 7 days. During this period each meal was noted with its time and composition.

Findings: There were no differences in HbA1c, fasting hemoglobin, fasting total cholesterol or fasting creatinine between the patients and the corresponding control participants. None of them had HbA1c levels ≥48 mmol/mol (Table 1). In the fasting state, the KCNQ1 R670K carrier presented with slightly higher fasting insulin levels, although still within the range observed in the control participants (insulin 88 vs. 14-137 pmol/L and C-peptide 774 vs. 338-1226 pmol/L), and therefore increased HOMA-IR (3.1 vs. 1.5 ± 1.2) and HOMA-Beta (123% vs. 70 ± 55%) compared with the control participants. In contrast, during glucose stimulation the KCNQ1 R670K carrier had a markedly blunted C-peptide response and lower glucose levels compared with the control participants and the KCNE1 G60D carrier (Fig. 1 and S1). The glucose-stimulated GLP-1 response was also blunted in the KCNQ1 gain-of-function patient compared with the control participants, whereas the glucagon response did not differ among the examined participants (Fig. 1). During CGM for 3-7 days, the KCNQ1 mutation carrier had a smaller increase in blood glucose levels within 1 h after carbohydrate-rich meals (mean increase of 0.8 ± 0.6 mmol/L) compared with both the matched controls and the KCNE1 mutation carrier (mean increases of 1.5 ± 0.4 mmol/L and 1.7 ± 0.5 mmol/L, respectively) (Fig. S2). The two patients had similar cardiac profiles as previously reported [8,9] (Fig. 2).
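For readers who wish to reproduce such summary measures, the sketch below shows how an incremental AUC (iAUC) over 360 min and the HOMA indices are commonly computed. The sampling times follow the protocol above, but the concentration values are illustrative only (not the patients' raw data), and the HOMA formulas and the pmol/L-to-µU/mL insulin conversion factor are the commonly used ones, which this report does not itself specify.

```python
import numpy as np

def iauc(t_min, conc):
    """Incremental AUC: trapezoidal area of the concentration curve above
    the mean fasting baseline (samples taken at or before t = 0 min)."""
    t, c = np.asarray(t_min, float), np.asarray(conc, float)
    baseline = c[t <= 0].mean()
    post = t >= 0
    return np.trapz(c[post] - baseline, t[post])

def homa_indices(fasting_glucose_mmol_l, fasting_insulin_pmol_l):
    """Standard HOMA-IR and HOMA-beta; 6.945 pmol/L per uU/mL is an
    assumed (commonly used) insulin unit conversion."""
    ins = fasting_insulin_pmol_l / 6.945
    homa_ir = fasting_glucose_mmol_l * ins / 22.5
    homa_beta = 20.0 * ins / (fasting_glucose_mmol_l - 3.5)  # in percent
    return homa_ir, homa_beta

# illustrative C-peptide trace (pmol/L) at the OGTT sampling times (min)
t = [-15, -10, 0, 15, 30, 45, 60, 90, 120, 180, 240, 300, 360]
cpep = [770, 775, 774, 900, 1100, 1300, 1400, 1350, 1200, 1000, 900, 820, 780]
print(f"iAUC(0-360 min) = {iauc(t, cpep):.0f} pmol/L*min")
# with an assumed fasting glucose of 5.5 mmol/L and insulin of 88 pmol/L,
# this reproduces HOMA values close to the reported 3.1 and 123%
print("HOMA-IR, HOMA-beta =", homa_indices(5.5, 88.0))
```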
Discussion and conclusions
We previously identified that patients with loss-of-function mutations in KCNQ1 have increased glucose-stimulated C-peptide and insulin secretion, but normal fasting levels [4]. Kv7.1 is expressed in human beta cells and participates in depolarization-evoked insulin exocytosis [5]. In this case study of a patient with a gain-of-function mutation in KCNQ1, we observed a lower glucose-stimulated C-peptide and GLP-1 response compared with matched control participants. This may be due to a shorter repolarization duration in beta cells and L-cells, similar to the shortened action potential duration observed for this variant in Xenopus laevis oocytes [8,9]. Furthermore, this observation is in agreement with studies of overexpression of KCNQ1 showing an increased glucose-stimulated K+ current and impaired, limited glucose-stimulated insulin secretion from beta cells [6]. The KCNQ1 gain-of-function patient had a low increase in glucose level after glucose ingestion, both during the OGTT and during 7 days of continuous glucose monitoring, even though his C-peptide response to glucose stimulation was low. The patient reported that he bicycled more than 20 km every weekday, which may explain his high insulin sensitivity in the glucose-stimulated state, enabling him to compensate for the low C-peptide levels by increased glucose uptake in the muscles. Intron variants in KCNQ1 associated with increased risk of type 2 diabetes in genome-wide association studies seem to increase the function of KCNQ1, whereas siRNA silencing decreases KCNQ1 function and increases exocytosis of insulin [5]. Thus, with time and without exercise, the KCNQ1 gain-of-function patient may be at risk of type 2 diabetes. We also examined an AF patient with a gain-of-function mutation in KCNE1, which is not expressed in the pancreas [7]; this patient had a C-peptide response similar to that of the matched control participants. Hence, KCNE1 does not seem to function as a beta subunit of KCNQ1 in human beta cells. Although limited by the very modest sample size, this study provides additional suggestions of the involvement of the voltage-gated potassium channel KCNQ1 in insulin regulation.

Acknowledgements
We thank the study participants and the technicians Annemette Forman and Lene Albaek.

Authors' contributions
SST, JK, and TH designed the study. MSO provided the patients. AL provided the cohort of matched control participants. LH-C collected data. JZ analyzed and evaluated data. CRJ contributed to evaluating data. JZ wrote the manuscript with help from SST. TH, JJH, AL, MSO, SST, JK and CRJ contributed to discussion, reviewed/edited the manuscript and approved the final version. The corresponding authors JK and SST confirm full access to data and final responsibility for the decision to submit for publication. All authors have read and approved the manuscript.

Funding
The study was supported by the Novo Nordisk Foundation Center for Basic Metabolic Research Synergy Grant (SST) and the Danish Heart Association (SST). LH-C was supported by a scholarship from the Lundbeck Foundation, and JZ was supported by a scholarship from the China Scholarship Council. None of the funders were involved in the planning and conduct of the study, the analysis and interpretation of the data, or the writing of the manuscript.

Availability of data and materials
The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Fig. 2 Results from the ECG measurements during the oral glucose tolerance test (OGTT): heart rate (a), QTcB (b) and QTcF (c) over the 6-h OGTT for the KCNQ1 R670K carrier (KCNQ1), the KCNE1 G60D carrier (KCNE1) and their BMI-, sex- and age-matched control participants (n = 4); means ± SEM
Phase transitions in the two-dimensional super-antiferromagnetic Ising model with next-nearest-neighbor interactions

We use Monte Carlo and Transfer Matrix methods in combination with extrapolation schemes to determine the phase diagram of the 2D super-antiferromagnetic (SAF) Ising model with next-nearest-neighbor (nnn) interactions in a magnetic field. The interactions between nearest-neighbor (nn) spins are ferromagnetic along x, and antiferromagnetic along y. We find that for sufficiently low temperatures and fields, there exists a region limited by a critical line of second-order transitions separating a SAF phase from a magnetically induced paramagnetic phase. We did not find any region with either a first-order transition or re-entrant behavior. The nnn couplings produce either an expansion or a contraction of the SAF phase. Expansion occurs when the interactions are antiferromagnetic, and contraction when they are ferromagnetic. There is a critical ratio R_c = 1/2 between nnn- and nn-couplings, beyond which the SAF phase no longer exists.

In the nn Ising antiferromagnet, an applied magnetic field competes with the checkerboard AF order and causes the system to show tricritical behavior. That is characterized by the presence of a tricritical point (H_t, T_t) on the phase diagram line where the transition changes from second to first order [10-12]. The purpose of this work is to investigate the influence of nnn couplings on the phase transitions of the 2D SAF Ising model in a uniform magnetic field. The Hamiltonian is

E = -J_x Σ_{<i,j>_x} S_i^z S_j^z + J_y Σ_{<i,j>_y} S_i^z S_j^z - J_2 Σ_{<<i,j>>} S_i^z S_j^z - H Σ_i S_i^z,   (1)

where S_i^z can take the values ±1, the first two sums run over nn pairs along x and y, respectively, and the third sum runs over nnn (diagonal) pairs. The parameters J_x and J_y are energy couplings between nn spins along x and y, respectively, J_2 is the coupling between nnn spins, and H is the magnetic field. In this work we assume J_x = J_y = J_1 > 0, whereas J_2 can be either positive or negative. For simplicity, from here on we use the notation R = J_2/J_1 and set J_1 = 1 as the energy unit. Figure 1 shows the energy couplings that appear in Eq. (1). To determine the phase diagram of the model, we use two different numerical methods, Monte Carlo (MC) [13-15] and Transfer Matrix (TM) [11,16]. Both methods have been widely used in statistical physics problems, especially for Ising-type models. We are interested in the location of the phase boundaries and in the nature of the transitions, whether they are of first or second order, as well as whether re-entrant behavior is observed. Both methods are well suited to achieve those objectives. In the present work, we use both methods to determine the phase boundaries. Even though MC is very reliable for ascertaining the nature of the phases [15,17], we elect to use the TM method mostly due to its simplicity: once the phase boundaries are found by the TM method, little further computational effort is needed to establish their nature [11,16]. We show results for the cases R = ±0.2 and ±0.4, which, as we shall see, provide the essential features of the phase diagram. We also consider the case R = 0, whose behavior is known [2], to check the reliability of our calculations. In our MC calculations, we use the single-flip Metropolis algorithm [18] on square lattices of L × L spins, 8 ≤ L ≤ 128, with periodic boundary conditions. We divide the lattice into two sub-lattices A and B, such that A (B) is the set of rows labeled with even (odd) indices. We use even values of L to avoid frustration effects at the edges of the y-direction, along which there is AF ordering in the SAF phase. First, for a given set of the energy parameters and temperature, we let the system equilibrate over 10^7 Monte Carlo steps (MCS).
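To make the simulation procedure concrete, the following is a minimal single-flip Metropolis sketch for the Hamiltonian of Eq. (1). The lattice size, temperature, field, and number of sweeps are arbitrary illustrative choices, and none of the production machinery described here (10^7 equilibration steps, binning, cumulant analysis) is included.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(S, T, R=0.0, H=0.0, J1=1.0):
    """One MC sweep (L*L single-flip attempts) for Eq. (1):
    axis 0 = x (ferromagnetic J1), axis 1 = y (antiferromagnetic J1),
    nnn coupling J2 = R*J1, uniform field H, periodic boundaries."""
    L = S.shape[0]
    J2 = R * J1
    for _ in range(L * L):
        x, y = rng.integers(L, size=2)
        s = S[x, y]
        nn_x = S[(x + 1) % L, y] + S[(x - 1) % L, y]
        nn_y = S[x, (y + 1) % L] + S[x, (y - 1) % L]
        nnn = (S[(x + 1) % L, (y + 1) % L] + S[(x + 1) % L, (y - 1) % L]
               + S[(x - 1) % L, (y + 1) % L] + S[(x - 1) % L, (y - 1) % L])
        # flipping s -> -s changes the energy of Eq. (1) by dE = 2*s*h_loc
        h_loc = J1 * nn_x - J1 * nn_y + J2 * nnn + H
        dE = 2.0 * s * h_loc
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            S[x, y] = -s

L = 16
S = rng.choice([-1, 1], size=(L, L))
for _ in range(1000):               # far fewer sweeps than a production run
    metropolis_sweep(S, T=1.0, R=-0.2, H=2.0)
print("magnetization per spin:", S.mean())
```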
Then we collect the data for each additional configuration generated by a sweep through the lattice. The data are stored in 10^3 bins, each holding up to 10^4-10^5 sets of data points. This ensures that the autocorrelation time does not exceed the bin size. The average values in each bin are used to determine the statistical averages and the standard errors. The corresponding error bars are always smaller than the symbols used in all the graphs that follow. In addition to the internal energy, specific heat, magnetization, and susceptibility, we also calculate the fourth-order cumulant of the SAF magnetization, defined by

U_L = 1 - <M_s^4>_L / (3 <M_s^2>_L^2).   (2)

The quantities <M_s^2>_L and <M_s^4>_L are the second- and fourth-order moments of the SAF magnetization M_s = (m_A - m_B)/2. The quantities <m_A> and <m_B> are the sub-lattice magnetizations, with m_p = (2/L^2) Σ_{i∈p} S_i^z and p = A, B. One of the properties of the fourth-order cumulant, Eq. (2), is that as T → 0, U_L → 2/3, regardless of the value of L. At criticality, U_L → U* in the thermodynamic limit [15,19-21]. The critical temperature is determined by the intersections of the U_L curves for systems of different sizes. As an example, in Fig. 2 we plot the fourth-order cumulant versus temperature for the case R = -0.2, H_c = 2.0, with system sizes L = 8, 16, ..., 128. The curves intersect nearly at the same point. In order to determine the critical temperature in the thermodynamic limit, in Fig. 3 we plot the crossing temperatures for two systems of linear sizes L and L' = L + 2 versus the ratio x = L/(L + 2), with the same parameters as in Fig. 2; extrapolating to x → 1 gives the thermodynamic value T_c = 2.99 ± 0.01. As can be seen from these figures, the temperature crossings converge fairly rapidly to the thermodynamic value of T_c. That value can be inferred even when very small lattices are used. We employ this procedure to obtain the critical lines in the H-T plane. Numerically, it becomes prohibitively time-consuming to analyze the region T < 0.2, since it becomes very difficult to obtain reliable statistics. Hence, in our MC simulations, we only treat cases T ≥ 0.2. At T = 0, however, the model is trivially solvable, so that we can determine the critical temperatures and fields and thus complete the phase diagrams to satisfaction. There are two possible phases which, depending on the applied field, can be the ground states of the system: the SAF state, with its alternating rows of up- and down-spins, and the induced ferromagnetic (F) state. At sufficiently low fields H the SAF state prevails, whereas at very large H all the down-spins are flipped in the direction of the field, hence the F state. All other phases, like the AFM checkerboard or more exotic orderings, have higher energies than those of the SAF and F states, and can therefore be disregarded. The ground-state energies of the SAF and F states are readily calculated, with results (per spin)

E_SAF/N = -2J_1 + 2J_2,   E_F/N = -2J_2 - H.   (3)

By equating these energies, we determine the field strength necessary to align all the spins with the magnetic field without expenditure of energy,

H_c(T = 0) = 2J_1 - 4J_2 = 2J_1(1 - 2R).   (4)

We now proceed to the determination of the phase diagram of the system by using the TM method [16]. In addition to the location of the critical temperatures and fields, the method provides a simple criterion to establish the nature of the transition, whether it is of second or first order. It relies on two correlation lengths,

ξ_L^(α) = [ln(λ_0^(L) / |λ_α^(L)|)]^(-1),   α = 1, 2,

where α = 1 denotes the first, and α = 2 the second correlation length. The quantities λ_0^(L) and λ_α^(L) are the largest and the (α+1)-th largest (in modulus) eigenvalues of the transfer matrix of a strip of width L. We calculate the correlation lengths for infinite strips of widths L = 2, 4, ..., 16 lattice spacings, with periodic boundary conditions. The final results are extrapolated to L → ∞.
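As an illustration of this construction, the sketch below builds the transfer matrix of a strip and extracts the two leading correlation lengths. For brevity it treats only the nn case (R = 0), where a single-column transfer matrix suffices; including the nnn coupling would require transferring two columns at a time. The temperature and field values are arbitrary.

```python
import itertools
import numpy as np

def correlation_lengths(L, T, J1=1.0, H=0.0):
    """Leading correlation lengths of an infinite strip of width L for
    Eq. (1) with R = 0, transfer taken along x (the ferromagnetic
    direction), periodic boundary conditions across the strip."""
    beta = 1.0 / T
    states = np.array(list(itertools.product([-1, 1], repeat=L)))

    # within-column (y-direction, antiferromagnetic) bonds plus field
    col = np.array([J1 * np.sum(s * np.roll(s, 1)) - H * s.sum()
                    for s in states])
    # inter-column (x-direction, ferromagnetic) bonds
    inter = -J1 * (states @ states.T)

    # asymmetric transfer matrix; its spectrum equals the symmetric one's
    Tm = np.exp(-beta * (inter + col[None, :]))
    lam = np.sort(np.abs(np.linalg.eigvals(Tm)))[::-1]
    xi1 = 1.0 / np.log(lam[0] / lam[1])
    xi2 = 1.0 / np.log(lam[0] / lam[2])
    return xi1, xi2

for L in (4, 6, 8):
    print(L, correlation_lengths(L, T=2.0, H=1.0))
```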
In Fig. 5, we plot the first correlation length ξ_L^(1) as a function of temperature at the field H_c = 3.599945. We use a similar plot with the second correlation length, ξ_L^(2), to unravel the nature of the transition. Figure 6 shows the second correlation length for strips of widths L = 6 and 8. The curves never cross, thus indicating that the transition is of second order. We have examined the phase diagram with this procedure throughout, and conclude that the transitions are always of second order for the entire range of parameters; no re-entrant behavior is ever observed. In order to obtain the thermodynamic values of the critical temperatures and fields, in Fig. 7 we plot the critical temperatures T_c against the ratio x = L/M, M = L + 2. We choose the same energy parameters as those presented in the previous figures. Throughout the examined range, the nnn coupling does not produce first-order transitions. The critical lines for R = 0.2 and 0.4 are shown in Fig. 10. There is a shrinkage of the region occupied by the SAF phase as R increases. That is a result of the nnn interactions competing with the local AF couplings, thus weakening the SAF phase. Hence, smaller fields and temperatures are able to destroy the order. The SAF phase region disappears altogether as R → 1/2, which follows from setting H_c = 0 in Eq. (4). To summarize, we studied the phase transitions of the SAF Ising model with nnn couplings in a uniform external magnetic field on a square lattice. We used two numerical methods, Monte Carlo (MC) and Transfer Matrix (TM), to obtain the critical lines in the (H-T) plane. We find that all transitions are of second order, and no evidence for re-entrant behavior was observed. Our main results are shown in Figs. 9 and 10. The critical properties of the model are marked by a transition line separating the SAF phase at low temperatures and fields from a paramagnetic phase at high temperatures and fields. The SAF order is reinforced when R < 0, and depressed when R > 0, up until the limiting value R_c = 1/2, at which the phase disappears entirely.
Confluent and Reticulated Papillomatosis of Carteaud and Gougerot in a Young Nepali Male

Confluent and reticulated papillomatosis of Carteaud and Gougerot is a keratinization disorder with an infective aetiology. Patients present with hyperpigmented papules on the upper trunk and axillae that coalesce centrally and demonstrate reticulation peripherally. Diagnosis is based on clinical findings, characteristic histopathologic changes and response to therapy. We report a case of a young Nepali male who presented with gradual onset of asymptomatic raised dark brown lesions on his neck, trunk and axillae over the course of eight years. The condition had previously been misdiagnosed as pityriasis versicolor, and he had received oral and topical antifungals. The diagnosis was revised to confluent and reticulated papillomatosis based on clinical and histopathological examination. He was subsequently started on oral minocycline 50 mg twice daily and nightly application of topical tretinoin 0.05% gel. There was complete resolution of all his lesions, except for residual hyperpigmentation, at the end of two months of therapy. There has been no relapse six months from the end of therapy. This is, to our knowledge, the first case of confluent and reticulated papillomatosis reported from Nepal. Oral minocycline and topical tretinoin should be considered first line in the treatment of confluent and reticulated papillomatosis.

Introduction
Confluent and reticulated papillomatosis (CRP) was first described by Gougerot and Carteaud in 1927.1 It is a rare condition that typically develops in young adults and presents with dark brown papules on the upper trunk and axillae that coalesce centrally and become reticulated peripherally.1 It is assumed to be a disorder of keratinization with an infective aetiology, based on histopathological features and its response to antibiotics and retinoids.1 The treatment options include tetracycline and macrolide antibiotics, oral and topical retinoids, oral and topical antifungals, and topical calcineurin inhibitors.1 Herein, we report a case of CRP in a young adult which responded to oral minocycline in combination with a topical retinoid.

Case Report
A 23-year-old Nepali male presented to our hospital with a 7-year history of asymptomatic, brownish skin lesions on his neck, trunk and axillae. The lesions started on the trunk and slowly spread to the axillae and neck over several years. He had been treated with oral and topical antifungals in the past, which did not lead to any improvement. On examination, there were multiple hyperpigmented papules on the upper chest, back, neck, upper arms and axillae, which had coalesced centrally to form plaques while demonstrating reticulation peripherally (Figure 1). The lesions were swabbed with 70% alcohol, which did not remove them. A potassium hydroxide test did not reveal the presence of any fungal organisms. A biopsy taken from the trunk revealed slight acanthosis with hyperorthokeratosis, papillomatosis, and slight hyperpigmentation of the basal layer. The underlying dermis showed a mild degree of perivascular lymphocytic infiltration (Figure 2). Periodic acid-Schiff staining did not reveal the presence of any fungal organism. Congo red staining was negative for amyloid material. On the basis of these findings, his condition was diagnosed as CRP and he was initiated on minocycline capsules 50 mg twice daily along with nightly application of tretinoin 0.05% gel.
There was complete resolution of the lesions after two months of treatment, and he has remained disease free for the last six months (Figure 3).

Discussion
CRP usually occurs in young adults as asymptomatic brown to hyperpigmented papules on the upper trunk, axillae and neck.1 An infective aetiology was confirmed based on the response of the condition to antimicrobials and the demonstration of Dietzia papillomatosis in skin scrapings by Jones et al.2 It has also been associated with pregnancy, insulin resistance, obesity, pituitary and thyroid disorders, and can also be familial.3 The diagnosis is based on the criteria developed by Davis et al4 in 2006, which are as follows: (i) presence of scaly brown macules and patches, some of which are reticulated and papillomatous; (ii) upper trunk and neck involvement; (iii) absence of fungus in skin scales; (iv) absence of response to antifungal treatment; and (v) excellent response to minocycline. The histological features suggestive of CRP are: 1) basket-weave orthohyperkeratosis, 2) papillomatosis, 3) focal acanthosis and 4) increased basal melanin pigmentation.1 The differentials that we considered in our case were acanthosis nigricans, pityriasis versicolor, terra firma-forme dermatosis and Darier disease. Acanthosis nigricans was excluded on the basis of the absence of an increased body habitus and the presence of acanthosis and dermal inflammatory infiltrates on histology. We ruled out pityriasis versicolor based on the history of non-responsiveness to oral and topical antifungals and the absence of hyphae and spores on potassium hydroxide preparation of skin scrapings. We had swabbed his skin with 70% alcohol, which did not lead to any improvement, thus ruling out terra firma-forme dermatosis. Darier disease was excluded based on the absence of nail and palmoplantar changes and the absence of characteristic histological findings. The therapeutic options include oral antimicrobials, oral and topical antifungals, topical retinoids and topical calcineurin inhibitors.1 We started the patient on minocycline 50 mg twice daily for two months, following which there was complete resolution of his lesions. He has remained disease free for the last 6 months. As one of the diagnostic criteria proposed by Davis et al4 was response to minocycline, our patient fulfilled all the criteria for CRP. Other antimicrobials that can be used are doxycycline, tetracycline and azithromycin. The response to antibiotics is excellent, with more than a 50% response to minocycline or azithromycin.1 Cases that are unresponsive to minocycline or azithromycin can be started on systemic retinoids such as isotretinoin.5 Some authors believe that the condition is transitory, can resolve on its own and may not require active therapy.6 However, as our patient had had the condition for 7 years and it was slowly progressive, wait-and-watch may not be a feasible option in most cases.

Conclusions
CRP is a rare condition characterized by the development of dark brown lesions on the trunk, neck and axillae of young adults. It should be kept in the differential diagnosis of non-responsive pityriasis versicolor. The response to antibiotics is excellent, and they should be initiated in all patients with the condition instead of adopting a wait-and-watch approach.
Evaluation of Hydrogen Sulfide Scrubbing Systems for Anaerobic Digesters on Two U.S. Dairy Farms

Hydrogen sulfide (H2S) is a corrosive trace gas present in biogas produced from anaerobic digestion systems that should be removed to reduce engine-generator set maintenance costs. This study was conducted to provide a more complete understanding of two H2S scrubbers in terms of efficiency, operational and maintenance parameters, capital and operational costs, and the effect of scrubber management on sustained H2S reduction potential. For this work, biogas H2S, CO2, O2, and CH4 concentrations were quantified for two existing H2S scrubbing systems (an iron-oxide scrubber, and biological oxidation using air injection) located on two rural dairy farms. In the micro-aerated digester, the variability in biogas H2S concentration (average: 1938 ± 65 ppm) correlated with the O2 concentration (average: 0.030 ± 0.004%). For the iron-oxide scrubber, there was no significant difference between the H2S concentrations in the pre-scrubbed (450 ± 42 ppm) and post-scrubbed (430 ± 41 ppm) biogas, due to the use of scrap iron and steel wool instead of the proprietary iron oxide-based adsorbents often used for biogas desulfurization. Even though the capital and operating costs for the two scrubbing systems were low (<$1500/year), the lack of dedicated operators led to inefficient performance of both scrubbing systems.

Introduction
Hydrogen sulfide (H2S) is a corrosive gas that, even in trace quantities, can corrode and damage engine-generator sets (EGS) utilizing biogas from anaerobic digestion (AD) for electricity production. The H2S can react with water vapor present in the biogas, producing hydrosulfuric acid that can be further oxidized to sulfuric acid, which causes corrosion. Hydrogen sulfide is also toxic to living organisms above certain concentrations and can result in a range of adverse health effects. The US Occupational Safety and Health Administration (OSHA) lists the acceptable ceiling concentration for human exposure to H2S as 20 ppm for an 8-h duration [1]. In some industrial sectors, the time-weighted average exposure limit is 10 ppm over 8 h. The acceptable peak concentration above the ceiling is 50 ppm, for a maximum time limit of 10 min. Concentrations exceeding 500 ppm in a closed environment can lead to death within 30-60 min, while concentrations exceeding 1000 ppm are instantly fatal [2]. Combustion of H2S also leads to SOx emissions, which have harmful environmental effects. Anaerobic digesters, used in conjunction with H2S scrubbers, are effective at controlling odor problems, which are often perceived as an environmental issue by residents living close to dairy farms [3]. For digestion systems with an EGS to operate effectively, it is important to remove H2S from the biogas before utilization. The two H2S scrubbing techniques discussed in this study are: (1) biological desulfurization (BDS) of H2S using sulfur-oxidizing bacteria (SOB) to oxidize H2S to elemental sulfur and sulfates, which can occur in a separate bio-trickling filter (BTF) or with air injection into the digester headspace, and (2) physical-chemical adsorption and oxidation using iron oxides. Biological conversion of H2S results from microbial oxidation in an oxygenated environment. Small concentrations of air (or oxygen) are injected into a biological scrubbing system, such as a BTF, or into the digester headspace [9].
The oxygen is used by the SOB, which use H2S, sulfur, and thiosulfate as their primary energy sources. Schieder et al. (2003) showed a 90% reduction in H2S concentrations (up to 5000 ppm) using BTF-based biogas scrubbers (BIO-Sulfex® biofilter modules; Promis Company, Warsaw, Poland) with inlet biogas flow rates ranging from 10 to 350 m³/h [10]. A simpler method of BDS of biogas is the controlled addition of oxygen or air directly into the digester headspace, which creates a micro-aerobic environment for H2S oxidation. However, air injection needs to be carefully controlled in order to prevent accidental formation of explosive gas mixtures of CH4 and O2 [3]. With differences depending on the temperature, residence time, and the percentage of injected air, full-scale digesters with micro-aeration have achieved reductions as high as 80% to 99%, reducing H2S in the biogas from approximately 500 ppm to 20-100 ppm [2]. Iron oxide pellets or wood chips impregnated with iron oxide (also known as 'iron sponge') can also be used for biogas desulfurization [11]. The iron oxide in the media reacts with the H2S and is converted into iron sulfide. Iron sponge is the most recognized iron oxide adsorbent in the industry, with H2S reductions >99.9% (3600 ppm to 1 ppm after scrubbing) reported in the literature [2]. The iron sponge adsorbent can also be operated with a small air flow into the system, along with the biogas input, to promote continuous regeneration. Sulfide removal rates up to 2.5 kg H2S/kg Fe2O3 have been observed in continuously regenerated systems with <1% oxygen input [12]. Studies have shown that proprietary iron oxide-based scrubbing systems, such as SOXSIA® (Gastreatment Services, Bergambacht, Netherlands), can remove up to 2000 ppm of H2S at 40 °C with biogas flow rates of 1000 Nm³/h in full-scale anaerobic digestion (AD) systems, resulting in 2 Nm³ of H2S removed per hour (2.9 kg H2S/h) [8]. A previous study investigated the performance and economic benefits of two BTF systems on NY farms and found that the savings may not justify the total annual cost to own and operate the scrubber systems compared with simply increasing the frequency of oil changes [4]. It was suggested that longer monitoring periods may be necessary to understand the benefits of H2S scrubbing on major generator overhauls. The study also highlighted the importance of a dedicated operator for keeping the systems functioning at peak efficiency. A report on biomethane production in California estimated the cost of an H2S scrubbing system to be around 10% of the total capital costs [3]. It was also suggested that the use of H2S scrubbers depends on the end-use of the biogas, as more frequent oil changes (every 300 h instead of 600 h) could be sufficient for maintaining EGS health. Even though several H2S scrubbing technologies exist, there are only limited field-scale data on long-term H2S removal efficiency and on the costs associated with operating and maintaining a scrubbing system, especially on rural dairy farms in the United States [2]. The objective of this study was to quantify the efficacy and costs associated with H2S scrubber systems, using units on dairy farms with AD systems.
Two different H2S scrubber systems on rural US dairy farms were evaluated through quantification of scrubbing efficiency, capital costs, maintenance costs, and maintenance practices, to determine how scrubber management affected the performance of these systems. The results can be used to understand the costs, maintenance requirements, and variations over time for these two H2S scrubbing systems.

Farm and H2S Scrubber Information
The iron-oxide scrubber (IOS) on Farm 1 (S_IOS) treated biogas from an ambient-temperature anaerobic digester. The 2574 m³ AD system received a combination of food waste and the liquid fraction of dairy manure after solid-liquid separation. The unheated digester was exposed to ambient temperatures, which resulted in lower biogas production during winter months. In addition, there was no mixing of the substrate inside the digester. The farm (750 cows) operated a 110-kW EGS for electricity production, with the produced energy used on-farm. The vessel for the H2S scrubber was a 208 L plastic drum. PVC piping was used for the connection from the digester to the scrubber and then to the EGS. The iron-oxide scrubber was filled with rusted scrap iron and steel scrapings (approximately 50% of the scrubber volume). Additional rusted scrap iron (approximately 25% of the scrubber volume) was added by the farmer after 45 days of monitoring (without cleaning out the used media in the vessel) to increase the efficiency of the scrubbing unit. After 105 days, the old media was removed and changed to fresh grade 000 steel wool (252 pads, 4.4 kg) (Homax, Bellingham, WA, USA) to determine whether the increased surface area of this material would affect scrubber performance. The scrubber media covered three-quarters of the entire volume (156 L) of the scrubbing unit in order to enhance the contact time between the untreated biogas and the steel wool. The biogas flow rate from the digester was measured before the biogas passed through the scrubber. There were no condensation traps before the scrubber to collect condensed water from the water vapor present in the produced biogas. The biogas exited the digester and entered the bottom of the scrubber, flowing through the barrel over the scrubbing media before exiting from the top of the scrubber vessel. A regenerative blower (Gast Regenair Model R5325R-50, Benton Harbor, MI, USA) installed at the outlet of the scrubber was used to pull the biogas through the scrubber and direct it to the generator. The generator was operated only during farm operational hours, which averaged 12 h per day. The air-injection pump for BDS (S_BDS) in the digester headspace on Farm 2 was connected to a commercially designed, mixed anaerobic digester. Raw unseparated dairy manure (650 cows) was mixed with solid food waste (discarded produce) and fed into a 1817 m³ capacity digester. Electricity was generated using a 140-kW generator. The digester was heated to 35 °C using the waste heat from the EGS, with electricity sold to the grid. The generator was operated continuously, with breaks for maintenance and repairs only. The H2S scrubber system consisted of an air pump that injected air into the headspace of the digester. The pump (SST10, Aquatic Ecosystems Inc, Pentair, Apopka, FL, USA) was rated at 223 W and 51 Nm³/h, single phase (115/230 V). The air pump was set to inject air at a constant rate of 2.86 m³/h. A rotameter attached to the air pump, installed by the farmer, was used to measure the flow rate.
The installed air pump did not have an automatic air flow regulator to change the airflow according to the amount of H2S in the biogas. The pipe from the air pump to the digester headspace required regular maintenance to prevent clogs.

Performance Monitoring and Cost Information
The CH4 and H2S concentrations were logged for 179 and 73 days for S_IOS and S_BDS, respectively. The scrubber system capital costs were confirmed, and the scrubber maintenance costs were collected for at least one year from each farm. Untreated and treated biogas were analyzed to detect daily and seasonal differences using two portable continuous biogas testing and monitoring systems (Siemens Model #7MB2337-3CR13-5DR1, Siemens AG, Munich, Germany) for CH4 (0% to 100%), CO2 (0% to 100%), O2 (0% to 100%), and H2S (0-5000 ppm), with a Campbell Scientific CR1000 data logger and acquisition system and gas meters (Model #9500, Thermal Instrument Co, Trevose, PA, USA; Model #FT2, Fox Thermal, Marina, CA, USA), assembled as described in Shelford et al. 2019 [4]. The monitoring systems were moved to and installed at each farm for the study period (73 and 179 days). The Ultramat 23 was capable of auto-calibration with air every eight hours, and regular monitoring and calibration of the units was conducted according to the manufacturer's standards to maintain the accuracy of the H2S sensors. The monitoring systems collected data for 15 min from each biogas stream (pre- and post-H2S scrubbing). Operation and maintenance record keeping for the AD and scrubbing systems was undertaken by the farmers, who recorded the time and costs spent on their AD and scrubber systems, including oil change costs, generator repair costs, and electrical energy generated over 12 months, where available. At the end of December 2016, the gas analyzer system installed for project purposes on Farm 2 (S_BDS) started malfunctioning and had to be removed for repairs, likely due to H2S corrosion. The on-farm biogas was then field tested using a Landtec handheld gas meter (Biogas 5000, Landtec, Dexter, MI, USA) during farm visits.

H2S Removal Calculations
Hydrogen sulfide percent removal (η) was calculated using the formula

η = 100 × (C_in - C_out) / C_in,

where C_in and C_out (ppm) are the scrubber inlet and outlet H2S concentrations. The daily mass (grams/d) of sulfur removed (w) was calculated using the formula

w = (C_in - C_out) × 10^-6 × F × 1.43 × 1000,

where C_in and C_out (ppm) are the scrubber inlet and outlet H2S concentrations, 1.43 kg/m³ is the H2S gas density at NTP (20 °C, 1 atm), and F is the biogas flow rate (m³/d).

Statistical Analysis
Significant differences between pre- and post-scrubbed CH4 and H2S concentrations over time within each farm were determined using t-tests in SAS® statistical analysis software (version 9.4, SAS Institute Inc., Cary, NC, USA), with the alpha value set at 0.05. All values are presented as mean ± standard error.
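A direct transcription of the two formulas above into code, using values reported later in this paper as example inputs:

```python
H2S_DENSITY_KG_M3 = 1.43   # at NTP (20 C, 1 atm), as used in the text

def percent_removal(c_in_ppm, c_out_ppm):
    """Hydrogen sulfide percent removal, eta."""
    return 100.0 * (c_in_ppm - c_out_ppm) / c_in_ppm

def h2s_removed_g_per_day(c_in_ppm, c_out_ppm, flow_m3_per_day):
    """Daily mass of H2S removed, w, in grams/day: ppm (by volume) is a
    1e-6 volume fraction, converted to mass via the gas density."""
    return ((c_in_ppm - c_out_ppm) * 1e-6 * flow_m3_per_day
            * H2S_DENSITY_KG_M3 * 1000.0)

# S_IOS before the media change: 740 ppm in, 719 ppm out, ~980 m3/d flow
print(f"eta = {percent_removal(740, 719):.1f} %")          # ~2.8 %
print(f"w   = {h2s_removed_g_per_day(740, 719, 980):.0f} g/d")  # ~29 g/d
```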
Iron Oxide Scrubber (S_IOS)
The mean H2S concentrations in the pre-scrubbed and post-scrubbed biogas of S_IOS were 450 ± 42 ppm and 430 ± 41 ppm (based on 179 data points: n = 179), respectively, when averaged over the entire study period (August 2016 to January 2017) (Figure 1). Prior to the media change from scrap iron to steel wool (n = 85), the H2S concentrations were 740 ± 53 ppm in the pre-scrubbed biogas and 719 ± 52 ppm in the post-scrubbed biogas. After the media change, the pre-scrubbed H2S concentration (52 ± 9 ppm) was significantly higher (p-value < 0.0001) than the post-scrubbed H2S concentration (33 ± 6 ppm). This rapid decrease (Days 102-120) in H2S concentration is likely due to the temperature drop in the unheated digester at that time. The temperature of the digester effluent dropped from 28.1 °C in August to 10.5 °C in December, corresponding with the ambient temperatures, which averaged 26.1 °C and 3.5 °C, respectively [13]. Sulfate-reducing bacteria (SRB), the primary producers of H2S in anaerobic digesters, have lowered activities at temperatures below 20 °C [14]. During the study period, the average CH4 content in the pre-scrubbed biogas was 64.1 ± 0.2%, with 64.9 ± 0.2% CH4 in the post-scrubbed biogas (Figure 3). The average daily CH4 production rate calculated using the biogas production data over one year (June 2016 to May 2017) was 432 m³/d, or 0.58 m³/cow·day. The daily CH4 production rate from a mesophilic dairy manure AD system can vary from 1.5 m³/cow·day to 3.9 m³/cow·day [16]. As the AD system in this study was not heated, the average CH4 yield was below this range. The use of scrap iron and unoxidized steel wool as scrubbing media, instead of iron sponge or proprietary iron oxide-based adsorbents, resulted in poor H2S removal efficiencies for S_IOS. Dry iron oxide-based adsorbents are the most commonly used and effective scrubbing technique, but can generate a hazardous waste stream [2]. Commercially available iron sponge media can be up to 100% effective, but the use of scrap iron and steel wool as the adsorption media resulted in a low H2S reduction efficiency (3%) for S_IOS [12]. Kohl and Nielsen (1997) also reported that wetted iron oxide-based adsorbents are not as effective as chemically hydrated oxides [15]. The steel wool media and the scrap iron media were not allowed to oxidize before being used for H2S scrubbing, which could have contributed to the low scrubbing efficiency. The media replacement to steel wool and the increased residence time due to the lowered biogas flow rates in the winter season resulted in a decrease in the biogas H2S content, even though the pre-scrubbed H2S concentration was already below 100 ppm. The biogas production varied from 1202 m³/d in the summer (June to September, with an average temperature of 28 °C) to 51 m³/d in the winter (January to February, with an average temperature of 10.9 °C) (Figure 2). The average biogas flow rate before the media change was 980 m³/d (n = 4), which was reduced to 51 m³/d (n = 4) due to the temperature drop that coincided with the media change. The residence time of the biogas in the scrubber consequently increased from 0.25 min to 6 min, as the lower winter temperatures led to a sharp decline in the biogas production from the unheated digester. Commercially available iron oxide media usually require 1-15 min residence time and could have been more efficient at removing H2S for S_IOS, especially during the summer months [12]. Zicari (2003) reported that a farm digester (capacity 554 m³) with an average biogas production of 669 m³/d could reduce H2S concentrations from 3600 to <1 ppm with a 4200 L iron oxide scrubber with a bed height of 240 cm [2]. The S_IOS volume was 208 L, with an empty bed height of 88 cm (66 cm media height) and 4.2 kg of steel wool. The low adsorption efficiency seen in this study was affected by the high volume of biogas passing through the scrubber compared to the scrubber size.
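To make the scale mismatch concrete, the empty-vessel residence time V/Q can be computed as below. This is an upper bound on the true contact time (the packed media reduces the void volume), and it roughly reproduces the 0.25 min and 6 min figures quoted above.

```python
def residence_time_min(vessel_volume_l, biogas_flow_m3_per_day):
    """Empty-bed residence time of biogas in the scrubber vessel, in
    minutes; media packing makes the true contact time somewhat shorter."""
    flow_m3_per_min = biogas_flow_m3_per_day / (24 * 60)
    return (vessel_volume_l / 1000.0) / flow_m3_per_min

# summer vs. winter biogas flows reported for S_IOS (208 L vessel)
for flow in (980, 51):   # m3/d
    print(f"{flow:4d} m3/d -> {residence_time_min(208, flow):.2f} min")
```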
The total volume of biogas passing through the scrubber from August to November 2016 was 119,000 m³, with 3.8 kg of H2S removed from the biogas by the scrubber. After the media replacement with steel wool, a total of 1800 m³ of biogas flowed through the scrubber in 36 days, with 68 g of H2S removed. The low sulfur removal was likely due to the low concentrations of H2S present in the biogas, coupled with the comparatively low effectiveness of the fresh steel wool. Iron oxide-based adsorbents have been shown to remove 0.56 kg H2S/kg adsorbent in a batch system, with a recommended bed height of 120-300 cm [15]. Based on the results from this study, the steel wool had an adsorption capacity of 0.016 kg H2S/kg steel wool, which is an order of magnitude lower than the adsorption capacities of commercially available dry iron oxide-based sorbents. The generator produced a total of 47,158 kWh of electrical energy from the produced biogas from August to December 2016 (131 days), a daily average rate of 380 kWh/d. The EGS stopped functioning in December 2016, but the exact reason for the generator failure was not determined. During daily operation, the generator did not run continuously, which could affect the EGS lifetime. The EGS had an average run-time of 12 h/d, corresponding with day-time farm operations, but variations in the EGS run-time were noted in the farmer's reports. From June to December 2016, the biogas flow rate was continuous during the EGS operational hours, with the regenerative blower supplying the biogas to the generator. The average daily CH4 production during the monitoring period of generator activity was 542 m³/d. The electricity generated from the biogas was 0.70 kWh/m³ CH4, but the flare was not metered, so the actual value may be lower than estimated.
In-Vessel Biological Desulfurization System Using Air Injection (S_BDS)
Overall, biogas H2S concentrations (average: 1938 ± 65 ppm; n = 73) varied considerably during the study period, from 171 to 3327 ppm, while the CH4 (56.2 ± 0.1%) and O2 (0.030 ± 0.004%) concentrations were consistent (October to December 2016). Correlations among H2S, CH4, and O2 were observed, as expected (Figures 4 and 5). In mid-October (Day 7), the H2S concentration decreased to 171 ppm, while the O2 concentration rose to 0.51% and the CH4 concentration dropped to 50%, likely due to nitrogen (N2) introduced into the biogas stream with the air injection. It is likely that once the oxygen was depleted, further oxidation did not take place, and the H2S concentration increased (after Day 9). Schieder et al. (2003) reported that micro-aeration by itself may not be sufficient to achieve complete desulfurization [10]. They collected data from biogas plants in the state of Baden-Württemberg in Germany and found that 54% of the micro-aerated AD systems had outlet H2S concentrations >500 ppm. They suggested the use of an external biological scrubber to achieve outlet H2S concentrations of <100 ppm, increase the life of combined heat and power (CHP) units, and decrease the frequency of oil changes. In practice, digester manufacturing companies in the US have recommended limits of 500 ppm H2S in the biogas [4]. The variable H2S concentrations during the study period indicated variable treatment efficiency. The O2 concentration was not always sufficient for adequate H2S removal (<500 ppm) throughout the period after the initial rise to 0.51% O2. The O2 concentration increased to 0.07% in mid-December for a short duration, which correlated with a decrease in the H2S concentration from 2596 to 1645 ppm.
Ramos et al. (2013) showed that an outlet H2S concentration of <200 ppm can be obtained with low O2 concentrations (0.2% to 0.3%) in the output biogas [17]. The O2 utilization efficiency for H2S oxidation by the SOB increased with a decrease in the ratio of O2 input to initial H2S. Mulbry et al. (2017) also showed that an outlet H2S concentration of <100 ppm can be obtained with 0.5% O2 in the output biogas [18]. In S_BDS, the average outlet O2 concentration was much lower (0.03%), as the air input was set at 2.86 m³/h (2.75% of the average biogas flow rate), resulting in an average O2 input of 0.58%. An increase in the air injection rate could have decreased H2S concentrations further, but at the cost of lowering the CH4 concentration due to N2 dilution. The AD operator did not increase the air injection rate because of the already low CH4 concentration (50% to 55%) in the produced biogas. The EGS efficiency can be negatively affected when operated with a CH4 concentration of <50% [15,16]. In such cases, a pure O2 input may be desirable over air injection, but a pure O2 input entails a higher operational cost.
A variable air flow rate based on the H 2 S production can ensure sufficient desulfurization to meet recommended limits for heating or electricity production while minimizing N 2 dilution [19]. Ramos and Fdz-Polanco (2014) used a PID (proportional-integral-derivative) controller to vary the O 2 flow rate to meet the set output H 2 S concentrations. The O 2 input was controlled using two methods: H 2 S content in the biogas, and biogas production rate, and in both cases >99% removal of H 2 S was obtained [20]. The ORP (oxidation-reduction potential) of the liquid wastewater was used by Khanal and Huang (2006) as a parameter to control the injection rate to prevent under-dosing/overdosing of O 2 [21]. However, instead of adding O 2 directly into the headspace, the authors injected it into the outlet of the reactor that contained a mixture of both biogas and the digester effluent. The resulting mixture was then sent to a separate sulfur oxidizing unit to separate the biogas, the effluent, and the elemental sulfur produced by the SOB. The method was able to reduce >99% of the total dissolved and gaseous sulfides for a range of initial dissolved sulfide concentrations (287 mg/L-1997 mg/L). However, using ORP as a controlling parameter could be unreliable, as each AD system is different and a set standard for an ORP increase may not be appropriate [19]. Addition of O 2 /air into the digester liquid could also lead to degradation of organics in the digestate, and therefore, a higher dose of air/O 2 may be required for adequate H 2 S removal [22]. Another factor that could have affected the desulfurization efficiency is the excess formation of sulfur mats in the digester headspace. The digester headspace was never cleaned, and therefore, large-sized elemental sulfur particles would drop back into the digester, along with the formation of sulfur laden biofilms on the liquid surface [18]. Sulfate reducing bacteria are also known to use elemental sulfur as an energy source for H 2 S production [23]. The accumulation of oxidized sulfates and elemental sulfur can be reduced again by SRB and can lead to increased H 2 S concentrations in the biogas [24]. External vessels used by Ramos et al. (2013) and Mulbry et al. (2017) that can be cleaned on a regular basis have been suggested as a better alternative to prevent reduction of the accumulated sulfates and sulfur [17,18], which resulted in a steady CH 4 production rate within the range for mesophilic digesters (1.5 m 3 /cow·day to 3.9 m 3 /cow·day) [16]. The farm averaged 2003 m 3 /d of biogas flow through the generator (1125 m 3 /d or 1.73 m 3 /cow·day CH 4 yield) and produced 689,656 kWh of electricity in 10 months at a rate of 1.95 kWh/m 3 CH 4 combusted. The average rate of electricity production was 2196 kWh/d. The average biogas flow rate was affected by the generator malfunction during the last 3 weeks of data collection ( Figure 6). 9 O2 input may be desirable over air injection, but a pure O2 input entails a higher operational cost. A constant air flow rate could have reduced the desulfurization efficiency in the digester headspace. A variable air flow rate based on the H2S production can ensure sufficient desulfurization to meet recommended limits for heating or electricity production while minimizing N2 dilution [19]. Ramos and Fdz-Polanco (2014) used a PID (proportional-integral-derivative) controller to vary the O2 flow rate to meet the set output H2S concentrations. 
Economic Analysis
The total cost of the scrubber systems was calculated using data provided by the farm owners. The total capital cost of the iron oxide scrubber system (SIOS) was approximately $525 based on the reactor vessel and piping costs, as this was a homemade system. All the maintenance was conducted by the farm owner, and the labor costs were considered negligible. Additionally, scrap iron ($25 cost) was added by the farmer once during the study. Steel wool media cost $80 to fill the space within the scrubber. The replacement media cost for the scrubber was calculated to be $650/year with original iron scrap, based on 26 media replacements per year, and $960/year with grade 000 steel wool, based on 12 media replacements per year. Approximately $450/year was required for oil changes, as one liter of oil was added to the generator every other day (183 L/yr). The total annual cost to own and operate the scrubber was $1100 (with iron scrap media) and $1410 (with grade 000 steel wool).
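The annual cost figures for the iron oxide scrubber can be reproduced directly from the numbers above; a minimal sketch:

```python
# Annual cost of owning and operating the iron oxide scrubber (all figures from the text).
def annual_media_cost(cost_per_change_usd, changes_per_year):
    return cost_per_change_usd * changes_per_year

oil_changes = 450                               # $/yr for ~183 L of generator oil
iron_scrap = annual_media_cost(25, 26)          # $650/yr with original iron scrap
steel_wool = annual_media_cost(80, 12)          # $960/yr with grade 000 steel wool

print(f"Iron scrap media:     ${iron_scrap + oil_changes}/yr")   # $1100
print(f"Grade 000 steel wool: ${steel_wool + oil_changes}/yr")   # $1410
```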
Generator maintenance and repair can add significant costs as well, but no information was available for generator repair costs. The total capital cost of SBDS was approximately $450 for the air pump for air injection into the digester headspace. Scrubber maintenance was carried out by cleaning out the air injection connection into the digester on a weekly basis. This was estimated to take 20 min per week and cost the farm $120/year in labor costs (estimated at $10/week at $30/h). Oil change costs ranged from $1190 to $1795 per month, and additional costs during a month were for generator repairs. The farm owner spent $10,798 for oil changes and repairs to the EGS engine head in April 2017. One of the primary reasons for the lower oil change costs of SIOS was the lower average H2S concentration (430 ppm) compared to SBDS (1938 ppm). Zicari (2003) tabulated data for different proprietary iron-oxide-based adsorbents, where the capital costs ranged from $8000 to $43,600 and the operating costs ranged from $8290 to $23,840 for a biogas stream with 4000 ppm of H2S and a gas flow rate of 1350 m3/d, which is comparable to the average daily biogas flow rates for both farms in this study [2]. These cited costs were much lower than the costs associated with owning and operating the BTF units in the study conducted by Shelford et al. (2019) [4]. The operational, maintenance, and utilities costs for the BTF systems in their study ranged from $17,050 for Farm 2 to $32,563 for Farm 1, which are comparable to the operational costs of iron oxide scrubbers, but the capital costs were at least four times higher. The proprietary iron-oxide scrubbers examined by Zicari (2003) had high H2S removal efficiencies and low H2S output concentrations (up to 100% and less than 1 ppm) compared to the lower efficiencies (80.1% and 94.5%) and higher H2S output concentrations (450 and 150 ppm) seen in the study by Shelford et al. (2019) [4,12]. However, on larger farms, the operating costs associated with iron oxide scrubbers may be much higher due to the larger volume of biogas to be treated and the higher handling and disposal costs of the spent media [12]. When the costs were normalized on the basis of volume of biogas treated, the costs were comparable, with iron-based adsorbent costs ranging from $0.024 to $0.046 per m3 of biogas treated and BTF system costs ranging from $0.012 to $0.03 per m3 of biogas treated [2,4,12]. Shelford et al. (2019) also calculated the economic benefits of having a BTF scrubbing system by calculating the savings associated with less frequent oil changes after scrubber installation [4]. The farms reported a net annual loss of $61,593 for BTF 1 and $30,093 for BTF 2, which may be economically infeasible for smaller farms, especially during low milk price cycles in the US. The results and observations from this study and the Shelford et al. (2019) study showed that even though H2S scrubbing systems existed on all four farms studied, consistent performance was lacking in the inexpensive systems analyzed in our study. Both SBDS and SIOS had significantly lower capital and operating costs than the two BTF systems, but it is unclear if the farmers realized any economic or social benefits from these two H2S scrubbing systems during the study period.
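The per-m3 normalization used for the comparison above is straightforward; a minimal sketch, assuming a 365-day operating year (the cited ranges may amortize capital costs differently):

```python
# Normalize annual scrubbing cost by the volume of biogas treated (assumes 365 d/yr operation).
def cost_per_m3(annual_cost_usd, biogas_flow_m3_per_day, days_per_year=365):
    return annual_cost_usd / (biogas_flow_m3_per_day * days_per_year)

# Illustrative: the high end of the Zicari (2003) operating-cost range on a 1350 m3/d stream.
print(f"${cost_per_m3(23840, 1350):.3f}/m3")  # ~$0.048/m3, near the top of the cited range
```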
It is also difficult to calculate the monetary benefits of having the scrubbing systems, since there was no information available on oil changes prior to scrubber installation, and given the highly inefficient performance of the scrubbing systems. Table 2 shows the cost information of the BTF units from Shelford et al. (2019) in comparison to the scrubbing systems monitored in this study.
Scrubber Management
An important factor to consider for efficient scrubber operation is scrubber management by farm or AD operators. H2S management on agricultural digesters has lagged behind municipal and industrial digesters due to limited funding [18]. Hiring full-time operators to ensure efficient scrubber performance can lead to unaffordable operating and labor costs, especially for farm owners with AD systems. Changing the iron-oxide media after saturation is a labor-intensive process due to the need for careful handling of the saturated media [12]. Without proper monitoring of biogas quality, it is also impossible for farmers to know when to replace the saturated media or ascertain whether biological conversion of H2S is occurring in the digester headspace. The portable biogas quality monitoring equipment used in the study cost $17,000 and required technical expertise for regular calibration and H2S sensor replacements every 3-6 months for accurate data collection. The farm with in-vessel biological desulfurization (SBDS) had previously installed an external BTF to work in conjunction with the in-vessel micro-aeration. The BTF unit was abandoned for several years after the farmers encountered operational issues that they could not troubleshoot. It is important for manufacturers to provide on-field assistance for the maintenance of these systems for several years after they are purchased. In addition, one of the farms in the Shelford et al. (2019) study had a dedicated operator, and the H2S scrubbing efficiency was 94.5%, whereas the other farm had multiple personnel acting as temporary operators for the BTF unit, which contributed to the H2S scrubbing efficiency dropping to 80.1% (Table 3) [4]. SIOS and SBDS, in this study, did not have dedicated operators maintaining the scrubbing systems and monitoring H2S concentrations in the scrubbed biogas. As a result, the scrap iron media for SIOS was not replaced upon saturation, and it was impossible to determine the effectiveness of the media, leading to poor performance of the system (3% removal efficiency). In the case of SBDS, regular maintenance of the air flow lines to prevent flow obstruction and appropriate modification of the air flow rates could have resulted in a lower H2S concentration in the biogas. In a detailed report compiled by Lusk (1998), it was shown that AD operators faced a multitude of problems caused by high H2S content in biogas [25]. Currently, managing H2S in biogas is still an issue, as seen from our study results. Based on interactions with the participating farmers operating the AD systems, frequent EGS oil changes to reduce corrosion, rather than managing the H2S scrubbing system, were considered the more practical solution. Libarle (2014) found that most AD technology adopters encountered operational and maintenance issues due to a lack of training and scientific understanding of the processes involved [26].
Similar issues were observed during this study, as the farm owners of the SIOS and SBDS systems encountered several hurdles while trying to increase the H2S scrubbing efficiencies of their underperforming systems. In addition, the rural locations of the farms limit access to consultants and AD experts capable of aiding farmers facing challenges from elevated H2S concentrations in the biogas. There seems to be a need for increased assistance (education and outreach workshops, free biogas monitoring services, etc.) to impart more technical knowledge to the farm owners and offset some of the costs involved in managing and maintaining these systems.
Conclusions
The studied in-vessel air injection system for biological desulfurization had a low capital and time investment, with positive but inconsistent H2S removal efficiencies. The iron-oxide scrubber also had a low time and labor investment but negligible H2S removal efficiencies over the study period. The use of appropriate scrubbing media (commercially available iron oxide or iron sponge) for increased reactivity and contact area, instead of scrap iron and steel wool, could have increased the scrubber performance. The study also showed a substantial effect of scrubber operation and management on performance. H2S scrubber systems that were better managed, with more time and labor investment, have shown more efficient and consistent scrubbing performance. Future studies should quantify and incorporate long-term costs (5 or more years) associated with engine overhauls, down-times, repairs, etc., undertaken due to H2S-related damage to better understand the economic benefits of H2S scrubbers.
Funding: This material is based upon work supported by the National Institute of Food and Agriculture at the U.S. Department of Agriculture through a Northeast Sustainable Agriculture Research and Education (SARE) grant (#LNE15-341).
Problematic Internet Use and Smoking among Chinese Junior Secondary Students: The Mediating Role of Depressive Symptomatology and Family Support
Background: Internet use is a significant public health issue and can be a risk factor for other addictive behaviors, such as smoking. The present study examined the association between problematic Internet use (PIU) (i.e., Internet addiction (IA) and social networking addiction (SNA)) and smoking, and the mediating role that depressive symptomatology and family support played in such associations. Methods: A cross-sectional study was conducted among 5182 junior secondary students (grades 7 and 8) recruited from nine schools using stratified sampling. Results: A total of 3.6% of students had smoked in the past month, and 6.4% of students were identified as IA cases. Adjusted for significant background variables, PIU (ORa = 2.07, 95% CI = 1.48, 2.90 for IA; ORa = 1.26, 95% CI = 1.09, 1.47 for SNA) and probable depression (ORa = 1.33, 95% CI = 1.05, 1.69) were significant risk factors, while family support (ORa = 0.85, 95% CI = 0.77, 0.94) was a significant protective factor of smoking. The mediation effects of lower family support and probable depression on the association between the IA scale score and smoking, and the mediation effect of lower family support on the association between the SNA scale score and smoking, were significant, while the mediation effect of probable depression on the association between the SNA scale score and smoking was marginally significant. Conclusions: PIU contributed to an increased risk of smoking through depressive symptomatology and decreased family support among junior school students. Interventions to reduce smoking are warranted; they should seek to reduce problematic Internet use and depressive symptomatology, and promote family support.
Introduction
Smoking is considered one of the leading preventable causes of death in the world [1] and is largely initiated and established during adolescence [2]. In China, a lower age of smoking onset has been observed, and smoking among adolescents has been commonly reported [3]. One study among vocational high school students found that 45% had initiated smoking and 25% had smoked in the past month [4]. Another study among junior high school students showed that 5.6% had smoked in the past month [5]. Starting to smoke in adolescence predicts future smoking patterns [6,7] and also other problem behaviours such as substance use, violent behaviours, dropping out of school, and risky sexual behaviours [8,9]. Adolescent smoking has also been found to be associated with various negative health outcomes, such as sleeping disorders [10] and respiratory infections [11]. It is therefore important to identify factors associated with smoking among adolescents so that effective interventions can be designed. With the rapid growth of the Internet and social media, the emerging issue of problematic Internet use (PIU) might pose additional challenges to the smoking concern among adolescents. A meta-analysis of 31 nations across seven world regions estimated the global prevalence of IA at 6.0% [22]. In China, one study among 1173 Chinese college students found that 15.2% were classified as having Internet addiction (IA) [23]; another study reported that 12% of young smart-phone users (mean age = 26) were classified as probable problematic social networking users [24].
As smoking and PIU are both considered addictive behaviors, it is possible that they share similar underlying roots. It has been shown that both Internet gaming disorder and nicotine dependence share similar neural mechanisms related to craving and impulsive inhibition [25]. Similarly, a few studies have documented the association between PIU and smoking. For example, one study investigating the association of smoking and PIU among adolescents reported a dose-dependent relationship between smoking and IA, in which IA tended to increase with a current smoking habit or the number of cigarettes smoked [26]; a longitudinal study among high school students in China and the USA reported a positive predictive relationship between baseline compulsive Internet use and change in substance use (which included smoking) at one-year follow-up among female students [27]. Despite the numerous studies that have reported an association between PIU and smoking, very few have explored the mechanisms by which PIU might lead to smoking. In the present study, we hypothesized that a lower level of support from family and a higher level of depressive symptomatology might be potential mediating factors between PIU and smoking. There is significant concern that adolescents who are frequently exposed to the Internet may substitute the Internet for direct human interactions, leading to a decreased level of social support and an increased level of depression. Indeed, studies have shown that depression is one of the most common comorbidities of PIU across various populations [28,29]. For instance, a longitudinal study among Chinese adolescents showed that those who developed IA exhibited a greater increase in depression than the non-addiction group at the one-year follow-up [30]. PIU has also been found to be significantly associated with parental conflict, social isolation, and low levels of social support in various studies [28,29]. One longitudinal study found that greater use of the Internet was associated with declines in participants' communication with family members in the household, declines in the size of their social circle, and increases in their depression [31]. In China, studies have shown that students who reported poorer parent-child relationships, higher levels of depression, and lower levels of psychosocial competence were more likely to report behaviors indicative of IA [23]. Alternatively, the association of lower levels of family support and higher levels of depression with smoking has been widely documented [15,16,20,21,32]. Despite the extensive evidence on the associations between PIU, family support, and depressive symptomatology, and between family support, depressive symptomatology, and smoking, no studies to date have examined the mediating role of these psychosocial variables in the relationship between PIU and smoking. In addition, most of the evidence reported to date has focused mainly on IA, while relatively few studies have been conducted on the associations between social networking addiction (SNA), family support, depressive symptomatology, and smoking. The present study examined the prevalence of smoking and the association between PIU (as measured by IA and SNA) and smoking among adolescents in China. The potential mediators, including depressive symptomatology and lower levels of family support, were also examined.
Target Participants
Participants were grade 7-8 students from nine public junior middle schools in Guangzhou, Mainland China.
Grade nine students were excluded as they would need to prepare for the high school entrance examination.
Sampling
A stratified cluster sampling method was applied to recruit participants. First, one district was selected from each of the three regions (core region, suburb region, and outer suburb region) of Guangzhou using convenience sampling. Second, three junior middle schools were selected from each selected district/county using convenience sampling. A total of nine schools were thus selected. All grade 7 and 8 students from the selected schools were invited to take part in the study.
Participant Recruitment
Consent and permission to administer the survey were obtained from school principals prior to data collection. An anonymous structured questionnaire was administered to the students by trained field workers in the absence of teachers in classroom settings. Information on the study's background and purpose was printed on the cover page of the questionnaire. The voluntary nature of the study and the right to refuse to take part were also highlighted by the field workers. Participants received no incentive, and return of the questionnaire implied informed consent. The study was approved by the Survey and Behavioral Research Ethics Committee of the Chinese University of Hong Kong. A total of 5472 students completed the survey. Only participants who provided complete data on the studied variables were included for analysis. As a result, 290 participants were excluded from the analysis, resulting in a total of 5182 valid responses.
Measures
Background characteristics. Participants were asked about their gender, grade, and parental educational attainment. They were also asked whether they were social networking users, and to rate their own academic pressure and perceived academic performance.
PIU. PIU was measured by IA and SNA scores. IA was measured by the eight-item Young's diagnostic questionnaire (YDQ) [33]. All items involved "yes/no" response categories, and participants who provided five or more "yes" answers were classified as cases of IA [34]. The scale has commonly been used in the Chinese student population [35,36]. The Cronbach's α of the scale was 0.66 in the present study. SNA was measured by a modified version of the Facebook addiction scale [37], which included eight items describing addictive symptoms (i.e., cognitive and behavioral salience, conflict with other activities, euphoria, loss of control, withdrawal, and relapse and reinstatement). In the present study, the word "Facebook" was replaced by "online social networking", and translation and back-translation processes were implemented by two bilingual researchers. Response categories were rated from 1 = not true to 5 = extremely true. The total score of the scale ranged from 8 to 40, with a higher score indicating a stronger addictive tendency to social networking. The scale showed good internal reliability in the present study (Cronbach's α = 0.86).
Depressive symptomatology. Depressive symptomatology was assessed using the Chinese version of the 20-item Center for Epidemiological Studies-Depression scale (CES-D) [38]. The CES-D is one of the most commonly used self-report instruments for screening depressive symptomatology [39]. It has been used in the Chinese adolescent population [28,40]. All items were rated on a four-point Likert scale ranging from 0 = rarely or none of the time (less than 1 day) to 3 = most or all of the time (5-7 days).
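A minimal sketch of the scoring rules just described, with hypothetical item vectors:

```python
def ydq_ia_case(answers):
    """YDQ: eight yes/no items (1 = yes); five or more 'yes' answers -> IA case."""
    assert len(answers) == 8
    return sum(answers) >= 5

def sna_score(ratings):
    """Modified Facebook addiction scale: eight items rated 1-5; total in [8, 40]."""
    assert len(ratings) == 8 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

print(ydq_ia_case([1, 1, 1, 0, 1, 1, 0, 0]))  # True: five 'yes' answers
print(sna_score([3, 2, 4, 1, 2, 3, 2, 2]))    # 19
```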
The total score ranged from 0 to 60, with a higher score reflecting more depressive symptoms. As suggested for the scale, a score of 16 to 20 was classified as mild depression, 21 to 24 as moderate depression, and 25 or above as severe depression [40]. Based on the suggested cut-off, participants who scored 16 or above were classified as having probable depression in the present study. The scale showed good internal reliability in the present study (Cronbach's α = 0.86).
Family support. Family support was measured by the four-item perceived family support subscale of the multidimensional scale of perceived social support (MSPSS) [41]. It has been used in the Chinese adolescent population [42]. Each item was rated on a seven-point Likert scale ranging from 1 = very strongly disagree to 7 = very strongly agree, with higher scores indicating a higher level of perceived family support. The scale showed good internal reliability in the present study (Cronbach's α = 0.90).
Smoking. Participants were asked to report whether they had smoked in the past month, and the number of cigarettes they smoked per day.
Data Analysis
Descriptive statistics (e.g., means (M), standard deviations (SD), percentages) were presented. Associations between background characteristics (i.e., socio-demographic variables and variables related to academic issues) and smoking were tested by multilevel logistic regression, with individual students as level 1 and school as level 2; univariate odds ratios (ORu) and their respective 95% confidence intervals (CI) were derived. For IA and depressive symptomatology, binary variables were created based on the suggested cut-offs. For the continuous variables (i.e., SNA and family support), we standardized the scores by their means and standard deviations to calculate Z-scores. Multiple multilevel logistic regression models were then fit separately for each of the PIU (i.e., IA/SNA scale score) and psychosocial (i.e., family support, probable depression) variables on smoking, adjusted for background variables that were significant at p < 0.05. The resulting adjusted odds ratios (ORa) and 95% CIs are reported. Next, based on the procedure proposed by Baron and Kenny, a series of multilevel logistic regressions were conducted to test the mediating role of depressive symptomatology and family support in the association between PIU and smoking. First, the association between the independent variable (i.e., IA/SNA scale score) and the dependent variable (i.e., smoking) was tested after adjusting for significant background characteristics. Second, the associations between the independent variable (i.e., IA/SNA scale score) and the potential mediators (i.e., family support, probable depression) were examined, adjusting for significant background characteristics. The association between the IA/SNA scale score and family support was estimated using linear regressions (with the Z-score of family support as the dependent variable), while the association between IA/SNA and probable depression was estimated using logistic regressions. Third, the independent variable (i.e., the IA/SNA scale score) and the potential mediators (i.e., family support, probable depression) were entered into the same model to predict smoking, adjusting for significant background characteristics.
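Once the a path (predictor to mediator) and b path (mediator to outcome) are estimated, the significance of the indirect effect a·b can be assessed with a Sobel-type statistic, as in the next step; a minimal sketch with hypothetical coefficients:

```python
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Sobel z for the indirect effect a*b of the Baron & Kenny paths."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Hypothetical paths: IA -> family support (a, negative), family support -> smoking (b, negative)
z, p = sobel_test(a=-0.30, se_a=0.05, b=-0.16, se_b=0.05)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")  # indirect effect a*b > 0: more IA, more smoking
```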
The Sobel test was performed to calculate the significance of the mediation, and the proportion of the mediation was calculated based on the procedure set out by VanderWeele et al. [43].
Descriptive Statistics
Slightly less than half of the participants were in 7th grade (48.0%), and a similar proportion of the participants were female (47.6%). Respectively, 55.4% and 49.2% of the participants reported that their father and mother had a senior secondary school level of education or above. About half (47.0%) reported that their family financial situation was very good. In terms of academic-related variables, one-third (34.2%) reported that they had an upper academic performance, and 80.6% perceived that they had a moderate level of study pressure or above. Most of the participants (92.2%) were social networking users, and 6.4% and 40.5% of the participants scored above the cut-offs for IA and probable depression, respectively. A total of 184 (3.6%) participants reported that they had smoked in the past month; among them, 18.5% smoked three cigarettes or more per day (Table 1). The mediation hypothesis was then tested by entering each of the two potential mediators (i.e., family support and probable depression) individually into the regression model together with the IA/SNA scale score (Models 2 and 3 for IA, and Models 5 and 6 for SNA, Table 4). Each of these models contained a single mediator plus the IA/SNA scale score and was compared against the model with the IA/SNA scale score alone (Model 1 for IA and Model 4 for SNA, Table 4). The odds ratio of the IA/SNA scale score diminished though remained significant. For the model of the association between SNA and smoking, the odds ratio of probable depression on smoking became marginally significant (Model 6, Table 4). Results of the Sobel test revealed that the mediation effects of family support and probable depression on the association between the IA scale score and smoking, and the mediation effect of family support on the association between the SNA scale score and smoking, were significant (p < 0.05, proportion of mediation ranging from 11% to 43%, Table 5), while the mediation effect of probable depression on the association between the SNA scale score and smoking was marginally significant (p = 0.07, proportion of mediation 36%, Table 5).
Table 3. Association between problematic Internet use (PIU), family support, and probable depression. Notes: * p < 0.05, ** p < 0.01, *** p < 0.001; †: odds ratios obtained by two-level multilevel logistic regression (level 1: student; level 2: school), adjusted for gender, family financial situation, and academic performance; a: regression coefficient between the independent variable (i.e., IA, Z-score of SNA) and the mediator (i.e., Z-score of family support, probable depression); b: coefficient between the mediator and the dependent variable (i.e., smoking) when both mediator and independent variable were included in the same model; c': coefficient between the independent variable and the dependent variable when both were included in the same model; SEa, SEb: standard errors of the corresponding coefficients.
Discussion
Both PIU and smoking are important public health concerns among adolescents. The present study examined the association between PIU and smoking, and elucidated the mechanism underlying such an association. It is first important to point out that 3.6% of our sampled students had smoked over the past month.
The prevalence was similar to those reported among junior secondary school students in China [5]. Our findings show that PIU is a potentially significant risk factor for smoking. The results corroborate previous studies showing a significant association between IA and smoking [26,27] and extend the knowledge that smoking is also associated with SNA. Addiction to the Internet may interfere with normal, adaptive functioning, leading to engagement in risky behaviors. It is important to note that nearly all participants in the present study were social networking users and about 6.5% of them had IA. The figures are alarming given that the sampled students are at a very young age. The findings highlight the importance of investigating the effect of addiction to a specific Internet platform (i.e., social networking) and the need to design interventions to reduce PIU, which might in turn reduce the risk of smoking among adolescents. The present study also found that 40.5% of the sample scored above the cut-off for probable depression. Using the same scale (CES-D) with the same cut-off, this prevalence of probable depression was lower than those reported among secondary school students in Hong Kong (53.2% among males and 62.1% among females) [28]. In the present study, probable depression was also shown to be a potentially significant risk factor for smoking. The findings support the literature that depression is associated with an increased frequency of smoking [44][45][46] and that depressive symptomatology is more common among smokers [45]. The self-medication hypothesis, whereby individuals with depression might use smoking as an emotion regulation strategy to cope with their depressive mood or reduce their negative affect, might explain such an association [47]. Alternatively, depressive symptomatology may leave adolescents more vulnerable to peer smoking influences, which increases their chance of smoking [47,48]. The findings suggest that improving adolescents' mental state may be a useful strategy for adolescent smoking cessation. To effectively combat adolescent smoking, it is important to promote protective factors and, at the same time, reduce the risk factors associated with smoking. Our findings revealed that family support was a potential protective factor against smoking, results which are consistent with the extant literature [49,50]. Adolescence represents an important period when young people strive for independence in order to establish their self-identity. During this period, family remains an important source of support and influence on their development. Support from parents is important and consequential for coping with stressors and establishing positive personal development. Adolescents with higher levels of parental support might have better problem-solving skills, such that they would be less likely to turn to maladaptive behaviors (such as smoking) when they encounter adversity. Currently there is a dearth of studies looking at the mechanism between PIU and smoking. Examining the association between PIU and smoking and its underlying mechanism helps elucidate specific factors for PIU and smoking. Such information would be useful for health care professionals in designing appropriate interventions to reduce smoking among adolescents. The findings suggest that PIU was associated with a lower level of family support and higher levels of depressive symptomatology, which in turn were associated with a higher likelihood of smoking.
Adolescents with PIU might prefer to turn to the Internet for communication, which ultimately leads to decreased time spent in face-to-face interactions, a decreased level of support from family, an increased level of depressive symptoms, and subsequently an increased chance of smoking. PIU might also weaken the protective effect of family support, which increases one's vulnerability to smoking. Interventions for smoking should not only reduce PIU but also promote family support and reduce depressive symptomatology in order to lower the potential negative impact of PIU on smoking.
Implications for Practice
Given the prevalence of PIU and its potential positive association with smoking, evidence-based interventions to reduce smoking are warranted. The findings of the present study suggest that reducing PIU could be a useful way to combat the adolescent smoking problem. A few evidence-based interventions to reduce PIU among adolescents have been reported in the literature. For example, it has been shown that school-based group counselling using cognitive behavioral therapy (CBT) was effective in reducing IA [51]. Other studies have reported that reality therapy counselling [52], reality therapy combined with mindfulness meditation [53], and motivational interviewing [54,55] were effective in reducing PIU. The findings also suggest that reducing participants' depressive symptoms may be a useful strategy for reducing smoking. Indeed, evidence has suggested that psychological interventions are effective in reducing smoking [56]. To develop effective interventions for reducing smoking, it is important to enhance protective factors of smoking, besides reducing risk factors. The findings of the present study suggest that interventions to increase family support may have the potential to reduce smoking. There is a need for health care professionals to understand the context of family support and the ways in which adolescents obtain support from the family. Family-based approaches that focus on improving the parent-child relationship and increasing communication and understanding among family members can be a direction for increasing family support for adolescents. Review studies have suggested that family-based interventions are effective in reducing the onset of smoking among children and adolescents [57].
Limitations of the Study
There are several limitations of the study that should be noted. First, the study is limited by its cross-sectional nature; therefore, the associations between PIU, depressive symptomatology, lower levels of family support, and smoking are only suggestive. It is also plausible that smoking leads to depressive symptomatology and lower levels of family support, which in turn lead to PIU. Longitudinal studies are warranted to elucidate the prospective relationships between the variables. Second, all data were self-reported, so the levels of IA and smoking might have been underestimated. Third, due to the lack of an established cut-off point for SNA, the extent of SNA in the sample was unclear. Fourth, the association between PIU and smoking could have been mediated by other individual factors, such as individual differences in circadian rhythm [58]. Future studies should explore other potential mediators of such an association. Fifth, written consent was not obtained from the students, and their return of the questionnaire implied informed consent.
We were advised by the schools that written consent was not necessary, but that students needed to be briefed very thoroughly and participation had to be completely voluntary. We took careful steps to make sure that a very detailed briefing was provided to ensure confidentiality, anonymity, and voluntariness. Sixth, the internal reliability of the YDQ was low in the present study. We were aware that other validated scales for IA (e.g., the Internet addiction test, IAT [59]) were available, but due to concern about the length of the questionnaire, the YDQ was chosen over other validated scales. Finally, the current sample was collected from nine secondary schools in Guangzhou, so generalization to other secondary school students might be limited.
Conclusions
To conclude, the findings of the present study call for a need to reduce PIU and smoking among junior secondary school students in China and provide insights into the variables that could be targeted when designing anti-smoking interventions for this population. In particular, the findings show that PIU was associated with smoking among junior secondary school students in China, and such associations were mediated by depressive symptomatology and lower levels of family support. The results provide important implications that interventions to reduce smoking among the current sample should aim not only to reduce PIU, but also to reduce depression and promote family support.
Funding: This study was supported in part by Academic Activities in the Chinese University of Hong Kong; it was also partially supported by the National Science Foundation of China (No. 81373021). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors would like to give great appreciation to all participants and schools. Conflicts of Interest: The authors have no conflicts of interest to report.
Detection of Body Behavior Characteristics in Sports Training Based on Grey Relational Model
In order to deal with the problems of low detection accuracy and efficiency in traditional methods for detecting body behavior characteristics in sports training, this paper proposes a new detection method based on the grey correlation model. In this method, the foreground binary image is first obtained by the interframe difference method, and the action information and contour information are extracted. Then, the behavior feature descriptor is obtained. After the description of the body behavior characteristics in sports training is completed, the grey correlation coefficient is calculated, and the complete observation equation of the time delay estimation of the image grey value under a complex sports training background is established to complete the detection of body behavior characteristics. The experimental results show that, compared with traditional methods, the detection accuracy and efficiency of the proposed method remain consistently high, indicating strong practical application performance. In fact, grey analysis does not attempt to find the best solution but does provide techniques for determining a good solution.
Introduction
In recent years, computing has made astonishing progress, and people have begun to consider how to apply computers to the intelligent processing of practical problems in daily life, training computers to work more like a real human brain [1]. In today's computer field, the detection and recognition of body behavior in sports training videos has become an important research direction, with important applications in human motion analysis [2]. Applied behavior analysis is used in sports and athletic training to teach and reinforce skills used in training and competition. Behavioral coaching has been used in sports from football to gymnastics to swimming to improve athlete training regimes, such as enforcing a healthy diet and regular exercise programs, and to boost the performance of particular athletic skills, such as maintaining proper body form. However, due to the nonrigidity of human motion and the complexity of video backgrounds, this remains a challenging research direction [3]. As a science of understanding and using information, information science provides a new way of thinking about, and a new method for, the detection and recognition of body behavior in sports training videos. Meanwhile, intelligent computing research aims at bringing intelligence, reasoning, perception, information gathering, and analysis to computer systems [4][5][6]. In sports training video images, how to obtain the body information in the image through computing technology, how to derive related behaviors based on this information, and how to recognize the sports training target and related behaviors in the video have become difficult points in this field [7][8][9]. In [10], a feature-descriptor-based method for detecting body behavior characteristics in sports training is proposed, which extracts the optical flow field information in the sports training video and calculates the interframe acceleration optical flow.
The acceleration information in a space-time block is analyzed by histogram statistics, and the histogram features of all space-time blocks in several frames are spliced to obtain the acceleration descriptor of body motion, so as to describe the characteristics of body behavior in sports training. Reference [11] puts forward a method for detecting body behavior characteristics in sports training based on an infrared array sensor. The infrared sensor designed in that paper is small in size, is easy to install, is stable in any environment, and can collect low-resolution information. Based on the information collected by the sensor, the k-nearest neighbor algorithm is used to detect body behavior. Reference [12] puts forward a method for detecting body behavior characteristics in sports training based on spatial clustering. This method uses a spatial clustering algorithm to cluster the coordinate data of sports personnel into different clusters and carries out separate data processing for each cluster. Finally, a machine learning method is used to detect the characteristics of body behavior. Reference [13] develops an algorithm to detect and classify goalkeeper training exercises using a wearable inertial sensor attached to a goalkeeper glove. Their approach first detects the exercises using an event detection algorithm based on a high-pass filter, a peak detector, and Dynamic Time Warping to detect and eliminate irrelevant motion instances. Then, it extracts a set of statistical and heuristic features to describe the different exercises and trains a machine learning classifier. Reference [14] proposes a wearable flow-MIMU human motion capture device by incorporating a microflow sensor with a micro inertial measurement unit. Motion velocity is detected by the microflow sensor and utilized to estimate the motion acceleration. The gravity accelerations are extracted by eliminating the motion accelerations from the accelerometer outputs. Finally, posture estimation is implemented using a tailor-designed Kalman-based data fusion of the gyroscope outputs and the extracted gravity accelerations. The flow-MIMU device with wireless communication is designed like a watch to be wearable. A method based on 2D skeletons and two-branch stacked Recurrent Neural Networks (RNNs) reconstructs skeletons starting from RGB video streams, therefore allowing the use of the approach in both indoor and outdoor environments [15]. Reference [16] proposes a novel Graph-Based Object Semantic Refinement based on Bi-GRU to extract multilevel semantic features for visual detection, which uses convolutional neural networks to extract visual features from images and collaborates with a semantic feature model to achieve better object recognition results. Although the above methods can complete the detection of body behavior characteristics in sports training, they suffer from low detection accuracy and detection efficiency. To overcome these problems, a new detection method for body behavior characteristics in sports training based on the grey correlation model is proposed in this paper. A grey correlation model describes a system in which part of the information is known and part of the information is unknown. Grey systems give a variety of available solutions.
Grey analysis does not attempt to find the best solution but does provide techniques for determining a good solution, an appropriate solution for real-world problems. Furthermore, compared with traditional methods, the detection accuracy and efficiency of the proposed method remain consistently high, indicating strong practical application performance. In this method, the foreground binary image is first obtained by the interframe difference method, and the action information and contour information are extracted. Then, the behavior feature descriptor is obtained. After the description of the body behavior characteristics in sports training is completed, the grey correlation coefficient is calculated, and the complete observation equation of the time delay estimation of the image grey value under a complex sports training background is established to complete the detection of body behavior characteristics. The contributions of this paper can be described as follows: (1) This paper proposes a method that can detect body behavior characteristics in sports training. The method takes advantage of the grey relational model, and its detection accuracy and efficiency remain consistently high. (2) The proposed method can detect body behavior characteristics, which is always very important but very difficult. The rest of this paper is organized as follows. The framework and technical details of our proposed system are described in Section 2. In Section 3, we present extensive experimental results to demonstrate the effectiveness of the proposed model. Finally, we conclude our work in Section 4.
Detection of Body Behavior Characteristics in Sports Training Based on Grey Relational Model
Scientific and reasonable sports training is based on the feedback of technical monitoring indicators, so as to regulate the intensity of sports training and related actions and realize a reasonable sports training mode. The realization of this technology lies in the detection of human body behavior characteristics [17]. However, traditional methods for detecting body behavior characteristics in sports training cannot feed back the athletes' key techniques and ranges of action in real time and often rely on experience-based discrimination to analyze actions, which is neither accurate nor real-time. In addition, the traditional body behavior feature detection workflow is manually interpreted, which seriously affects the feedback speed. Moreover, the accuracy and technicality of manual recognition are not high, and the workload is huge; for busy coaches, the operability is poor [18]. Therefore, in this research, the grey correlation model is applied to body behavior characteristics in sports training, and grey correlation theory is used to realize accurate analysis of the training images, so as to detect body behavior characteristics more effectively.
Body Behavior Characteristic Model of Sports Training
The body behavior of sports training includes not only the static spatial information of the sports training target (such as posture), but also its dynamic information (such as limb or global motion information). The construction of a body behavior feature model of sports training can avoid the singularity problem and improve the accuracy of body behavior detection [19]. The body behavior of sports training includes action information and contour information.
The foreground binary image is obtained by the interframe difference method, and all the foreground binary images in the sports training video are merged to obtain the motion energy map of the video [20]. Where $D(x, y, t)$ represents the binary foreground image obtained by differencing frame $t$ and frame $t-1$, the motion energy map of the video over $\tau$ frames is

$$E_\tau(x, y, t) = \bigcup_{i=0}^{\tau-1} D(x, y, t-i). \tag{1}$$

Similarly, the motion history map also requires the frame-difference binary images; it represents the motion information in the foreground image while weighting recent frames more heavily, thereby expressing the historical information of the training motion:

$$H_\tau(x, y, t) = \begin{cases} \tau, & D(x, y, t) = 1, \\ \max\bigl(0,\ H_\tau(x, y, t-1) - 1\bigr), & \text{otherwise.} \end{cases}$$

According to the motion energy map constructed above, the R transform is used to describe the body behavior information in sports training [21]. The R transform is an improvement on the Radon transform: it is invariant to translation and rotation and behaves predictably under scaling, which makes it better suited to describing body behavior in sports training. With the Radon transform $T_f(\rho, \theta) = \iint f(x, y)\,\delta(x\cos\theta + y\sin\theta - \rho)\,dx\,dy$, the R transform is defined as

$$R_f(\theta) = \int_{-\infty}^{\infty} T_f^2(\rho, \theta)\, d\rho.$$

The R transform has translation invariance: translating the image $f(x, y)$ to $f(x - x_0, y - y_0)$ only shifts the Radon transform along $\rho$, so

$$R_{f'}(\theta) = \int_{-\infty}^{\infty} T_f^2(\rho - x_0\cos\theta - y_0\sin\theta,\ \theta)\, d\rho = R_f(\theta),$$

from which it can be seen that image translation does not change the result of the R transform. Similarly, the R transform has rotational invariance up to a cyclic shift: rotating the image by an angle $\theta_0$ gives

$$R_{f_{\theta_0}}(\theta) = R_f(\theta + \theta_0).$$

From this it can be seen that $R$ is periodic with period $\pi$, so the body region can be sufficiently described using a 180-dimensional vector [22]. When the image is scaled by a factor $\alpha$, i.e., $f(\alpha x, \alpha y)$, one obtains

$$R_{f_\alpha}(\theta) = \frac{1}{\alpha^3} R_f(\theta),$$

so the amplitude of the R transform changes under scaling and it needs to be normalized or standardized with respect to the image. The R transformation definition and schematic are shown in Figure 1. In order to realize this normalization, the Fourier transform is used. In Fourier descriptors, the overall contour of the image determines the low-frequency components of the descriptor, while the detailed information of the image determines the high-frequency components. Therefore, when describing the body contour information, both the low-frequency and high-frequency components need to be taken into account [23]. Before the Fourier transform, the image is mapped from the space-time domain to the frequency domain; that is, each point $(x_i, y_i)$ on the planar contour line in the XY domain is mapped, in order, to the complex plane, with the horizontal axis as the real axis and the vertical axis as the imaginary axis. The complex coordinate of each point on the contour line is defined as

$$z_i = (x_i - x_c) + j\,(y_i - y_c),$$

where $(x_c, y_c)$ is the centroid of the contour. The Fourier transform is performed on the set of contour points:

$$Z_k = \frac{1}{N} \sum_{i=0}^{N-1} z_i\, e^{-j 2\pi k i / N},$$

where $N$ represents the number of points on the outline. The characteristic description of the body behavior after normalization is then obtained [24]; the normalization formula is

$$f_k = \frac{|Z_k|}{|Z_1|}.$$

The outline of body behavior is shown in Figure 2. According to this figure, the marked points represent the characteristic points, which together form the outline of body behavior; when the body behavior changes, the outline changes too.
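The contour descriptor above is easy to realize with numpy; a minimal sketch, assuming a closed (N, 2) contour array (the test contour here is hypothetical):

```python
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    """Translation- and scale-insensitive Fourier descriptor of a closed contour.

    contour_xy: (N, 2) array of (x, y) boundary points, ordered along the contour.
    """
    xc, yc = contour_xy.mean(axis=0)                             # contour centroid
    z = (contour_xy[:, 0] - xc) + 1j * (contour_xy[:, 1] - yc)   # complex-plane contour
    Z = np.fft.fft(z) / len(z)                                   # Fourier coefficients Z_k
    mags = np.abs(Z[1:n_coeffs + 1])
    return mags / mags[0]                                        # normalize by |Z_1|

# Hypothetical contour: a slightly noisy circle
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1) + np.random.normal(0, 0.01, (100, 2))
print(fourier_descriptor(contour)[:4])  # first entry is 1.0 by construction
```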
Generally speaking, the more characteristic points are used, the higher the performance of the algorithm will be.
Detection of Body Behavior Characteristics in Sports Training
After the description of the body behavior characteristics of sports training, the grey correlation analysis method is used to detect the body behavior characteristics of sports training. Grey relational analysis is an uncertainty theory founded by Professor Deng Julong of China in 1982. Its research content is an uncertainty method for "small sample, poor information" problems in which "some information is clear, some information is unknown." It realizes accurate description and cognition of the system through the generation and development of the known "partial" information. Grey correlation analysis mainly studies objects with "explicit denotation, unclear connotation" [25]. Grey relational theory mainly includes grey relational analysis, modeling, prediction, evaluation, decision-making, control, and system optimization. Grey relational analysis theory has been widely used in industry, agriculture, social economy, energy, transportation, and many other fields and has successfully solved a large number of practical problems. Compared with the high dimensionality and huge amount of information contained in an image, the training body behavior image is a "small sample," so the detection of body behavior features is regarded as a "small sample" problem. Therefore, the application of grey relational analysis theory to the detection of body behavior characteristics is theoretically feasible. The specific research process is as follows [26].

Let $X_0 = (x_0(1), x_0(2), \ldots, x_0(n))$ be the reference behavior sequence and $X_i = (x_i(1), x_i(2), \ldots, x_i(n))$, $i = 1, 2, \ldots, m$, the comparison sequences, and define

$$c(X_0, X_i) = \frac{1}{n} \sum_{k=1}^{n} c(x_0(k), x_i(k)),$$

satisfying:
(1) Normativity: $0 < c(X_0, X_i) \le 1$, with $c(X_0, X_i) = 1 \Leftrightarrow X_0 = X_i$
(2) Integrity: for $X_i, X_j \in X = \{X_s \mid s = 0, 1, 2, \ldots, m;\ m \ge 2\}$, in general $c(X_i, X_j) \ne c(X_j, X_i)$, $i \ne j$
(3) Even symmetry: $c(X_i, X_j) = c(X_j, X_i) \Leftrightarrow X = \{X_i, X_j\}$
(4) Proximity: the smaller $|x_0(k) - x_i(k)|$ is, the larger $c(x_0(k), x_i(k))$ is

Then $c(X_0, X_i)$ is called the grey correlation degree of $X_i$ and $X_0$. Normativity, integrity, even symmetry, and proximity are the four axioms of grey relation. The real number $c(x_0(k), x_i(k))$ is the grey correlation coefficient between $X_i$ and $X_0$,

$$c(x_0(k), x_i(k)) = \frac{\min_i \min_k |x_0(k) - x_i(k)| + \xi \max_i \max_k |x_0(k) - x_i(k)|}{|x_0(k) - x_i(k)| + \xi \max_i \max_k |x_0(k) - x_i(k)|},$$

where the resolution coefficient $\xi \in (0, 1]$. That $c(X_0, X_i) \in (0, 1]$ indicates that any two behavior sequences cannot be strictly unrelated. The calculation steps of the grey correlation degree are as follows:

(1) Find the initial-value image or mean-value image of each sequence. Let $X_i = (x_i(1), x_i(2), \ldots, x_i(n))$ be the behavior sequence of factor $X_i$ and $D_1$, $D_2$ be sequence operators; the initial-value image is $X_i D_1 = (x_i(1)/x_i(1), x_i(2)/x_i(1), \ldots, x_i(n)/x_i(1))$, and the mean-value image is $X_i D_2 = (x_i(1)/\bar{x}_i, x_i(2)/\bar{x}_i, \ldots, x_i(n)/\bar{x}_i)$ with $\bar{x}_i = \frac{1}{n}\sum_{k=1}^{n} x_i(k)$.
(2) Find the difference sequences $\Delta_i(k) = |x_0(k) - x_i(k)|$.
(3) Find the two-level maximum and minimum differences, $\Delta_{\max} = \max_i \max_k \Delta_i(k)$ and $\Delta_{\min} = \min_i \min_k \Delta_i(k)$.
(4) Calculate the grey correlation coefficients $c(x_0(k), x_i(k))$ from the formula above.
(5) Calculate the correlation degree $c(X_0, X_i) = \frac{1}{n} \sum_{k=1}^{n} c(x_0(k), x_i(k))$.

On the basis of the above processing, according to the constructed body behavior characteristics of sports training, the correlation degree results are calculated and grey correlation analysis is carried out [27]. Based on the grey-correlation quantum evolution and particle filter algorithm, sports training tracking is performed. Firstly, it is judged whether the grey level range after greying meets the requirements for effectively displaying the behavior characteristics of the training body, and the selection window is 3 × 3.
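Before this machinery is applied to image pixels below, the five steps above translate almost line-for-line into numpy; a minimal sketch with hypothetical sequences and the common choice $\xi = 0.5$:

```python
import numpy as np

def grey_relational_degree(reference, comparisons, xi=0.5):
    """Deng's grey relational degree of each comparison sequence to the reference.

    reference: (n,) array; comparisons: (m, n) array; xi: resolution coefficient in (0, 1].
    Sequences are normalized by their initial values (initial-value operator D1).
    """
    x0 = reference / reference[0]                        # step 1: initial-value images
    xs = comparisons / comparisons[:, :1]
    delta = np.abs(xs - x0)                              # step 2: difference sequences
    dmin, dmax = delta.min(), delta.max()                # step 3: two-level min / max
    coeff = (dmin + xi * dmax) / (delta + xi * dmax)     # step 4: grey relational coefficients
    return coeff.mean(axis=1)                            # step 5: degree per sequence

ref = np.array([1.0, 1.2, 1.4, 1.6])
cmp_ = np.array([[2.0, 2.4, 2.8, 3.1],   # nearly proportional to the reference
                 [3.0, 2.0, 4.0, 1.0]])  # dissimilar
print(grey_relational_degree(ref, cmp_))  # first degree close to 1, second much lower
```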
(15) rough grey correlation analysis, finally generated greyscale histogram binary pixel features where F i,j is the value of greyscale histogram binary pixel features. According to the torque coefficient of each body shape, different thresholds ω are obtained. When the pixel value is lower than (i, j), the dilution value of the body motion sequence is expressed as containing noise. On the contrary, the point does not contain noise. e machine feature driven key frame feature under the effect of grey correlation is obtained; the smooth output of the image is as follows: In the above formula, n � 1, 2, . . . , T represents the number of iteration steps, T represents the total number of iterations, u (n) (x, y) represents the pixel value, and δ represents the grey correlation update speed. rough the difference between the current frame and the binary background image, the complete observation equation of time delay estimation of image grey value under complex motion training background is established: Among them, When k � 0, the motion component in the initial state is obtained by solving s(0), which represents the characteristics of the body action state in the sports training scene and realizes the detection of the body behavior characteristics of the sports personnel. Experimental Verification In order to verify the practical application effect of the proposed method based on grey correlation model, the simulation experiment is carried out. Before the beginning of the experiment, we need to set the experimental samples and experimental scheme. Experimental Samples and Schemes. Experimental data: MySQL database is a more common database that selects 400 motion training images in the database, and the size of each image is 92 × 112. e body behavior characteristics mainly include running, squatting, biblays, sit-ups, push-ups, and more. MySQL database is currently using the most widely used standard database, containing a lot of comparison results. Experimental scheme: taking the detection accuracy and detection efficiency of body behavior features as experimental comparison indexes, the method in this paper is compared with the method based on infrared array sensor and the method based on spatial clustering. Body behavior feature detection accuracy: feature detection accuracy refers to the degree of consistency between the detection results of different methods and the actual body behavior feature detection results. e higher the detection accuracy is, the better the detection performance of the method is. Detection efficiency of body behavior features: detection efficiency refers to the detection time consumed when different methods detect the same number of images. e shorter the detection time, the higher the detection efficiency. Experiments in this paper are carried out using one GPU (GeForce GTX 1050 ti) and an Intel CORE i7 with 16 GB RAM memory system. Comparison of Detection Accuracy of Body Behavior Characteristics. Due to the rich movement of sports training, experimental samples are set to include a variety of body behavior, and different methods are used to detect the characteristics of body behavior. e detection results of different methods are compared with the actual results to verify the detection accuracy of different methods, containing a deep learning-based method. Specifically, the infrared array sensor based method is a traditional type of method for body behavior detection, which typically relies on the speed of hardware and the improvement of the algorithm. 
This kind of hardware-based method requires some manual assistance to detect different types of body behaviors. Thus, the infrared array sensor based method is also used to detect different body behaviors, and its detection accuracy and efficiency are summarized as experimental results. The clustering-based method is also a traditional type of method for body behavior detection; technically, the input space is clustered by the clustering method, and the clustering results are then classified into different body behaviors based on the classification criterion and experience. Here, we choose the spatial clustering method as one baseline, which aims to partition spatial data into a series of meaningful subclasses, called spatial clusters, such that spatial objects in the same cluster are similar to each other and dissimilar to those in different clusters. We choose the bidirectional recurrent convolutional network as the deep learning-based baseline method, which is a two-branch stacked LSTM-RNNs model [15]; it restricts the receptive field of the original full connection to a patch rather than the whole frame, which can capture the temporal change of visual details. Meanwhile, it replaces all the full connections with weight-sharing convolutional ones, which largely reduces the computational cost. The loss function is the classification and regression loss. The optimizer chosen in the experiment is the commonly used Adam optimizer. The comparison results of the detection accuracy of the three methods are shown in Figure 3. The comparison of the detection accuracy of the body behavior characteristics shown in Figure 3 shows that the detection accuracy of this method keeps rising over the course of many comparative experiments and finally stabilizes at 98%. The detection accuracy of the infrared array sensor based method increases considerably, but its final detection accuracy is only 73%. Similarly, the maximum detection accuracy of the spatial clustering based method is not more than 80%. Therefore, the above experimental results show that the proposed method can effectively improve the accuracy of detecting body behavior characteristics. Even compared with the deep learning-based method, the proposed method still shows its superiority. Detection Efficiency of Body Behavior Characteristics. Because the process of sports training generally lasts half an hour or more, the sports training data are very large, so it is necessary to verify the detection efficiency of the detection method. Formally, the detection efficiency is defined as $I_i = b_{\mathrm{detection},i} / b_{\mathrm{groundtruth},i}$, where $I_i$ denotes the detection efficiency of the detection method at the $i$-th time, $b_{\mathrm{detection},i}$ denotes the number of behaviors detected by the method at the $i$-th time, and $b_{\mathrm{groundtruth},i}$ denotes the number of ground-truth behaviors at the $i$-th time. According to the above analysis, the detection efficiency is selected as the experimental index, and the proposed method is compared with the infrared array sensor method and the spatial clustering method. Furthermore, a deep learning-based method is also compared to verify the superiority of the proposed method. The comparison results of the detection efficiency of the three methods are shown in Figure 4. By observing the comparison results of the detection efficiency of body behavior characteristics shown in Figure 4, it can be seen that the detection efficiency of this method is always higher than that of the two comparison methods over the experimental time of 30 minutes.
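A small sketch of the two comparison indexes as just defined; the per-trial counts and labels below are hypothetical placeholder data, not the paper's measurements.

```python
def detection_efficiency(b_detection, b_groundtruth):
    """I_i = b_detection,i / b_groundtruth,i for each evaluation time i."""
    return [d / g for d, g in zip(b_detection, b_groundtruth)]

def detection_accuracy(predicted, actual):
    """Fraction of detections consistent with the actual behavior labels."""
    agree = sum(p == a for p, a in zip(predicted, actual))
    return agree / len(actual)

# Hypothetical counts for five evaluation times during a 30-minute session.
print(detection_efficiency([19, 20, 18, 20, 20], [20, 20, 20, 20, 20]))
print(detection_accuracy(["run", "squat", "sit-up"], ["run", "squat", "push-up"]))
```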
The detection efficiency of this method is always above 95%, while the detection efficiency of the two comparison methods shows strong volatility. Importantly, the efficiency of the proposed method is better than that of the deep learning-based method. Therefore, this method can effectively improve the efficiency of body behavior feature detection. Conclusion In order to realize the effective detection of body behavior characteristics, a method of body behavior characteristic detection in sports training based on the grey relational model is proposed. The performance of the method is verified theoretically and experimentally. This method has high detection accuracy and efficiency in the detection of body behavior characteristics. Specifically, compared with the method based on the infrared array sensor, the detection accuracy is significantly improved, with the highest detection accuracy being 98%. Compared with the spatial clustering method, the detection efficiency is greatly improved and is always above 95%. Therefore, this fully shows that the proposed detection method based on the grey correlation model can better meet the requirements of body behavior feature detection in sports training. The experimental results show that the proposed method is better than the baselines in practical applications. Generally speaking, the algorithm is successful and meets the expected requirements of scene information collection in general football matches. However, there is still a lack of reasonable theoretical analysis of our method. In future work, we will discuss the models and methods theoretically. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The author declares that he has no conflicts of interest.
The Governance Dilemma and Direction of Rural Collective Economic Organization Legal Person The promulgation of the General Provisions of the Civil Law of the People's Republic of China officially established the status of rural collective economic organizations as special legal persons, and the Civil Code of the People's Republic of China adopted on May 28, 2020 also affirmed the status of rural collective economic organizations as legal persons. At present, there is a lack of national-level legislation on the specific governance content of rural collective economic organizations. Most local regulations directly stipulate the governance issues of rural collective economic organizations, resulting in formalism in the content of the regulations, inconsistent governance structures of rural collective economic organizations in the process of governance, and difficult issues such as how to improve governance and how to "separate politics from economics". In this regard, it is necessary to combine the particularity of collective economic organizations, pay attention to the procedural matters and content of the formulation of the charter, establish a "three-tier" governance structure model with internal and external supervision, and achieve the "separation of politics and economy" by separating the functions and personnel of rural collective economic organizations and village committees. After this separation, the direction of governance will be clarified, and the governance problems of rural collective economic organizations as legal persons will be solved. Introduction At present, the exploration of local legislation on the governance of rural collective economic organizations has gone ahead of national legislation. Many areas, represented by Heilongjiang Province and Guangdong Province, have explored the regulations, governance structure and asset management of rural collective economic organizations through the formulation of "Measures for the Organization and Management of Rural Collective Economic Organizations" and "Regulations on the Management of Rural Collective Assets". Although the General Provisions of the Civil Law of the People's Republic of China at the level of national legislation stipulates that rural collective economic organizations are established according to law and have the status of special legal persons, it lacks detailed provisions on their specific content. However, local normative documents are limited by their lower legislative level, and different regions have different provisions on the governance content of rural collective economic organizations, so higher-level legislation is still needed to provide special regulation of the governance content of rural collective economic organizations. The state clearly recognizes this problem: the "No. 1 Central Document" from 2017 to 2019 proposes a macro-strategy for the development of rural collective economic organizations, which indicates that the development of rural collective economic organizations is an inevitable trend and also reflects the importance of studying them. Therefore, it is necessary to explore the governance problems of rural collective economic organizations so as to find specific solutions. The Special Nature of Legal Person of Rural Collective Economic Organization Special legal person status is the status symbol of the rural collective economic organization.
Therefore, the primary goal of constructing the governance content of the rural collective economic organization is to clarify its particularity, thereby clarifying the difference between the rural collective economic organization and other legal persons and providing a specific direction for the design of its constitution and governance structure. However, the current legislation does not make clear what is special about the legal person of the rural collective economic organization, so it is still necessary to explore this special nature in order to clarify the specific direction for the development of rural collective economic organization governance. The Uniqueness of the Legal Person Type The legislation adopted a new legal person classification method, which stipulated the special legal person for the first time among the legal person types and endowed the rural collective economic organization with special legal person status. This classification method highlights the parallelism of the special legal person, the profit-making legal person and the non-profit legal person, and also clarifies the special nature of the rural collective economic organization as a special legal person. Some scholars, however, advocate abandoning the title of special legal person and tend to classify the rural collective economic organization as a profit-seeking legal person (Guo, 2019). This is obviously inconsistent with the General Provisions of the Civil Law of the People's Republic of China. The special legal person emerged to make up for the omissions of the profit-seeking legal person and the non-profit legal person; it is precisely because rural collective economic organizations can be classified as neither profit-seeking legal persons nor non-profit legal persons that the special legal person emerged at the right moment. On the one hand, there is a significant difference between the profit-seeking legal person and the rural collective economic organization: the purpose of the for-profit legal person is profit, whereas profit is not the only objective of the rural collective economic organization, which also takes into account a certain public service function and the function of asset management, with preventing the loss of collective assets and revitalizing collective assets among its important functions. On the other hand, there is also a significant difference between the non-profit legal person and the rural collective economic organization: the non-profit legal person does not have the purpose of making profits. The establishment of a non-profit legal person is mainly for public welfare; its members do not enjoy the distribution of profits, and all profits are invested in public welfare undertakings free of charge. Collective economic organizations, on the other hand, have the functions of both making profits and providing community public services. For example, each rural collective economic organization designs individual shares while designing collective shares, which not only satisfies the function of providing public services but also enables the income to be effectively controlled by individuals. Therefore, the special legal person is not just a name: the rural collective economic organization differs from the other two types of legal person in its legal person type, and we should accurately grasp the special nature of that type. Special Nature of Membership The composition of the members of rural collective economic organizations has the characteristics of a community.
The acquisition of membership is limited to the members of the collective economic organization, and non-members of the collective economic organization are not allowed to become shareholders, which also determines the prohibition on transferring member shares to outsiders. Moreover, there is no issue of inheritance in such membership: once the identity relationship of a collective member is lost, the identity of shareholder is lost as well. Identity is thus an important criterion for evaluation. The Special Nature of Voting Legal persons have independent subject status and need to go through a voting procedure when making decisions, and special legal persons are not excluded. Voting in rural collective economic organizations also adopts the "majority" method, but it differs from that of joint-stock companies and enterprise legal persons, which allocate voting rights in accordance with the amount of equity held: the shareholders of rural collective economic organizations exercise their voting rights following the principle of "one person, one vote", and the "Model Constitution of Rural Collective Economic Organizations (Trial)" printed and distributed by the Ministry of Agriculture and Rural Affairs of the People's Republic of China affirms the same allocation of voting rights. Rural land is not owned by individual villagers, and members' shareholder status is not obtained through capital contribution but is based on a specific identity. Therefore, the right to vote cannot be allocated according to the amount of land occupied. Exercising the right to vote according to the requirement of "one person, one vote" better reflects the special nature and fairness of collective economic organizations. Special Nature of Contributions by Members The membership of a member as a shareholder does not presuppose actual capital contribution. The current law of our country clearly stipulates the operation mode of the household contract responsibility system, which is still playing a positive role. With the continuous progress of rural property rights reform, more alternative forms of rural collective economic organization have emerged, such as stock cooperatives and cooperative organizations. Through the reform of rural property rights, members can enjoy shareholder status in rural collective economic organizations, but this shareholder status is obtained not through investment but through membership, and is characterized by being free of payment. Based on the characteristics of the household contract responsibility system, members obtain the contracting right to collective land free of charge. While ensuring that the ownership of rural land does not transfer, the collective members transfer the right of land management at the same time, so that the collective assets can be quantified into shares and distributed to members (Jiao, 2019). The shareholder's contribution is the conversion of the contracted land into shares, which is very different from the actual capital contribution of corporate shareholders. Formalism in the Constitution of Rural Collective Economic Organizations The constitution of the rural collective economic organization is the key to coordinating its governance, but its current formulation shows clear formalism. First, there is a lack of procedural provisions. The constitution is the criterion by which the rural collective economic organization engages in its activities; it is the key to the governance of the legal person.
Although local norms have made corresponding provisions on the subject of the constitution and the amendment procedure, the overall provisions are relatively crude, lacking dedicated chapter provisions on matters of constitutional amendment and ignoring provisions on meeting minutes and the principle of publicity. Not only do the constitutions of local rural collective economic organizations ignore the procedural provisions on formulation, amendment and publicity, but even the "Model Constitution of Rural Collective Economic Organizations (Trial)" issued by the Ministry of Agriculture and Rural Affairs of the People's Republic of China ignores such procedural provisions, stipulating only the subject of the constitution and its amendment within the organizational structure. The lack of procedural provisions in the articles of association means that, in reality, amendments to the articles of association are often divorced from procedural matters, such as meeting minutes lacking any record of the amendment, or the number of participants not reaching the required standard when the amendment is made. Lawsuits arising from this are also common. For example, in case No. 3331 of Zhejiang Hang Minzizhong (2014), the joint-stock economic cooperative failed to hold a meeting of shareholders' representatives and forged the minutes of the third meeting of shareholders' representatives, thereby forming a new constitution and finally denying the shareholder status of some villagers. These problems are caused by the absence of meeting minutes and other procedural matters, which obviously violates the amendment procedure of the bylaws and ignores their guiding role. Second, the constitution lacks specificity. Imperfect Governance Structure The governance structure is the essential organ of a legal person (Qu, 2018). Second, there is a lack of internal and external supervision mechanisms. In reality, the problem of insider control is prominent and the intervention of external supervision is lacking. Although many regions have set up governance structures and stipulated the composition of their personnel, they have not gotten rid of the closed nature of rural collective economic organizations. Therefore, people outside the collective economic organizations cannot participate in the governance and supervision of the legal person of the collective economic organization, which easily leads to control by an internal minority. Taking the management of collective assets as an example, although corresponding governance structures have been introduced in various regions in practice, many regions do not require hiring professional intermediary institutions for the verification of collective assets, which can lead to the encroachment of collective assets. The members of the supervisory organization are all members of the collective, and there are ties of affinity and blood relationship among the members of the collective, which makes the supervisory organization a mere decoration in practice. The lack of external supervision then easily leads to decision-making power being controlled by a few hands, which undoubtedly aggravates the operational difficulties of collective economic organizations as legal persons.
The Separation of Government and Economy Has Not Yet Been Achieved The demise of the people's commune marked the end of the mode of "integration of politics and economy". With the dissolution of the people's commune, its economic and political functions did not dissipate. The ideal state is that the economic functions of the people's commune are undertaken by the rural collective economic organization, while its administrative functions are undertaken by the village committee, so as to achieve the mode of "separation of politics and economy". Limited by the regionalism of our country's development, the construction of rural collective economic organizations in many areas is still not independent, and there is a great deal of confusion between villagers' autonomous organizations and rural collective economic organizations (Guan, 2019). So far, the separation of government and economy has not been realized, which is shown in practice as follows. First, conflicts exist between the functions of rural collective economic organizations and villagers' committees. The main conflict is that both have administrative functions over collective assets: the law confirms the functions of the village committee and the village collective economic organization in the management of collective land and endows them with legitimacy in the management of collective assets (Zhang, 2016). The Land Contract Law of the People's Republic of China states that both the villagers' committee and the village collective economic organization have the right to grant the collective land, but it distinguishes neither the relationship between the two nor their specific economic and administrative functions. Because legislation has failed to distinguish the administrative and economic functions of the villagers' committee and the rural collective economic organization in collective asset management, mutual interference between their functions arises in practice. Many local regulations stipulate that collective economic organizations should accept the supervision of the villagers' committee; the "Daguan District Rural Collective Economic Organization and Management Method (Trial)", for example, explicitly gives the villagers' committee the power to supervise rural collective economic organizations. Second, there is overlap between the staff of rural collective economic organizations and villagers' committees. "More than half of China's villages have not yet established independent rural collective economic organizations." (Guan, 2020) This is an issue left over from history in a specific period. The 1983 Notice on the "Separation of Political and Social Organizations and the Establishment of Township Governments" supported the practice of rural collective economic organizations and villagers' committees operating as one team with two bodies. Up to now, even though the conditions for the "separation of political and social sectors" have gradually matured, the form of "semi-separation of political and social sectors" still exists in many places, with overlapping staff among institutions, such as in Meitan County, Guizhou Province (Xia, 2018). The approach of "two brands, one team" fundamentally fails to achieve the "separation of politics and economy"; the crossover of personnel easily leads to confusion between two independent legal persons and is not conducive to realizing that separation.
The Governance Structure Emphasizes the Combination of Internal and External Supervision First, unify the governance structure mode. First of all, the construction of the governance structure should adopt the "three-tier" governance model of authority, executive agency and supervision agency. A supervision institution must not be absent from the governance structure: its absence would leave the exercise of power by the authority and executive agencies unsupervised, which easily leads to the abuse of power. It is necessary to adhere to the special nature of the rural collective economic organization when constructing the specific governance structure, but this does not mean that the corporate-style "three-tier" governance structure cannot be used for reference. The special characteristics of rural collective economic organizations can be reflected through the selection of personnel in the governance structure. For example, the selection procedures for personnel in the governance structure should be reasonably guided by the people's government at a higher level, and the members of the governance structure should be guaranteed to be members of the collective. Moreover, the "three-tier" governance structure model has been tested by companies and proven able to ensure their normal operation, while the rural collective economic organization and the company are both economic organizations in nature and have certain similarities. Secondly, the responsibilities and composition rules of the governance bodies should be clarified to achieve checks and balances. Good institutional design is the basis for the normal operation of a legal person. In order for the internal governance structure to reach a level of checks and balances, the separation of powers is the most appropriate approach. In terms of personnel composition, the personnel of the supervisory and executing agencies must not be mixed; in terms of functions and duties, the power of the authority must be clearly defined, and the exercise of supervisory power by the supervisory agency must be implemented. The executing agency must exercise specific management and execution functions under the supervision of the supervising agency, while at the same time being entrusted with the right to raise objections to the supervising agency for review, so as to achieve a balance of power. Second, pay attention to external supervision of the governance structure. Many local normative documents generally attach importance to the supervision of administrative organs over the operation and management of rural collective economic organizations, but generally ignore the supervisory role of specialized intermediary agencies. At present, local rural collective economic organizations generally set up supervision institutions, and it is clear that collective assets shall be managed by the executing agency and that specific management behaviors shall be supervised by the supervision agency. But relying only on internal supervision is not enough: because of the closed nature of rural collective economic organizations, it easily leads to the problem of a few people controlling collective assets.
It is important to emphasize the external supervision of administrative organs, but the supervision and guidance of people's governments at all levels should remain a matter of principle and must not interfere with the normal operation of rural collective economic organizations. In particular, rural collective economic organizations are required to have dedicated accounting accounts to ensure that collective property is not harmed. Given the concern for the collective's own rights and interests and the professional nature of preparing accounting accounts, collective assets are easily controlled by a few people when relying on internal supervision alone. Accounting firms are both professional and supervisory, and their involvement is conducive to guidance on and external supervision of the financial situation, so as to form a reasonable combination of internal and external supervision. After verification by the accounting firm, the results shall be publicized to all shareholders to ensure that shareholders' right to know is not infringed. "Separation of Government and Economy" at Different Levels The situation of "no division between government and economy" over-emphasizes the public service function of rural collective economic organizations, which is not conducive to realizing their market status. To achieve the goal of separating politics from economy, it is necessary to clarify the relationship and functions between collective economic organizations and other grass-roots organizations, and to fully reflect the economic functions of rural collective economic organizations. Therefore, we should take the initiative to create opportunities and separate the administrative functions and economic functions of grassroots organizations (Fang, 2018). First, clarify the boundary of functions. The villagers' committee and the rural collective economic organization are two different legal persons whose functions also differ, and their functional boundaries should be clarified. The villagers' committee is, in nature, a non-profit legal person among the special legal persons with a certain administrative color (Li, 2017), and its scope for engaging in civil activities is limited. The function of the mass autonomous legal person is to perform public services, while the function of the rural collective economic organization is more reflected in engaging in economic activities and transforming collective land into real assets. Even though the Organic Law of the Villagers' Committees states that villagers' committees have the right to manage collectively owned land according to law, collective land includes land for public welfare purposes as well as land for commercial purposes. Therefore, the economic function and the public service function can be delimited according to the different uses of land, and the economic function of the villagers' committee over collective assets can be separated out. First, in places where rural collective economic organizations are established, land for public welfare purposes should be managed by villagers' committees, so as to realize their public service functions. Second, land for profit-making purposes should be turned over to rural collective economic organizations for operation and management. In particular, economic functions can be realized by building workshops, contributing land as a factor of production to become a shareholder, and inviting public bidding, so as to further delimit the boundary of villagers' committees' economic functions over collective land. Second, solve the problem of overlapping personnel.
Specifically, to ensure the separation of the staff of the rural collective economic organization and the villagers' committee: local regulations explicitly prohibit overlapping staff between the board of directors and the board of supervisors, but generally ignore the issue of overlapping staff between the rural collective economic organization and the villagers' committee. The organizational forms of the villagers' committee and the rural collective economic organization are different; the former has more administrative color, while the latter represents an economic organization. Overlapping personnel between them and the practice of running a single team lean toward the "integration of politics and economy". To realize the "separation of politics and economy", the constitution should clearly stipulate that the management structures of the rural collective economic organization and the villagers' committee shall not overlap in personnel, and a personnel selection mechanism independent of the villagers' committee should be established, so as to realize the situation of "two brands, two groups of people". Third, clarify the status of rural collective economic organizations as economic subjects. It is necessary to make clear the essential attribute of the rural collective economic organization (Guan, 2018). In the aspect of property supervision, an independent asset supervision system should be established, supervised by the supervision institutions within the governance structure and supplemented by the verification of accounting institutions, so as to eliminate the phenomenon of "village accounts managed by the township". Grassroots political power should not hold the property of collective economic organizations in escrow; its guidance and supervision should be supervision in principle only. Specifically, it should provide guidance for the formulation of the constitution of the collective economic organization, supervise the macro-operation of the rural collective economic organization, and provide market information for its economic activities, but it should not affect the status of the rural collective economic organization as a market subject. Conclusion Regions are paying more and more attention to exploring the governance of rural collective economic organizations, but the different levels of development across the regions of our country and the lack of national legislation for rural collective economic organizations have led to certain flaws in local regulations, which undoubtedly pose a severe test for the governance of rural collective economic organizations. Therefore, it is necessary to grasp the special nature of collective economic organizations and embody that special nature in the constitution and governance structure of rural collective economic organizations, so as to distinguish rural collective economic organizations from other legal persons and from villagers' committees, and thereby realize the independent legal person status of rural collective economic organizations. Conflicts of Interest The author declares no conflicts of interest regarding the publication of this paper.
Deformation twinning mechanism in hexagonal-close-packed crystals The atomic structure of the $\{10\bar{1}2\}$ twin boundary (TB) from a deformed Mg-3Al-1Zn (AZ31) magnesium alloy was examined by using high-resolution transmission electron microscopy (HRTEM). By comparing the lattice structure of the TB with the previously established model, a kind of special atomic combination, here named primitive cells (PCs), was discovered at the TB. The PC-reorientation-induced mechanism of twinning in hexagonal-close-packed (HCP) crystals was hence verified. Meanwhile, the relationship between the misorientation of adjacent layers of PCs and the width of the TB is discussed. The verification of the mechanism clarifies the twinning mechanism in HCP crystals and opens up opportunities for further research. Results The picture recording of the atomic array around the TB of a $\{10\bar{1}2\}$ twin from the deformed AZ31 alloy was obtained by HRTEM detection (Fig. 1a). The image was divided into three parts consisting of the parent, twin, and TB. The orientations of the parent and the twin were symmetrical about the $\{10\bar{1}2\}$ twinning plane, but not the TB. The TB was not an imagined thin interface composed of single-layered atoms, but a large range of distorted lattice regions composed of multi-layered atoms. Around the TB, some atoms gathered together closely, making them distinguishable in the formation of some specific atomic combinations, as denoted by the diamonds in Fig. 1b,c. According to their geometric shape characteristics, these atomic combinations are identified as the proposed "PCs". To clarify that the PCs observed in the TB region in Fig. 1 are not artifacts, the corresponding Fast-Fourier-Transform (FFT) patterns of Fig. 1b,c (Fig. 1d,f) together with the FFT patterns of the neighboring parent and twin regions (Fig. 1a) were provided. The result indicates that the FFT patterns of the PCs are not simply a combination of the FFT patterns of the twin and parent regions. Some related literature can also verify the existence of the PCs in HCP twinning. Figure 2a shows the HRTEM image of a $\{10\bar{1}2\}$ TB in the Mg$_{97}$Zn$_1$Y$_2$ alloy [27]. Figure 2b is the enlarged view of the area enclosed by the box in Fig. 2a, where the reorientating PCs can be clearly observed, as indicated by the diamonds. Figure 2c is a schematic of HCP $\{10\bar{1}2\}$ twinning by the shuffle mechanism [28], where the PC can also be distinguished, as indicated by the added dotted-line parallelograms. Figure 2d is a diagram of $\{10\bar{1}2\}$ twinning nucleation in Mg obtained by atomistic simulations [29]. Around the TB, the PCs can also be marked off, as indicated by the diamonds. In summary, the appearance of PCs in HCP twinning is universal. Discussion According to the PC model, the PCs were considered to rotate as a whole to induce the migration of the TB. However, the specific mechanism depends on the structure of the TB. As mentioned above, the TB was composed of layers of PCs oriented between the parent and the twin. Although the total misorientation of PCs between the parent and the twin is fixed, the misorientation between adjacent layers of PCs depends on the width of the TB, namely the number of layers of PCs. The simplest TB structure is that in which only one layer of PCs serves as the TB, as indicated in Fig. 3.
The orientation of the PCs at the TB is between that of the parent and that of the twin. Figure 3(a-e) illustrates the process of the migration of the TB, where the reorientation of the PCs makes the TB sweep over the lattice and transform the parent into the twin. The border between adjacent layers of PCs presents a 'steps' shape due to the lattice distortion (Fig. 3b-f), which has been named a twinning dislocation elsewhere [4]. When the PCs in Fig. 3 rotate anticlockwise, the TB migrates along the arrows, namely, twinning occurs. As a requirement for the activation of PC rotation, the PCs must break through the lattice resistance sustained from the surrounding atoms. This resistance is closely linked to the misorientation θ between two adjacent layers of PCs (supposing that the misorientation is uniform). For a TB composed of a single layer of PCs, $\theta = \alpha/2$, where α denotes the total misorientation of PCs from parent to twin. When this equation is applied to $\{10\bar{1}2\}$ twinning in magnesium, α ≈ 15.9° and θ ≈ 8°. Often, cases vary from the above hypothesis, because the TB is usually composed of multi-layered PCs. Figure 4a presents an HRTEM image of a $\{10\bar{1}2\}$ TB containing three layers of PCs in the AZ31 alloy. Figure 4b is the schematic of the distribution of PC orientations corresponding to the box in Fig. 4a. The arrows denote the orientations of the PCs. As the arrows show, the PCs exhibit a strong gradual reorientating inclination from the parent to the twin. The five arrows form four misorientations, $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$, between every two adjacent layers (Fig. 4c). Thus, the total misorientation of PCs from parent to twin can be expressed as $\alpha = \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4$. If these reorientating PCs were well-distributed from parent to twin, the above equation changes to $\alpha = 4\theta$, where θ is the average misorientation between adjacent PCs. By extension, the value of θ for a TB containing n layers of PCs can be expressed as $\theta = \alpha/(n+1)$. Thus, when this equation is applied to the case seen in Fig. 4b (where α ≈ 15.9°, n = 3), θ ≈ 4°; and when it is applied to the TB with the width of $d_2$ that consisted of approx. 15 layers of PCs, θ ≈ 1° (Fig. 1a). Apparently, the resistance for twinning to break through decreases with an increasing number of layers of PCs. In other words, an increased number of layers of PCs can reduce the critical resolved shear stress (CRSS) required for twinning activation. These conclusions may help to explain or predict phenomena regarding TB movement. For example, for the TB shown in Fig. 1a, the width of the TB at the tip was larger than at the edge, which resulted in a more rapid growth along the longitudinal direction. The establishment of the PC model is also helpful for explaining the mechanism of twin nucleation. Since a twin is formed from a nucleus during TB migration from inside to outside, an inverse process can restore the original appearance of the nucleus. The verification of the PC mechanism opens the opportunity for further research relevant to twinning. Conclusions The atomic combinations discovered at the TB were identified as the proposed PCs in accordance with their characteristics, verifying the PC-induced mechanism. The twinning process was induced by the rotation of the PCs. To accomplish this, the PCs must overcome the resistance from the surrounding lattice, which is closely related to the CRSS. This is determined not by their total rotational angle from the parent to the twin, but rather by the misorientation between adjacent PCs.
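Since θ = α/(n + 1), the numbers quoted above can be reproduced directly. The short sketch below (the variable names are ours, not the paper's) evaluates the average adjacent-layer misorientation for the single-layer, three-layer, and approximately 15-layer boundaries discussed in the text.

```python
ALPHA = 15.9  # total PC misorientation (degrees) for {10-12} twinning in Mg

def adjacent_misorientation(n_layers, alpha=ALPHA):
    """theta = alpha / (n + 1): average misorientation between adjacent PC layers."""
    return alpha / (n_layers + 1)

for n in (1, 3, 15):
    print(f"n = {n:2d} layers of PCs -> theta = {adjacent_misorientation(n):.0f} deg")
# n = 1 -> 8, n = 3 -> 4, n = 15 -> 1 (degrees), matching the cases in the text
```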
Methods A cuboid sample was cut from a hot-rolled AZ31 sheet with dimensions of 30 × 30 × 22 mm³ along the rolling direction, transverse direction, and normal direction. The sample was compressed by about 7% at a strain rate of ~10⁻³ s⁻¹ at room temperature, with the loading direction parallel to the rolling direction. The HRTEM sample was prepared via low-temperature ion thinning. An FEI Tecnai F30-G2 electron microscope operated at 300 kV was used to carry out the HRTEM observations.
Effects of WeChat platform-based health management on health and self-management effectiveness of patients with severe chronic heart failure BACKGROUND Epidemiological studies have found that the prevalence of chronic heart failure in China is 0.9%, the number of people affected is more than 4 million, and the 5-year survival rate is even lower than that of malignant tumors. AIM To determine the impact of WeChat platform-based health management on the health and self-management efficacy of patients with severe chronic heart failure. METHODS A total of 120 patients suffering from chronic heart failure with cardiac function grade III-IV, under the classification of the New York Heart Association, who were admitted to our hospital from May 2017 to January 2020 were divided into two groups: A control group (with routine nursing intervention) and an observation group (with WeChat platform-based health management intervention). Changes in cardiac function, 6-min walking distance (6MWD), high-sensitivity cardiac troponin (hs-cTnT), and N-terminal pro B-type natriuretic peptide (NT-proBNP) were detected in both groups. The Self-Care Ability Scale (ESCA) score, Minnesota Living with Heart Failure Questionnaire score, and compliance score were used to evaluate the self-management ability, quality of life, and compliance of the two groups. During a follow-up period of 12 mo, the occurrence of cardiovascular adverse events in both groups was counted. RESULTS The left ventricular ejection fraction, stroke output, and 6MWD increased, and the hs-cTnT and NT-proBNP decreased in both groups, as compared to those before the intervention; further, cardiac function, the 6MWD, hs-cTnT, and NT-proBNP improved significantly in the observation group after the intervention (P < 0.05). The scores of self-care responsibility, self-concept, self-care skills, and self-care health knowledge increased in both groups compared with before the intervention, and the ESCA scores of the observation group improved significantly after the intervention (P < 0.05). The Minnesota heart failure quality of life (LiHFe) scores for physical restriction, disease symptoms, psychological emotion, social relations, and other items decreased compared with before the intervention, and the LiHFe scores of the observation group improved significantly compared to those of the control group (P < 0.05). With the intervention, the compliance scores for rational diet, regular medication, healthy behavior, and timely reexamination increased, and the compliance scores of the observation group improved significantly compared to those of the control group (P < 0.05). During the 12-mo follow-up, the incidence rates of acute myocardial infarction and cardiogenic rehospitalization in the observation group were lower than those of the control group, and the hospitalization time in the observation group was shorter than that of the control group, but there was no significant difference between the two groups (P > 0.05). CONCLUSION WeChat platform-based health management can improve the self-care ability and compliance of patients with severe chronic heart failure, improve cardiac function and related indexes, reduce the occurrence of cardiovascular adverse events, and help avoid rehospitalization. INTRODUCTION Chronic heart failure is the final stage of various cardiovascular diseases.
It is complex and involves multiple complications, a high case fatality rate, and a profoundly negative prognosis. Patients frequently need to be hospitalized, which may not only lead to deterioration of their condition but also add an economic burden on them, wasting medical resources. Therefore, maintaining a stable condition in chronic heart failure has become a key objective in clinical treatment [1]. However, worsening cardiac conditions and repeated hospitalizations are currently very common, given that there are no effective approaches to address health intervention after the discharge of patients and their poor self-management capabilities. Under the present conventional nursing model, interventions for patients outside the hospital consist of discharge guidance and telephonic interviews, and their impacts are barely satisfactory [2]. Continuing nursing care is an emerging nursing model that extends hospital care. It ensures that patients receive sustained and efficient care interventions and are able to solve health problems after they are discharged [3]. WeChat is a common, well-established real-time social application with high interactivity that is utilized frequently in the medical field [4]. In this study, we applied WeChat to continuing nursing care outside the hospital for severe patients with chronic heart failure and observed the impact of the WeChat platform-based health management approach on the health of the patients and the efficacy of their self-management. General information One hundred and twenty patients with chronic heart failure with cardiac function of grade III-IV, under the New York Heart Association (NYHA), who were admitted to our hospital from May 2017 to January 2020 were divided into two groups: A control group (with routine nursing intervention) and an observation group (with WeChat platform-based health management intervention). The inclusion criteria for the patients were as follows: (1) Meeting the criteria for chronic heart failure provided in the Chinese Guidelines for the Diagnosis and Treatment of Heart Failure; (2) being in the age group of 18-75 years; (3) having NYHA grade III-IV cardiac function; (4) having a good mastery of using WeChat and residing locally; (5) having an expected survival of 12 mo or more; and (6) providing their informed consent. The exclusion criteria were as follows: (1) Having abnormal function of the limbs; (2) suffering from valvular heart disease and/or cor pulmonale; (3) having a diagnosed mental disorder; (4) having severe infections; and (5) having uncontrollable diseases such as hypertension and diabetes. There were 60 cases in the control group, with 36 patients being male and 24 being female. The age range was 40 years to 75 years and the average age (mean ± SD) was 58.69 ± 10.13 years. There were 60 cases in the observation group, with 32 patients being male and 24 being female. The age range was 40 years to 75 years and the average age was 59.41 ± 11.05 years. Methods The control group received conventional care intervention and discharge guidance, including a reasonable diet, usage of drugs under instruction, proper exercise, and an appointment for the next visit to the hospital. Telephonic follow-ups were done regularly after they were discharged from the hospital. The observation group received the WeChat platform-based health management intervention.
The WeChat health management group was composed of a doctor, a nurse, and an administrator of the network platform. The administrator built the group and the official health management accounts and ensured that both were maintained and run routinely. Medical staff regularly published relevant knowledge about the self-management of chronic heart failure, including basic knowledge of cardiovascular diseases, a regular schedule to adhere to, diet and drug instructions, sports guidance, emotion management, etc. This content was issued in the form of pictures, texts, audio notes, and video notes, once a day. Via WeChat, the team provided personalized instructions, propagated health behavior interventions, instructed patients whose conditions were worsening to obtain medical treatment instantly, and assisted them with arranging hospitalization via private chats. Measurements The cardiac function indexes, left ventricular ejection fraction (LVEF) and stroke output (SV), were detected using an ultrasonic cardiogram before and after the 12-mo interventions. The detection equipment used was a Philips IE33 Color Doppler Ultrasound diagnostic instrument with a probe frequency of 3.0-7.5 MHz. Fasting venous blood (3 mL) was collected from the patients and centrifuged for 10 min at 3500 r/min within 1 h after collection. The serum was tested for high-sensitivity cardiac troponin (hs-cTnT) and N-terminal pro B-type natriuretic peptide (NT-proBNP) by enzyme-linked immunosorbent assay. The kit was manufactured by Shanghai Enzyme Link Biotechnology Co., Ltd., and the instrument used was the RT-96A microplate reader manufactured by Shenzhen Mindray Medical Electronics Co., Ltd. Evaluation standards The Self-Care Ability Scale (ESCA) score, Minnesota heart failure quality of life (LiHFe) score, and compliance score were used to evaluate the self-management ability, quality of life, and compliance of both groups. The ESCA includes 43 items covering self-care responsibility, self-concept, self-care skills, and self-care health knowledge, and the score is positively correlated with self-management ability. The LiHFe includes 21 items in total, covering physical limitations, disease symptoms, psychological emotions, and social relationships. A 6-point scoring method is applied, and the score is inversely proportional to the quality of life [5]. The compliance score covers a reasonable diet, regular medication, healthy behavior, and timely review. This scale is a self-designed score of the hospital, with each item ranging from 0 to 10 points, proportionate to patient compliance. Follow-up information The occurrence of cardiovascular adverse events (i.e., aggravation of heart failure, acute myocardial infarction, severe arrhythmia, cardiogenic readmission, etc.) and hospitalization time in both groups were recorded by the outpatient service or the WeChat platform for 12 mo. Statistical analysis Statistical analyses were performed with SPSS 19.0. Measurement data are expressed as mean ± SD and were compared by the t test. Count data were compared by the χ² test (a minimal code sketch of these tests is given after the Discussion). Statistical significance was defined as P < 0.05. Comparison of baseline data between the two groups There was no statistical significance when comparing the baseline data between the two groups (P > 0.05; Table 1). Comparison of heart function between the two groups The LVEF and SV rose after intervention in both groups.
Further, the heart function after the intervention of the observation group significantly increased compared to that of the control group (P < 0.05; Table 2). Comparison of 6-min walking distance, hs-cTnT, and NT-proBNP between the two groups After the intervention, the 6MWD increased, and the hs-cTnT and NT-proBNP decreased in both groups; the 6MWD, hs-cTnT, and NT-proBNP after the intervention of the observation group improved significantly compared to those of the control group (P < 0.05; Table 3). Comparison of ESCA scores between the two groups After the intervention, the ESCA scores for self-care responsibility, self-concept, self-care skills, self-care health knowledge, etc. increased in both groups, and the ESCA scores after the intervention of the observation group were significantly higher than those of the control group (P < 0.05; Table 4). Comparison of LiHFe scores between the two groups After the intervention, the LiHFe scores for physical limitations, disease symptoms, psychological emotions, social relationships, etc. decreased in both groups, and the LiHFe scores after the intervention of the observation group improved significantly compared to those of the control group (P < 0.05; Table 5). Comparison of compliance scores between the two groups After the intervention, the compliance scores for a reasonable diet, regular medication, healthy behavior, timely review, etc. increased in both groups, and the compliance scores after the intervention of the observation group were significantly higher than those of the control group (P < 0.05; Table 6). Comparison of adverse cardiovascular events between the two groups During the follow-up period of 12 mo, the observation group had lower acute myocardial infarction incidence and cardiogenic readmission rates and shorter hospital stays compared to the control group. There was no statistical difference in the incidence rates of the aggravation of heart failure and severe arrhythmia between the two groups (P > 0.05; Table 7). DISCUSSION WeChat platform-based health management carries out health education, drug instruction, management of health behaviors, etc. by utilizing the social application WeChat. It belongs to the field of continuing nursing care [6-8]. In recent years, WeChat platform interventions have been applied to various fields, such as chronic diseases, diabetes, coronary heart disease, chronic renal failure, and antenatal guidance [9]. A WeChat platform-based health management style was utilized for cases of severe chronic heart failure in this study, which could promote the capabilities of self-care responsibility, self-conception, self-care skills, self-care health knowledge, etc., moderate quality-of-life aspects such as physical limitations, disease symptoms, psychological emotions, and social relationships, and improve compliance with a reasonable diet, regular medication, healthy behavior, and timely review. This is because the official accounts on the WeChat platform regularly published self-management knowledge related to chronic heart failure to help patients grasp the main points and skills of self-management.
They also answered questions online in the WeChat group communications to help patients better master the main points of knowledge through interaction, as well as urge them to engage in health management in order to improve self-care capability and treatment compliance. After building an electronic medical record, we required patients to report their self-measured indexes every day, giving medical staff accurate information on changes in their disease conditions and enabling patients to receive personalized intervention through private chats, so as to recognize and deal with risk elements in time, control disease conditions effectively, and improve quality of life. LVEF and SV are indicators of cardiac pumping function; a decrease in LVEF indicates weakening myocardial contractility [10-13], and the 6MWD reflects the capacity of cardiopulmonary function to support exercise [14]. Hs-cTnT is a structural protein of cardiomyocytes, and its elevation in serum indicates myocardial injury and necrosis [15-19]. NT-proBNP is an endogenous hormone secreted by ventricular myocytes, and its serum level reflects the degree of myocardial damage, which is an important index for the clinical evaluation of the degree of heart failure [20]. This study used ultrasound cardiogram and laboratory serum indexes to estimate the condition of patients, and the 6MWD was used to appraise exercise tolerance. We found that a health management style based on the WeChat platform for cases of severe chronic heart failure can improve heart function and related indicators, which favors disease control. During the 12-mo follow-up, we found that the WeChat platform-based health management style, in cases of severe chronic heart failure, reduced the acute myocardial infarction incidence and cardiogenic readmission rates and shortened hospital stays. Patients experienced the favorable effects of the intervention in many aspects, such as maintaining a healthy lifestyle, adhering to medical advice, and controlling their diseases during the interventions outside the hospital, through improved compliance with a reasonable diet, regular medication, healthy behavior, timely review, etc. Through the daily reports of every self-measured index, the medical staff and patients were able to easily note changes in the disease condition in time, make corresponding adjustments in treatment, and prevent deterioration and relapse of the condition, which will ultimately yield a better curative effect in the long term.
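As a companion to the statistical analysis described in the Methods (SPSS 19.0, t test, χ² test), the following is a minimal re-implementation sketch in Python; the group arrays and the 2 × 2 event table are hypothetical placeholder data, not the study's results.

```python
import numpy as np
from scipy import stats

# Hypothetical post-intervention LVEF (%) values for the two groups.
observation = np.array([52.1, 49.8, 55.3, 50.6, 53.2])
control     = np.array([46.2, 44.9, 48.7, 45.1, 47.3])

t, p_t = stats.ttest_ind(observation, control)  # measurement data: t test
print(f"t = {t:.2f}, P = {p_t:.4f}")            # significant if P < 0.05

# Hypothetical [events, no-events] counts per group for one adverse event.
table = np.array([[2, 58],
                  [7, 53]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)  # count data: chi-square
print(f"chi2 = {chi2:.2f}, P = {p_chi2:.4f}")
```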
2022-01-08T05:16:55.499Z
2021-12-06T00:00:00.000
{ "year": 2021, "sha1": "a76c4d6503c52efc1f99edc055faf1cba2b40cdc", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.12998/wjcc.v9.i34.10576", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a76c4d6503c52efc1f99edc055faf1cba2b40cdc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
209071646
pes2o/s2orc
v3-fos-license
Efficient Identification in Large-Scale Vein Recognition Systems Using Spectral Minutiae Representations

Large biometric systems, e.g. the Indian AADHAAR project, regularly perform millions of identification and/or de-duplication queries every day, thus yielding an immense computational workload. Dealing with this challenge by merely upscaling the hardware resources is often insufficient, as it quickly reaches limits in terms of purchase and operational costs. Therefore, it is additionally important for the underlying systems software to implement lookup strategies with efficient algorithms and data structures. Due to certain properties of biometric data (i.e. fuzziness), the typical workload reduction methods, such as traditional indexing, are unsuitable; consequently, new and specifically tailored approaches must be developed for biometric systems. While this is a somewhat mature research field for several biometric characteristics (e.g. fingerprint and iris), far fewer works exist for vascular characteristics. In this chapter, a survey of the current state of the art in vascular identification is presented, followed by the introduction of a vein indexing method based on proven concepts adapted from other biometric characteristics (specifically, spectral minutiae representation and Bloom filter-based indexing). Subsequently, a benchmark in an open-set identification scenario is performed and evaluated. The discussion focuses on biometric performance, computational workload, and facilitating parallel, SIMD and GPU computation.

Introduction

One of many catalysts for the rapid market value increase of biometrics is government-driven, large-scale biometric deployments. The most prominent examples include the Indian AADHAAR project [20], which aims to enrol the entire Indian population of 1.3 billion individuals and, at the time of writing, has already enrolled over 1.2 billion subjects, as well as several immigration programmes like the UAE or the European VIS- and EES-based border control. The operation of such large-scale deployments yields an immense computational load in identification or duplicate enrolment checks, where, in the worst case, the whole database has to be searched to make a decision. Upscaling the hardware in terms of computing power quickly reaches certain limits in terms of, e.g., hardware costs, power consumption or simply practicability. Therefore, the underlying system's software needs to implement efficient strategies to reduce its computational load. Traditional indexing or classification solutions (e.g. [21,37]) are ill-suited: the fuzziness of the biometric data does not allow for naïve hashing or equality comparison methods. A good read for further understanding the problem with traditional approaches is found in [17]. This matter is the key motivation and the main focus of this chapter. One emerging biometric characteristic that steadily increases its market share and popularity is the vascular (blood vessel) pattern in several human body parts. The wrist, back-of-hand and finger vessels hold the most interest, since they are intuitive for users to capture and feature several advantageous properties, whereby back-of-hand and wrist vessels are less prone to displacement due to stretching or bending of the hand. Many accurate (in terms of biometric performance) approaches and algorithms for vascular pattern recognition have emerged over time (e.g. [7,23,39]).
However, most of them employ slow and complex algorithms and inefficient comparison methods, and store their templates in a format incompatible with most template protection schemes. In other words, they generate a very high computational workload for the system's hardware. While several biometric characteristics such as fingerprint [9] and iris [15] are already covered by workload reduction research, it is only a nascent field of research for the vascular characteristics. This chapter addresses the palm vein characteristic with a focus on the biometric identification scenario and methods for reducing the associated computational workload.

Organisation

This chapter is organised as follows:
• Section 9.1.3 outlines requirements and considerations for the selection of the algorithms used later in this chapter.
• Section 9.2 outlines four key computational workload reduction concepts with an algorithm proposal for each concept.
• In Sect. 9.3, an overview of the conducted experiments using the presented concepts and algorithms is provided.
• Subsequently, Sect. 9.4 lists and discusses the results obtained from the experiments.
• Finally, Sect. 9.5 concludes this chapter with a summary.

Workload Reduction in Vein Identification Systems

While computational cost is not a pressing issue for biometric systems in verification mode (one-to-one comparisons), high computational costs raise several concerns in large-scale biometric systems operated in identification (one-to-many search and comparison) mode. Aside from the naïve approach of exhaustively searching the whole database for a mated template, which results in high response times and therefore lowers usability, frustrates users and administrators, and thus lowers acceptance, another issue is presented by Daugman [12]: the probability of at least one False-Positive Identification (FPI), the False-Positive Identification Rate (FPIR), in an identification scenario can be computed using the following formula: FPIR = 1 − (1 − FMR)^N. Even for systems with a very low FMR, this relationship is extremely demanding as the number of enrolled subjects (N) increases. Without a reduction of the penetration rate (the number of template comparisons during retrieval), large biometric systems quickly reach a point where they will not behave as expected: the system could fail to identify the correct user or, even worse, allow access to an unauthorised individual. While this is less of an issue for very small biometric systems, larger systems need to reduce the number of template comparisons in an identification or Duplicate Enrolment Check (DEC) scenario to tackle the computational workload and false-positive occurrences. Therefore, it is strongly recommended to employ a strategy to reduce the number of necessary template comparisons (computational workload reduction) for all modalities, not only vein. As already mentioned in Sect. 9.1, computational workload reduction for vein modalities remains an insufficiently researched topic and, at the time of writing, no workload reduction approaches directly target vascular biometric systems. However, certain feature representations used in fingerprint-based biometric systems may also be applicable to vein-based systems, hence facilitating the usage of existing concepts for computational workload reduction, as well as the development of new methods.
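To make the compounding effect of the FPIR formula above concrete, here is a short numerical sketch; the FMR value is an illustrative assumption, not a figure from this chapter.

    # FPIR = 1 - (1 - FMR)^N: the chance of at least one false positive per
    # identification attempt grows rapidly with the enrolment size N.
    FMR = 1e-6  # assumed false match rate of the one-to-one comparator

    for N in (1_000, 100_000, 1_000_000, 10_000_000):
        fpir = 1 - (1 - FMR) ** N
        print(f"N = {N:>10,}  ->  FPIR = {fpir:.4%}")
    # Even at FMR = 0.0001%, FPIR already exceeds 63% at one million enrollees.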
Since the vascular pattern can also be represented by minutiae (further called vein minutiae), which show almost identical characteristics compared to fingerprint minutiae, several workload reduction methods targeting minutiae-based fingerprint approaches might be usable after adaptation to the fuzzier vein minutiae.

Concept Focus

To utilise the maximum potential of the system's hardware, all of the methods and algorithms presented in this chapter were carefully selected by the authors based on the following requirements:
1. The lookup process has to be implementable in a multi-threaded manner, without creating much computational overhead (in order to manage the threads and their communication).
2. The comparison algorithm has to be computable in parallel without stalling processing cores during computation.

For requirement 1, the lookup algorithm has to be separable into multiple instances, each working on a different distinct subset of the enrolment database. In order to understand requirement 2, a brief excursus into parallel computing is needed (for a more comprehensive overview, the reader is referred to [8]). Parallel computation (in the sense of SIMD: Single Instruction, Multiple Data) is not as trivial as multi-threading, where one process spawns multiple threads that run on one or multiple CPU cores. There are multiple requirements for an algorithm to be computable in parallel, of which the two most important are as follows:
1. No race conditions must occur between multiple cores.
2. Multiple cores need to have the same instructions at the same time in their pipeline.

Therefore, the comparison algorithm should not rely on if-branches or jumps, and the shared memory (if any) must be read-only. This results in another requirement: the feature vectors should be of fixed length across all queries and templates to avoid waiting while processing templates of different sizes. While fixed-length template comparisons are not automatically more efficient to compute, they offer various other benefits. For example, comparisons in systems utilising fixed-length templates can usually be better optimised and implemented as simple and fast binary operations (e.g. XOR; see, for example, [16]). Furthermore, most binarisation and template protection approaches also rely on fixed-length vectors (e.g. see [22]). Fulfilling these requirements allows for an efficient usage of SIMD instructions on modern CPUs and general-purpose GPUs (GPGPUs), hence utilising the maximum potential of the system's hardware. Therefore, the Spectral Minutia Representation (SMR) [35] was chosen as the data representation in this chapter. Compared to shape- or graph-based approaches, like the Vascular Biometric Graph Comparison introduced earlier in this book, it fulfils all requirements: templates using this floating-point-based and fixed-length data representation can be compared by a simple image-correlation method, merely using multiplications, divisions and additions. Further, the SMR is very robust towards translations, and rotations can be compensated for quickly. The SMR can further be binarised, which replaces the image-correlation comparison method with a simple XOR-based comparison and thus fully allows for utilising the maximum potential of the system's hardware. It is thereby also compatible with various template protection approaches which rely on fixed-length binary representations. The computational efficiency of the binary SMR comparison is the main reason for the selection of the SMR as data representation.
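As a minimal illustration of why such fixed-length binary templates meet the requirements above, the following branch-free comparison uses only XOR and a popcount; NumPy stands in for SIMD intrinsics here, and all names are illustrative rather than taken from the chapter.

    import numpy as np

    def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Normalised Hamming distance between two fixed-length binary
        templates packed into uint8 words: XOR plus popcount, with no
        branches or data-dependent jumps, so the same operation maps
        directly onto SIMD/GPGPU instructions."""
        diff = np.bitwise_xor(a, b)
        return int(np.unpackbits(diff).sum()) / (a.size * 8)

    # Two random 5,120-bit templates (the 256 x 20 bit template size
    # reached later in this chapter), packed into 640 bytes each:
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=640, dtype=np.uint8)
    b = rng.integers(0, 256, size=640, dtype=np.uint8)
    print(hamming_distance(a, b))  # ~0.5 for unrelated random templates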
Other methods, like the maximum curvature (see [24]) or Gabor filters (e.g. [38]), offer binary representations too and are less expensive in terms of computational costs when extracting the biometric features into the designated data representation. However, both the maximum curvature and the Gabor filter template comparisons are, benchmarked against the binary SMR template comparison, rather complex and expensive in terms of computational cost. Facing the high number of template comparisons needed for an identification or a duplicate enrolment check in large-scale biometric databases, the computational cost of a single SMR feature extraction is negligible with respect to the aggregate computational costs of the template comparisons. Therefore, in large-scale identification scenarios, it is more feasible to employ a computationally expensive feature-extraction algorithm with a computationally efficient comparator. Furthermore, the SMR is applicable to other modalities that can be represented by minutiae. This includes most vascular biometrics, fingerprints and palm prints. Therefore, the same method can be used for those modalities and facilitates feature-level information fusion. In particular, the system presented in this chapter was also applied successfully to the fingerprint modality.

Workload Reduction Concepts

Section 9.1.2 covered the motivation behind the reduction of template comparisons in a biometric system. The same section also covered the motivation to reduce the complexity of template comparisons, namely to achieve shorter template comparison times, thus additionally reducing the computational workload and shortening transaction times. The following sections propose components to reduce the number of necessary template comparisons and to reduce the complexity of a single template comparison for a highly efficient biometric identification system. Later in the chapter, the proposed system is comprehensively evaluated.

Raw minutiae-based feature vectors introduce computational drawbacks (at least in parallel computing and the usage of CPU intrinsics) due to their variable size, whereby even probes of the same subject differ in their number of features (minutiae points). A post-processing stage can convert the raw feature vector into a fixed-size representation that should not be reversible to the raw representation. Inspired by the Fourier-Mellin transform [10], used to obtain a translation-, rotation- and scaling-invariant descriptor of an image, the SMR [18,33] transforms a variable-sized minutiae feature vector into a fixed-length, translation-invariant and implicitly rotation- and scaling-invariant spectral domain. In order to avoid the resampling and interpolation introduced by the Fourier transform and the polar-logarithmic mapping, the authors introduce a so-called analytical representation of the minutiae set and a so-called analytical expression of a continuous Fourier transform, which can be evaluated on polar-logarithmic coordinates. According to the authors, the SMR meets the requirements for template protection and allows faster biometric comparisons.

Spectral Minutiae Representation

In order to represent a minutia in its analytical form, it has to be converted into a Dirac pulse in the spatial domain. Each Dirac pulse is described by the function m_i(x, y) = δ(x − x_i, y − y_i), where (x_i, y_i) represents the location of the i-th minutia in the palm vein image. The Fourier transform of the i-th minutia m_i(x, y), located at (x_i, y_i), is then given by F{m_i}(w_x, w_y) = exp(−j(w_x x_i + w_y y_i)), with a sampling vector w_x for the angular direction and a sampling vector w_y for the radial direction.
Based on this analytical representation, the authors introduced several types of spectral representations and improvements over their initial approach. This chapter focuses on one of the initial representations, called the Spectral Minutia Location Representation (SML), since it achieved the best stability and thus the best biometric performance in previous experiments in [25]. It only uses the minutiae location information for the spectral representation: M(w_x, w_y) = Σ_{i=1..Z} exp(−j(w_x x_i + w_y y_i)), where Z is the number of minutiae. In order to compensate for small errors in the minutiae location, a Gaussian low-pass filter is introduced by the authors. Thus, the magnitude of the smoothed SML with a fixed σ is defined in its analytical representation as |M(w_x, w_y; σ²)| = exp(−(w_x² + w_y²)σ²/2) · |Σ_{i=1..Z} exp(−j(w_x x_i + w_y y_i))|. By taking the magnitude, further denoted as the absolute-valued representation, the translation-invariant spectrum is received (Fig. 9.1b). When sampling the SML on a polar-logarithmic grid, rotations of the minutiae become horizontal circular shifts. For this purpose, sampling of the continuous spectra (Eq. 9.3) is proposed by Xu and Veldhuis [33] using Xy = 128 (M in [33]) samples in the radial direction, with λ logarithmically distributed between λmin = 0.1 and λmax = 0.6. The angular direction β for the SML is sampled between β = 0 and β = π in Xx = 256 (N in [33]) uniformly distributed samples. A sampling between β = 0 and β = π is sufficient due to the symmetry of the Fourier transform for real-valued functions. Since the SML yields spectra with different energies, depending on the number of minutiae per sample, each spectrum has to be normalised to zero mean and unit energy (i.e. the mean is subtracted and the result is scaled to unit norm). Throughout this chapter, statements that only apply to the Spectral Minutiae Location Representation will explicitly mention the abbreviation SML, while statements that are applicable to the Spectral Minutiae Representation in general will explicitly mention the abbreviation SMR.

Spectral Minutiae Representation-Feature Reduction

Sampling the spectra on an Xx = 256 by Xy = 128 grid yields a feature vector of Xx × Xy = 32,768 real values. This large feature vector introduces two drawbacks:
Storage: Considering Xx × Xy = 32,768 double-precision floating-point (64 bit) values, each template would take 2,097,152 bit = 256 kB of RAM or data storage.
Comparison complexity: Processing a feature vector of Xx × Xy = 32,768 elements is a large computational task and limits comparison speeds, especially with large-scale databases in biometric identification scenarios.

In order to address these issues, the authors of the SMR approach introduced two feature reduction approaches in [36]. Both are based on well-known algorithms and are explained in the following subsections. In this chapter, the Column Principal Component Analysis (CPCA), based on the idea of the well-known Principal Component Analysis (PCA) originally presented in [26], is used. In summary, to receive the SMR reduced with the CPCA feature reduction (SMR-CPCA), the PCA is applied only to the columns of the SMR. After applying the CPCA, the features are concentrated in the upper rows, and thus the lower rows can be removed, resulting in a feature vector of size Xx × Xy,CPCA. According to [36], the achieved feature reduction is up to 80% when employing the SML reduced with the CPCA feature reduction (SML-CPCA) approach, while maintaining the biometric performance of the original SML. In both the full and the CPCA-reduced representation, every element of the feature matrix X is defined as a 32-bit or 64-bit floating-point real value.
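A minimal NumPy sketch of the sampling just described, using the grid constants from the text; the smoothing parameter sigma and all function names are illustrative assumptions rather than values prescribed by the chapter.

    import numpy as np

    def sml_spectrum(minutiae, sigma=0.05, n_ang=256, n_rad=128,
                     lam_min=0.1, lam_max=0.6):
        """Magnitude of the Gaussian-smoothed SML, sampled on a
        polar-logarithmic grid (Xx = n_ang angular by Xy = n_rad radial
        samples) and normalised to zero mean and unit energy.
        `minutiae` is a (Z, 2) NumPy array of (x, y) minutiae locations."""
        lam = np.logspace(np.log10(lam_min), np.log10(lam_max), n_rad)
        beta = np.linspace(0.0, np.pi, n_ang, endpoint=False)
        wx = lam[:, None] * np.cos(beta)[None, :]   # (n_rad, n_ang)
        wy = lam[:, None] * np.sin(beta)[None, :]
        # Sum of the minutiae's complex exponentials at every grid point:
        phase = wx[..., None] * minutiae[:, 0] + wy[..., None] * minutiae[:, 1]
        spectrum = np.exp(-1j * phase).sum(axis=-1)
        # Gaussian low-pass to tolerate small location errors, then magnitude:
        mag = np.exp(-(wx**2 + wy**2) * sigma**2 / 2) * np.abs(spectrum)
        mag -= mag.mean()
        return mag / np.linalg.norm(mag)            # zero mean, unit energy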
Comparisons or calculations (especially divisions) with single- or double-precision floating-point values are relatively complex compared to integer or binary operations. In order to address this computational complexity, and to comply with other template protection or indexing approaches where a binary feature vector is required, the SML (e.g. Fig. 9.2a, d), as well as the other SMR variants, can be converted into a binary feature vector as presented in [32]. The binarisation approach yields two binary vectors, a so-called sign-bit vector and a so-called mask-bit vector:
Sign bit: The sign-bit vector (Fig. 9.2b, e) contains the actual features of the SMR in a binary representation. Each bit is set according to one of the two binarisation approaches.
Mask bit: Since binary representations suffer from bit flips on edges in fuzzy environments, a second vector (Fig. 9.2c, f) is introduced. This vector marks the likely-to-be-stable (called reliable) sign bits and is generated by applying a threshold (MT) to the spectrum.

The mask contained in the mask-bit vector is not applied to the sign bits; instead, it is kept as auxiliary data and applied during the comparison step. This approach equals the masking procedure in iris recognition (see [13,14]).

Spectral Minutiae Representation-Comparison

The most proven performance in SMR comparison is reached with the so-called direct comparison. It yields the most reliable comparison scores while keeping a minimal computational complexity. Let R(m, n) be the spectrum of the reference template and P(m, n) the spectrum of the probe template, both sampled on the polar-logarithmic grid and normalised. Then, the similarity score E_DM(R, P) is defined as E_DM(R, P) = (1/(Xx · Xy)) Σ_m Σ_n R(m, n) · P(m, n). The score is thus defined by correlation, which is a common approach in image processing. For comparing two binary SMRs or SMR-CPCAs, a different approach is introduced in [32], which is also used in the iris modality [13,14]: the inclusion of masks in the Hamming distance masks out any fragile (likely-to-flip) bits and only compares the parts of the sign-bit vector where the mask-bit vectors overlap. Therefore, only the reliable areas are compared, which typically improves the recognition performance.

Spectral Minutiae Representation-Template Protection Properties

It is not possible to revert the spectral minutiae representation back to its initial minutiae input [33], so the irreversibility requirement of the ISO/IEC 24745 [28] standard is fulfilled. However, the spectral minutiae representation itself does not fulfil the unlinkability and renewability requirements. This issue can be tackled, e.g., with permutations of columns using application-specific keys. Depending on which templates are used in the training set of the CPCA feature reduction, partial renewability and unlinkability (see [28]) can also be achieved, as explained in [25].

Spectral Minutiae Representation-Embedding Minutiae Reliability Data

A feature-extraction pipeline may generate falsely extracted minutiae (so-called spurious minutiae). Some pipelines are able to determine a genuineness certainty for each minutia, which describes the certainty that the extracted reference point is a genuine minutia and not a spurious one. When this minutiae reliability (q_M, ranging from 1% to 100%) is known, the Dirac pulse (Eq. 9.1) of each minutia can be weighted linearly by its reliability (w_i, ranging from 0.01 to 1.0, corresponding to q_M): m_i(x, y, q_M) = w_i · δ(x − x_i, y − y_i). Stronger reliability corresponds to a higher weight w_i for minutia m_i(x, y, q_M).
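Before moving on, here is a minimal sketch of the two comparators described in this section, the correlation-based direct comparison and the masked Hamming distance; function names are illustrative, and the circular-shift alignment used for rotation compensation is omitted.

    import numpy as np

    def direct_score(R, P):
        """Direct comparison of two real-valued spectra that are already
        normalised to zero mean and unit energy: a plain correlation,
        using only multiplications and additions (the constant
        1/(Xx*Xy) factor of E_DM only rescales the score)."""
        return float((R * P).sum())

    def masked_hamming(sign_a, mask_a, sign_b, mask_b):
        """Fractional Hamming distance over the reliable bits only: sign
        bits are compared where both mask-bit vectors overlap, mirroring
        the masking procedure used in iris recognition. All inputs are
        boolean arrays of equal shape."""
        valid = mask_a & mask_b
        n = int(valid.sum())
        if n == 0:
            return 1.0  # no reliable overlap: treat as maximally distant
        return int(((sign_a ^ sign_b) & valid).sum()) / n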
This reliability-weighted approach is further called the Quality Data-Enhanced Spectral Minutia Location Representation (QSML) throughout this chapter.

Spectral Minutiae Representation-Conclusions

The SML is a promising, flexible and highly efficient data representation that allows for fast comparisons using simple floating-point arithmetic in its real- or absolute-valued form. Even faster comparisons are achieved using only bit comparisons in its binary form, with apparently no impairment of biometric performance. It is also possible to embed quality information. Furthermore, the SML is adaptable to template protection methods, which is a requirement of the ISO/IEC 24745 standard. This fixed-length representation can be compressed down to Xx = 256 by Xy,CPCA ≈ 24 bits, whereby every template is sized only 0.75 kB, resulting in a 750 MB database with 1,000,000 enrolled templates.

Serial Combination of SMR

In the previous section, the SMR variant SML was introduced. As already mentioned, the SML can be represented as the real- or absolute-valued vector of its complex feature vector. Experiments in previous work (see [25]) have shown that the two representations yield different results in terms of comparison scores when applied to fuzzy vein minutiae. We found that for the absolute-valued SML, the corresponding template is less often the template with the highest score. On the other hand, we observed a much lower pre-selection error for rank-10 shortlists when using the absolute-valued representation compared to the real-valued SML, which more often fails to return a high score for the corresponding template among many templates. However, among just a few templates, the real-valued SML finds the correct template more reliably and with a better score distribution than the absolute-valued SML. The discussion of this behaviour is beyond the scope of this chapter; however, it can effectively be used to the advantage of the proposed biometric system. Instead of using either the absolute- or the real-valued SML, both variants are incorporated: the absolute-valued representation is used during the identification lookup process to find a rank-1 to rank-10 shortlist, whereas the real-valued representation is then used to verify the rank-1 shortlist or to find the correct reference template among the rank-n shortlist. The usage of both representations does not increase the computational workload of template creation over the level of working with the absolute-valued representation alone, since the real-valued representation is a by-product of calculating the absolute-valued representation. However, the storage requirements are doubled. Furthermore, in the shortlist, the comparison costs of the real-valued representation are also added.

Indexing Methods

In Sect. 9.2.1, an efficient data representation to effectively reduce the computational costs and time spent on template comparisons was presented. Despite the efficient data representation, the system is still subject to the challenges introduced in Sect. 9.1.2. In this section, two methods to reduce the number of necessary template comparisons are presented.

Bloom Filter

Following the conversion of the SML templates into their binary representation, the enrolled templates are organised into tree-based search structures by adapting the methods of [27] and [15]. The Bloom filter-based templates are, to a certain degree, rotation invariant.
This is because H columns are contained within a block and hence mapped to the same Bloom filter in the sequence, which means that, contrary to the raw SML, no fine alignment compensation (normally achieved via circular shifts of the template along the horizontal axis) is needed during the template comparison stage. Furthermore, the data representation is sparse, which is a crucial property for the indexing steps described below:
1. The list of N enrolled templates is (approximately evenly) split and assigned to T trees. This step is needed (for any sizeable N) to maintain the sparseness of the data representation.
2. Each node of a tree (containing I = N/T templates) is constructed through a union of templates, which corresponds to the binary OR applied to the individual Bloom filters in the sequence. The tree root is constructed from all templates assigned to the respective tree (i.e. B_1 OR ... OR B_I), while the children at subsequent levels are each created from half of the templates of their parent node (e.g. at the first level, each child of the root node is created from I/2 templates).
3. The templates (B_1, ..., B_I) are inserted as tree leaves.

After constructing the trees, the retrieval can be performed as follows:
1. A small number of the most promising trees (t) out of the T constructed trees can be pre-selected (denoted t ≪ T) based on comparison scores between the probe and the root nodes.
2. The chosen trees are successively checked until the first candidate identity is found or all the pre-selected trees have been visited. Note that for genuine transactions, thanks to the pre-selection step, the trees most likely to contain the sought identity are visited first. A tree is traversed by computing, at each level, the comparison score between its nodes and the probe, and choosing the path with the best score. Once a leaf is reached, a final comparison and a check against a decision threshold take place.

The tree traversal idea is based on the representation sparseness: as long as, at each level, the relation DS_genuine < DS_impostor generally holds true, the genuine probes will be able to traverse the tree along the correct path to reach a matching leaf template. The complexity of a single lookup is O(T + t · (2 · log2 I)). As it is sufficient to pre-select only a small fraction of the constructed trees, i.e. t ≪ T, the lookup workload remains low, while arbitrarily many enrollees can be accommodated by constructing additional trees. For reference, Fig. 9.3 shows the indexing and retrieval in a single tree: the retrieval follows the bold arrow path down to a leaf, where the final decision is made. If multiple trees are constructed, the search is trivially parallelisable by simultaneously traversing many trees at once.

CPCA-Tree

The second approach, called the SMR-CPCA binary search tree (CPCA-Tree), follows the same tree construction and traversal strategy as the Bloom filter-Tree introduced in the previous section. However, instead of using a Bloom filter or another template transformation approach, the CPCA-Tree stores binary SML-CPCA templates directly. The CPCA-Tree approach has shown an advantage in terms of biometric performance over the Bloom filter-Tree in previous experiments (see [25]) when benchmarking both indexing methods with heavily degraded (i.e. very fuzzy) data, since the comparison of CPCA templates does not strongly rely on stable columns like the Bloom filter does.
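Both tree variants share the construct-by-union and best-child traversal scheme described above. The following sketch assumes binary templates stored as boolean arrays and a generic similarity function (for instance one derived from the masked Hamming distance sketched earlier); tree balancing, the successive-tree search until a threshold is passed, and the final decision are simplified.

    import numpy as np

    def build_tree(templates):
        """Build one search tree over a list of boolean template arrays:
        each inner node is the bitwise OR (union) of all templates below
        it, and the enrolled templates themselves become the leaves."""
        if len(templates) == 1:
            return {"union": templates[0], "children": None}
        mid = len(templates) // 2
        left, right = build_tree(templates[:mid]), build_tree(templates[mid:])
        return {"union": left["union"] | right["union"],
                "children": (left, right)}

    def retrieve(trees, probe, score, t):
        """Pre-select the t most promising trees by root score, then
        traverse each along the best-scoring child; returns the best
        leaf found (the accept/reject threshold check is left out)."""
        ranked = sorted(trees, key=lambda tr: score(probe, tr["union"]),
                        reverse=True)
        best_leaf, best = None, -np.inf
        for tree in ranked[:t]:
            node = tree
            while node["children"] is not None:
                lo, hi = node["children"]
                node = lo if score(probe, lo["union"]) >= score(probe, hi["union"]) else hi
            s = score(probe, node["union"])
            if s > best:
                best_leaf, best = node["union"], s
        return best_leaf, best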
However, while the CPCA-Tree is more robust in fuzzy environments, it is to be expected that a single CPCA-Tree cannot store as many templates as a single Bloom filter-Tree: the binary SMR-CPCA features a high inter-class variance, whereby the set bits in the binary SMR-CPCA matrices are distributed differently and there are few unanimous bits. Therefore, the bits set in a binary SMR-CPCA have few bit collisions with SMR-CPCAs from other subjects, respectively from other biometric instances, and when merging SMR-CPCAs, the population count rises quickly, thus diminishing the descriptive value. In other words, the sparsity of the upper-level nodes quickly decreases to a point (typically when more than about 65% of the bits are set) where no correct traversal direction decisions are possible. There are at least three approaches to storing the binary SMR-CPCA templates. In the experiments, the SMR-CPCA-M is used, since it achieved the best biometric performance of these three representations in previous work [25]. Thus, it is required to extend the binary tree to store both the applied bits and the mask bits, since both are required for the SMR-CPCA-M approach, which is commonly referred to as an auxiliary data scheme. In terms of tree construction, the applied bits are merged and the mask bits are merged upon fusing two leaves into one node.

Hardware Acceleration

Strictly speaking, the usage of hardware acceleration in the sense of multi-threaded systems, parallel systems or dedicated hardware like FPGA processors is no workload reduction per se, as it does not reduce the number of template comparisons needed or the size of the data. However, it is an important step towards optimum efficiency of the system's hardware and is therefore also within the scope of this chapter. As already emphasised in Sect. 9.1.3, the selected approaches should be implementable in congruence with the requirements of parallel and multi-threaded systems. Our system combines two approaches (the SML and indexing with binary search trees), which are evaluated against these requirements. Implementing the binary search tree in a truly parallel manner is not feasible: search trees might not be balanced, and when using multiple trees, the trees differ in size. However, they are well suited for multi-threaded computation. When multiple trees are built (as would be the case in any sizeable system), each tree can be searched in one of a pool of threads. The SMR, in contrast, is perfectly suited for real parallel processing. Each element of its fixed-length feature vector can be calculated identically without any jumps or conditions. Furthermore, the calculation of one single element can be broken down into very few instructions and basic arithmetic. For example, in SSE-SIMD environments, up to four 32-bit vector elements can be calculated at a time [2], and in modern AVX-512-SIMD up to sixteen 32-bit vector elements at a time [1] for the real- or absolute-valued SMR. The whole calculation is also easily implementable in languages like OpenCL, which enables parallel computation on GPGPUs and other parallel systems. The comparison process is likewise free of jumps or conditions and can also be processed in a parallel environment, where the previous statements also apply.

Fusion of Concepts

The previous sections introduced several workload reduction concepts. In fact, these concepts can be combined. This section describes the process visualised in Fig. 9.4, where all concepts are joined into one biometric system.
In terms of data processing, the enrolment and query processes are identical: after extracting the minutiae from the biometric sample, the absolute- and real-valued representations of the SML are calculated, and the binary form of the absolute-valued SML is derived as introduced in Sect. 9.2.1. For the enrolment process, a binary representation (X_b) of an SML template (X) is enrolled in the indexing trees, and the floating-point representation (X_f) is kept for each enrolled template. Upon receiving a biometric probe that has to be identified, the binary representation is used to find a shortlist (rank-1 or rank-n) by traversing the built trees. Choosing n > T, or respectively n > t, is not feasible, since every tree will always return the same enrolled template for the same query. Figure 9.4 is simplified to the case where t = n. Subsequently, the floating-point representation of the SML query is compared to the real-valued SML reference templates found in the shortlist by the comparison and decision subsystem. Accordingly, all previous concepts are fused: the binary representation, regardless of whether it is extracted from the real- or absolute-valued representation, is used to efficiently look up a small shortlist, and the floating-point representation, again regardless of whether it is real- or absolute-valued, is used to receive a more distinct comparison score distribution. There are multiple combination possibilities, e.g. real-valued binary for enrolment and real-valued floating point for the shortlist comparison, or absolute-valued binary for enrolment and real-valued floating point for the shortlist comparison. It is expected that the former yields the best biometric performance, since similar experiments in [25] already revealed competitive results and it is unclear whether the binary representation of the absolute-valued SML retains the same properties (see Sect. 9.2.2) as the floating-point SML.

Experiments

The following sections describe the vein data used for the experiments, its preparation, and how the experiments to evaluate the proposed methods were conducted. This chapter focuses solely on open-set identification scenarios; verification experiments are beyond its scope.

Dataset

At the time of writing, the PolyU multispectral palm-print database (PolyU) [3] is the largest publicly available vascular dataset known to the authors containing Near-Infrared (NIR) palm-print images usable for (palm) vein recognition. It comprises images of 250 subjects with 6 images per hand. The images have a predefined and stable Region of Interest (ROI), exhibit very low quality variance and are all equally illuminated. It is not possible to link the left- and right-hand instances of one subject by their labels or vascular patterns; therefore, every instance is treated as a single enrolment subject identifier (in short, "subject"), as listed in Table 9.1. Since the PolyU dataset is aimed at palm-print recognition, it features a high amount of skin texture, which interferes with vein detection and makes it a challenging dataset for the feature-extraction pipeline, which comprises the maximum curvature [24] approach with some prepended image optimisation such as noise removal.

Performance Evaluation

Comparison scores are obtained in an open-set identification scenario as follows:
1. One reference template is enrolled for each subject.
2. All remaining probe templates are compared against the enrolled references.
3. The scores are then categorised as follows:
• false-positive identification: identification transactions by data subjects not enrolled in the system in which an identifier is returned;
• false-negative identification: identification transactions by users enrolled in the system in which the user's correct identifier is not among those returned.

The dataset has been split into four groups: enrolled, genuine, impostor and training (for the CPCA feature reduction). An overview of the relevant numbers is listed in Table 9.2. In order to ease the indexing experiments, an enrolment set of size 2^n is preferred. With a limited number of subjects (500), 256 enrollees offer the best compromise between the largest 2^n enrolment and the number of impostor queries. The results of the experiments are reported as Detection Error Trade-off (DET) curves. To report the computational workload required by the different approaches, the workload metric W = N · p · C (Eq. 9.9), where N represents the number of enrolled subjects, p the penetration rate and C the cost of a single one-to-one template comparison (i.e. the number of bits that are compared), together with the relative workload W_proposed / W_baseline (Eq. 9.10) introduced by Drozdowski et al. [15], will be used. In tables and text, the biometric performance is reported with the Equal Error Rate (EER). However, when evaluating the best biometric performance, the results are first ordered by the False-Negative Identification Rate (FNIR) at FPIR = 0.1% and then by the EER. This is due to the nature of the EER, which does not describe the biometric performance at the important low FPIR values; i.e. an experiment with EER = 5% can feature an FNIR at FPIR = 0.1% of 20%, while an experiment with EER = 5.5% can feature an FNIR at FPIR = 0.1% of 13%. In real-world scenarios, the latter result is more relevant than the former.

Experiments Overview

The following enumeration serves as an overview of the experiments conducted in this chapter:
• Spectral Minutiae Representation: The basic implementation, comparing vein probes based on minutiae with the SML in both absolute- and real-valued representation. In an identification scenario, the database is searched exhaustively, i.e. every query template (probe) is compared with every enrolled template (reference). These experiments represent the biometric performance and workload baseline for the workload reduction approaches in the following experiments.
• CPCA Feature Reduction: Repetition of the above identification experiments, but with CPCA feature reduction for both binary and floating-point SML (further called SML-CPCA), to evaluate whether the biometric performance suffers from the feature reduction.
• Binary Spectral Minutiae Representation: The same experiments (for both SML representations) as above are repeated with the binary representations of the SML to evaluate whether the biometric performance is degraded by the binarisation process.
• Serial Combination of SML: With the baseline for all representations of the SML established, these experiments are used to validate the assumption that the observed advantages of both SML representations can be used to increase the biometric performance.
• Indexing Methods: The binary representations of both the absolute- and the real-valued SML are indexed with the presented Bloom filter-Tree and CPCA-Tree approaches to evaluate whether the biometric performance is degraded by these indexing schemes.
• Fusion of Concepts: Both indexing and the serial combination of SML are combined as presented in Sect. 9.2.5.
This experiment evaluates whether both concepts can be combined to achieve a higher biometric performance due to the serial combination together with a low computational workload due to the indexing scheme.

Results

This section reports and comments on the results achieved by the experiments presented in the previous section.

Spectral Minutiae Representation

The SML experiments are split into multiple stages to approximate its ideal settings and tuning for fuzzy vascular data.

Baseline

In order to assess the results of the main experiments (the indexing approaches), a baseline is needed. Figure 9.5a shows the DET curves of the introduced SML and QSML in both real- and absolute-valued sampling. It is clearly visible that the real-valued representation is much more accurate than the absolute-valued representation. Furthermore, Fig. 9.5b contains plots of the real-valued SML, QSML and the Spectral Minutia Location Representation with minutiae pre-selection (PSML) at thresholds of 0.1, 0.2 and 0.3. While the authors of [31,34] recorded good results using the absolute-valued sampling for their verification purposes, it falls far behind the real-valued sampling in identification experiments. The selected dataset introduces some difficulties for the feature-extraction pipeline used. Recall that PolyU is a palm-print and not a palm vein dataset, and therefore it includes the fuzzy skin surface, which would not be included in a dedicated vascular dataset. It was selected mainly due to its size rather than its quality. Various optimisation experiments were run, and are reported in the following section, to increase the recognition performance. Implementing a robust feature extractor is beyond the scope of this chapter.

Optimisation

The feature extractor used (maximum curvature [24]) is able to report quality (reliability, q_M) data about the extracted pattern in the range (0, 1], where 1 represents 100% minutiae reliability and 0 no minutia at all; therefore, the QSML can be used. By using this data to remove unreliable minutiae, i.e. by defining a q_M threshold (e.g. a minutiae reliability of at least 20%), the recognition performance can be increased, as shown in Fig. 9.5b. Using q_M ≥ 0.2 as the threshold for the so-called PSML and the quality data-enhanced Spectral Minutia Location Representation with minutiae pre-selection (PQSML) achieved the best results in the experiments. Additionally, it is possible to reduce the SML and QSML sampling λmax to fade out the higher (more accurate) frequencies, which increases the significance of the lower, more stable (but less distinct) frequencies. Experiments showed that using λmax ≈ 0.45 instead of the original λmax ≈ 0.6 resulted in the best compromise between low and high frequencies. This optimisation process is further referred to as tuning.

CPCA

In order to investigate the impact of the CPCA compression on the recognition performance, the same procedure as for the SML and QSML is repeated using the CPCA compression. Applying CPCA to the tuned SML and QSML results in no noticeable performance drop, as shown in Fig. 9.6. Again, using λmax ≈ 0.45 instead of the original λmax ≈ 0.6 resulted in the best compromise between low and high frequencies. One notable result of these experiments is that the tuned QSML-CPCA performs slightly better than the full-featured and tuned QSML.

Summary

In summary, even with a moderately reliable feature-extraction pipeline, the SML achieved acceptable results.
Employing quality data in terms of minutiae reliability improved the biometric performance, and an additional λmax tuning improved it further (as shown in Fig. 9.7). For the following experiments, the tuned QSML-CPCA with minutiae pre-selection of q_M ≥ 0.2 will be used as the biometric performance baseline and will further be called PQSML-CPCA. The corresponding workload for the SML is W ≈ 2.52 × 10^7.

Binary Spectral Minutiae Representation

The next SML optimisation step is to binarise the SML floating-point vector. This step shrinks the feature vector by a factor of 32 and enables the usage of highly efficient binary and intrinsic CPU operations for template comparisons. Intrinsic CPU operations are also available for floating-point values; however, the binary intrinsics are favourable, since they are more efficient and allow a higher number of feature-vector element comparisons per instruction. Practically, it is possible to binarise the full-featured SML as well as the more compact SML-CPCA. However, it is of special interest to achieve a high biometric performance with the binarised (P)SML-CPCA or (P)QSML-CPCA in order to receive the smallest possible feature vector. Interestingly, the binary CPCA-reduced variants perform better than their larger counterparts, as is visible in the DET plots of Fig. 9.8. Moreover, the binary QSML-CPCA outperforms its minutiae-pre-selection counterparts; judging by the other binary QSML-CPCA results, this could be a coincidence. At this point, the 256 × 128 float-sized (PQ)SML has been shrunk to a 256 × 20-bit-sized (PQ)SML-CPCA without exhibiting a deterioration of the biometric performance. The workload for the binary (PQ)SML-CPCA is reduced accordingly, as the per-comparison cost C shrinks from 32,768 floating-point values to 5,120 bits.

Serial Combination of SMR

The serial combination of PQSML experiments was run with different settings, ranging from rank-1 to rank-25. Only the PQSML was experimented with, since it mostly performed better than the other representations. Using a rank-10 to rank-15 (∼5%) pre-selection with the absolute-valued PQSML and then comparing the real-valued PQSML templates of the generated shortlist achieved the best results, as shown in Fig. 9.9. Both were sampled with the same settings, whereby only one SMR sampling is needed; recall that the real-valued SMR is a by-product of calculating the absolute-valued SMR. However, it is questionable whether the EER decrease justifies the doubled storage requirements and the additional shortlist comparison costs (cf. Sect. 9.2.2).

Indexing Methods

The previous experiments demonstrated that it is possible to reduce the workload drastically, without a major impairment of the biometric performance, by compressing and subsequently binarising the PQSML. However, it is still necessary to exhaustively search the whole database. In this section, the results of the indexing experiments conducted to reduce the number of necessary template comparisons are reported.

Bloom Filter

First experiments showed a severely impaired biometric performance, with a loss of about 15 percentage points (Fig. 9.10). The main reason for this loss with binary PSML-CPCA templates is the high number of bit errors when comparing two mated binary PSML-CPCA templates. This Bloom filter implementation strongly relies on stable columns in the J blocks of the binary vector across their height H to offer a high biometric performance. The iris naturally yields comparatively stable columns when aligned and unrolled, and therefore the Bloom filter performs in an exemplary manner there.
However, due to the nature of the SMR, which includes various frequencies, this stability is not ensured: small feature-extraction inconsistencies yield much more noise in the upper frequencies of the SMR, which then results in more Bloom filter errors, mostly along the columns. A more in-depth discussion of this behaviour is given in Sect. 9.4.2 of [25]. Even at a very high MT of 0.9, the average bit error rate is 13%, with an error in more than 50% of the columns, which is excessive for a reliable Bloom filter transformation that needs stable column bits. While analysing the issue in further depth, it was found that the Bloom filter reliably looked up the correct templates for genuine queries but failed to achieve a separable score distribution. Therefore, the Bloom filter indexing might not be feasible when used on its own, although it performs well in a serial combination approach. The best biometric performance was recorded at t/T = 31/64, resulting in a workload of W ≈ 7.7 × 10^5 and a relative workload of ≈ 3.1%.

CPCA-Tree

In its basic implementation, the CPCA-Tree surpasses the basic Bloom filter indexing in both the FNIR-to-FPIR trade-off and the EER. It achieves an EER similar to those of the naïve binary QSML-CPCA and the naïve QSML-CPCA. Thus, the CPCA-Tree indexing approach reaches a biometric performance similar to the naïve approaches, albeit with a much lower workload of W ≈ 6.8 × 10^5, which corresponds to a relative workload of 1.7%. Therefore, if a serial combination approach is not desired due to its complexity, the CPCA-Tree is a good compromise between complexity, workload and biometric performance.

Fusion of Concepts

As already mentioned in the experiment description, the fusion of concepts combines the serial combination of the (PQ)SML and the indexing schemes, following the scheme presented in Sect. 9.2.5. In the first run of the experiment, X_b was extracted from the real-valued QSML, X_f was the real-valued QSML, and out of the t selected trees only one (rank-1) template was selected for the shortlist. While this did not affect the biometric performance of the CPCA-Tree indexing, the Bloom filter indexing surpasses the biometric performance of the CPCA-Tree indexing approach at lower FPIR with the rank-1 serial combination scheme; however, it could not catch up at higher FPIR. Using a higher pre-selection rank for the Bloom filter indexing scheme did not result in a higher biometric performance. In these experiments, the pre-selection rank is set equal to the number of searched trees t. At first glance, the results of the higher pre-selection rank experiments for the CPCA-Tree indexing do not deviate much from the rank-1 experiments; only the EER is slightly lower. Note, however, the number of searched trees t: with a higher rank, a comparable biometric performance is achieved while traversing fewer trees. This is an important property for scaling to large-scale databases. For medium-scale databases, the overhead introduced by the additional floating-point comparisons when comparing the query with the templates in the shortlist would void the workload reduction achieved by the reduction of traversed trees. Furthermore, the experiments using a real-valued pre-selection/real-valued decision achieved a higher biometric performance than both the absolute-valued pre-selection/real-valued decision and the absolute-valued pre-selection/absolute-valued decision.
Therefore, the statement of Sect. 9.2.2 that the absolute-valued SML is better suited for lookup while the real-valued SML yields more distinctive comparison scores does not apply to the binary representation of the absolute-valued (PQ)SML. The workloads recorded in this experiment are consolidated in Table 9.3, and the DET curves are shown in Fig. 9.11.

Discussion

Most results have already been discussed in the previous sections. Finally, at least three properties of a new biometric deployment have to be considered when choosing one of the presented approaches: scalability, complexity and biometric performance. If a system that is simple to implement is desired, the CPCA-Tree indexing is recommended: it is easy to implement and achieved a biometric performance comparable with that of the competing approaches. Conversely, if the implementation complexity is less of an issue, scalability and biometric performance have to be weighed against each other. In terms of scalability, the rank-n serial combination is the recommended approach, as it achieved a biometric performance comparable with that of the other approaches at the smallest number of traversed trees (i.e. the smallest computational workload). Regarding biometric performance, the rank-1 serial combination real/real indexing scheme achieved the best results. Table 9.4 summarises the rating of the best-performing configuration of each approach from best (++) to worst (−−), with gradations of good (+), neutral (o) and bad (−). To benchmark the different indexing methods and configurations deterministically, the Euclidean distance between the baseline operating point (EER = 5.5%, relative workload = 1%) and the best-performing configuration of each approach, as shown in Eq. 9.11, can be used. The smaller this distance for an approach, the closer its operating point is to the baseline operating point, whereby smaller is preferable. Choosing this baseline operating point instead of the optimal operating point (EER = 0%, relative workload ≈ 0%) shifts the emphasis of the distance towards the performance of the indexing schemes rather than the performance of the baseline system. The data of Table 9.5 is visualised as a scatterplot in Fig. 9.12. Note that the naïve PQSML system is not plotted, since its relative workload of 100% would render the y-axis scaling of the plot impractical.

Summary

Vascular patterns are an emerging biometric modality with active research and promising avenues for further research. With the rising acceptance of biometric systems, increasingly large-scale biometric deployments are put into operation. The operation of such large deployments yields an immense computational load. In order to maintain a good biometric performance and acceptable response times, and to avoid frustrating their users, computational workload reduction methods have to be employed. While there are many recognition algorithms for vascular patterns, most of them rely on inefficient comparison methods, and hardly any computational workload reduction approaches for vein data can be found. A recently published biometric indexing approach based on Bloom filters and binary search trees for large-scale iris databases was adapted for vascular patterns. In order to further develop this indexing approach, the vascular pattern skeletal representation was extracted from the raw palm vein images, and the minutiae (the endpoints and bifurcations) of the extracted vascular pattern were then transformed using a Fourier transformation-based approach originally presented for the fingerprint characteristic.
By transforming the floating-point representation yielded by the Fourier transformation into a binary form, it is possible to apply the Bloom filter indexing. It has been demonstrated that the Bloom filter indexing system is capable of achieving a biometric performance close to the naïve baseline, while reducing the necessary workload by an additional ≈37% on top of the workload reduction achieved with the CPCA compression and binarisation. Some of the approaches used by the Bloom filter in [15] were not feasible, and the fuzziness of the vascular pattern prevented a higher workload reduction without losing too much biometric performance. However, the most important approaches have been successfully applied, and thus the system appears to be scalable in terms of workload reduction, biometric performance and enrollees. An additional, less complex biometric indexing approach, merely using a reduced form of the binary Fourier transformation representation and binary search trees, has also been presented. It adopts most of the workload reduction strategies that are used for the Bloom filter indexing approach and achieved a better biometric performance with an only slightly lower computational workload reduction (compared to a naïve implementation using the reduced binary Fourier representation) of ≈36%. Since the presented approach follows the same theory and implementation as the binary search trees of the Bloom filter indexing, it also appears to be scalable in terms of workload reduction, biometric performance and enrollees. The respective advantages and disadvantages of the two indexing methods were outlined based on the results from the previous sections. It has been shown that the CPCA-Tree achieves good performance with less stable templates than the Bloom filter. However, it is to be expected that the Bloom filter will outperform the CPCA-Tree approach with more stable templates. Furthermore, the potential for computational workload reduction is much higher using the Bloom filter-based method. In both systems, the overall workload is reduced to an average of 3% compared to the baseline of the naïve implementation using the Fourier representation. All approaches used are perfectly implementable in either multi-threaded or parallel environments. The presented indexing approaches are well suited to run in multiple threads, yielding hardly any overhead. Furthermore, the data representation used can be efficiently computed and compared with SIMD instructions and intrinsics, whereby neither computation nor comparison relies on jumps or conditions. Therefore, it is perfectly suited for highly parallel computation on GPGPUs or many-core CPUs, hence utilising the maximum potential of the system's hardware. The workload reduction approaches achieved very promising results, which were doubtless limited by the biometric performance of the base system. It is to be expected that a higher biometric baseline performance enables a higher workload reduction: with more stable templates, a more robust indexing can be achieved, thus further reducing the workload. Several early experiments and approaches in [25] already achieved a significant biometric baseline performance gain (EER < 0.3%), which will be used in future work. Since the base system achieved a very high biometric performance for fingerprints, adapting the workload reduction approaches to the fingerprint modality is also a subject of future work.
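For concreteness, the workload bookkeeping of Eqs. 9.9 and 9.10 used throughout the results can be reproduced in a few lines; the numbers below are illustrative placeholders, not the chapter's measured values.

    def workload(N, p, C):
        """Eq. 9.9: number of enrolled subjects N, penetration rate p, and
        cost C of a single one-to-one comparison (e.g. bits compared)."""
        return N * p * C

    # Illustrative values only (not the chapter's measurements):
    W_baseline = workload(N=256, p=1.00, C=256 * 20)  # exhaustive binary search
    W_indexed = workload(N=256, p=0.03, C=256 * 20)   # indexed lookup, 3% penetration
    print(f"relative workload: {W_indexed / W_baseline:.1%}")  # Eq. 9.10 -> 3.0%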
Finally, it should be noted that there is a lack of publicly available large (palm) vein datasets (with more than 500 palms) suitable for indexing experiments. Most datasets comprise only 50-100 subjects (100-200 palms). In order to fairly and comprehensively assess the computational workload reduction and scalability of indexing methods, large-scale data is absolutely essential. As such, entities (academic, commercial and governmental alike) that possess or are capable of collecting the requisite quantities of data could share their datasets with the academic community, thereby facilitating such evaluations. Another viable option is an independent benchmark (such as FVC Indexing [6], IREX one-to-many [4] and FRVT 1:N [5] for fingerprint, iris and face, respectively), which could also generate additional interest (and hence research) in this field from both the academic and the commercial perspective. Lastly, the generation of synthetic data (e.g. finger veins [19]) is also a possibility, although on its own it cannot be used as a substitute for real large-scale data. Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Can we detect conditioned variation in political speech? Two kinds of discussion and types of conversation

Previous work has demonstrated that certain speech patterns vary systematically between sociodemographic groups, so that in some cases the way a person speaks is a valid cue to group membership. Our work addresses whether or not participants use these linguistic cues when assessing a speaker's likely political identity. We use a database of speeches by U.S. Congressional representatives to isolate words that are statistically diagnostic of a speaker's party identity. In a series of four studies, we demonstrate that participants' judgments track variation in word usage between the two parties more often than chance, and that this effect persists even when potentially interfering cues such as the meaning of the word are controlled for. Our results are consistent with a body of literature suggesting that humans' language-related judgments reflect the statistical distributions of our environment.

Introduction

What can you tell about someone who addresses a group of people as "you guys" versus "yinz," or someone who stresses the vowel sound in the word "aunt" or doesn't pronounce the "r" in "car" [1]? Socially conditioned variation refers to systematic and idiosyncratic shifts in the language used by members of a particular group [2]. Speech patterns that exhibit socially conditioned variation can be used to identify members of a group [3-7]. For instance, in Glasgow, how a person pronounces the letter "T" reliably indicates that person's age [7,8], and in New York the pronunciation of "r" reveals a number of sociodemographic attributes [5,9]. While language is a vehicle for explicitly-constructed semantic content, structural and systematic variation in language also conveys information about a speaker's environment and past experiences.

But do listeners take advantage of this variation as a source of social information? Can we learn, without being explicitly taught, to associate glottal stops with younger speakers [8], longer words with male speakers [6,10], and the phrase "yinz" with Pittsburghers [11]? Recovery of statistically regular patterns is an important part of language acquisition [12-14], suggesting that such learning may be possible. Previous work has shown that the relative frequency of linguistic signals can indeed be used to discriminate between members of different demographic groups [6,15,16]. More generally, people associate certain "linguistic profiles" with members of different communities and cultural backgrounds, although these profiles can reflect misleading stereotypes as well as systematic variation in speech patterns [17].

Our work examines whether people use socially conditioned variation as a cue to a particular form of social identity: political identity. In particular, we investigate whether or not participants respond to the relative frequency of linguistic signals when categorizing speakers as Democrats or Republicans.

Throughout this paper, we distinguish between the conditioned variation in usage and the sense of a word. A word's sense is its meaning, often operationalized as its dictionary definition [18-20]. Cognitively, we think of a word's sense as contributing to an inference drawn from the concept conveyed by the word.
For example, upon overhearing a politician using a word that conveys a money-related concept, such as "financial" or "monetary," a listener could make an inference about whether the politician is more likely to be a Democrat or a Republican on the basis of the degree to which they associate each party with the concept of money. However, even when the concept conveyed is held constant, the listener can make an even more informed guess on the basis of the specific word the speaker chose: Did the listener overhear "financial" or "monetary"? Although the two words convey the same concept and have very similar definitions, according to our data Democrats use the word "financial" more frequently, while Republicans use the word "monetary" more frequently. In other words, if they overheard the politician say "financial," the listener should infer that the speaker is more likely to be a Democrat, while if they overheard "monetary," the listener should infer that the speaker is more likely to be a Republican. In this case, the listener would be relying on the conditioned variation in usage of each word.

The phrase "conditioned variation" generally does not convey the nature of the conditioning variable. For example, variation in the pronunciation of the final consonants of words is often a function of the phonetic features they precede or follow [2]. Following Samara, Smith, Brown, and Wonnacott (2017), we refer to linguistic variation that can be anticipated on the basis of demographic or social characteristics of the speaker as socially conditioned variation [2]. We use the term politically conditioned variation to refer to linguistic variation that can be anticipated on the basis of the speaker's political identity. The driving question of our work is: Without relying on their sense of what the two parties stand for, can people use politically conditioned variation to make accurate categorization judgments, in other words, learn to pick up on the political analog of "yinz"?

Such sensitivity could, for instance, help people identify members of their political in-group [25]. Knowledge of our sensitivity to politically conditioned variation could help us better understand the kinds and validity of cues we use to make implicit judgments about others (e.g. contributors to our first impressions [26]), and the mechanisms of group formation and appeal (e.g. the effectiveness of techniques such as "dog-whistle politics"; see the general discussion). However, the fact that listeners often do not know the political affiliation of a speaker might also make it nearly impossible for them to acquire associations between partisan identity and conditioned variation in speech in the first place. While political identity does correlate with observable demographic characteristics, such features are noisy cues of partisanship, and observers may attribute variation in speech patterns to a more salient social category. Thus, while people have been shown to be sensitive to other sources of socially conditioned variation [2,6,15,16], it remains an open question whether this sensitivity extends to ideological categories.

Detecting signals using NLP

With the advent of readily-accessible, large-scale datasets, many researchers have attempted to isolate linguistic variation conditioned on a variety of social identities [27,28]. Diermeier, Godbout, Yu, and Kaufmann (2012) [29], Jensen, Naidu, Kaplan, and Wilse-Samson (2012) [30] and Gentzkow, Shapiro, and Taddy (2019) [31] also investigate variations in speech patterns between the two major U.S. political parties. Diermeier et al.
(2012) build a support vector machine (SVM) classifier of a speaker's political ideology and perform post-hoc feature analysis to identify the words that were especially informative in the SVM's classification decisions [29]. Jensen et al. (2012) measure the partisanship of trigrams (contiguous sequences of three words) as the correlation between the frequency with which a speaker utters a given trigram and the speaker's political identity [30]. Gentzkow et al. (2019) posit a generative model of speaker phrase choice, and derive a measure of phrase-level partisanship from components of the parameterized model [31].

The aforementioned models have provided crucial convergent evidence that there is reliable and detectable politically conditioned variation in language use. However, the question at hand is whether people are capable of picking up on that signal. To test this, we turn to the work of Preoţiuc-Pietro, Xu, and Ungar (2016) [6], who find that human raters are able to correctly detect sociodemographic characteristics of speakers in 70% of cases. Preoţiuc-Pietro et al. (2016) use a log-odds measure (see the following section) to isolate linguistic variation between speakers of various demographic groups (e.g. age and gender) [6]. Their method provides a roadmap for how best to extend this investigation into the domain of U.S. political ideology. Foreshadowing our results, we find sensitivity that is considerably less extreme than the findings of Preoţiuc-Pietro et al. (2016) [6], highlighting that the range of factors shaping the correspondence of our judgments with statistical variation in word usage is far from completely understood.

Method

We used the congressional-record project [32] to access the transcripts of all proceedings in the U.S. House of Representatives between 2012 and 2017, made publicly available as part of the U.S. Congressional Record. The Congressional Record was also the basis of the results reported in the three studies on conditioned variation in political speech summarized above [29-31]. One advantage of the Congressional Record is that it is systematically formatted, allowing us to more easily label the text by matching speakers with entries in databases of members' party affiliations. Another advantage the Congressional Record has over other corpora of political speech, such as transcripts of the U.S. presidential debates, is that it is substantially larger and has a roughly equal balance of Republican and Democratic speech.

Our corpus consisted of data from each of the five most recent calendar years. (We began data collection in 2018, meaning that 2017 was the last complete calendar year before our studies were run.) A time window that was too short would not have yielded enough data for us to recover reliable statistical indicators of linguistic divergence. On the other hand, a time window that was too large would have obscured recent divergences, as politically conditioned speech patterns drift over long periods of time [30,31]. We chose a five-year window on the basis of our judgment that this time frame would provide a reasonable sample of data with the power to detect and isolate contemporary patterns of politically conditioned variation.

We first assembled all words spoken by a member of Congress identified as a Democrat or a Republican. While "Republican" and "Democrat" do not exhaust the set of possible political identities, they overwhelmingly dominate the party affiliations of congressional representatives. (At the time of writing, the U.S.
House of Representatives is composed of 232 Democrats, 196 Republicans and 1 Independent [33].) We then coerced all text in this initial corpus to lowercase, and excluded words we determined were unlikely to reflect meaningful or generalizable variation in word usage, e.g. common prepositions and the names of other sitting members of Congress. The full list of exclusion criteria is included in S1 Appendix. The corpus we used for analysis contains 13,523,319 instances of 16,218 unique words (6,924,484 instances spoken by Democrats and 6,598,835 instances spoken by Republicans).

Language can be analyzed and perceived at many different scales of analysis, e.g. phonemes, word forms, phrases and sentences [16,34,35]. We conducted our analyses at the word level primarily for feasibility: Word boundaries are much easier to detect by natural language processing algorithms than the boundaries of phonemes or phrases. It is also interesting to note that some argue that the word is the appropriate unit of analysis in linguistic change and language learning [16,34,36].

Measures of relative frequency. Adapting the approach of Preoţiuc-Pietro et al. (2016) [6], we calculate exact measures of the relative frequency with which a word was spoken by a Democrat [Republican]. We use the log odds that the word was spoken by a member of a given party, shown in Eq 1, as our measure of the conditioned variation exhibited by a word:

logodds_R(w) = log(P(w|R) / P(w|D))    (Eq 1)

The conditional probability terms are calculated directly as the empirical probability that a word w was spoken by a Republican [Democrat] according to our corpus. For example, if our corpus of Republican speech contained only the words "quick," "brown" and "brown," and the corpus of Democratic speech contained only the words "brown," "fox" and "fox," then P("brown"|R) = 2/3, P("brown"|D) = 1/3, and logodds_R("brown") = log((2/3)/(1/3)) = log 2. To avoid the discontinuities that arise when some probabilities are 0, we incorporated add-one smoothing in our measurements, i.e. imputed one "phantom" observation of each word in both the Republican and Democratic distributions.

8,345 of the 16,218 unique words in our corpus had a corresponding logodds_R > 0. We refer to these as Republican words. The remaining 7,873 words had a logodds_R < 0. We refer to these words as Democratic words. The mean logodds_R value was .02 (SE = .01). This was significantly greater than 0 (t(16217) = 1.96; p = .05), indicating that the distribution of logodds_R was shifted to the right of zero: In the absence of other information, odds were slightly higher that a word was spoken by a Republican. Fig 1 shows the distribution of logodds_R values.

[Fig 1. Distribution of the values of logodds_R, our measure of how much more likely a word is to be said by a Republican than by a Democrat, for each word in our corpus (logodds_D = −logodds_R). Republican words (logodds_R > 0) are shown in red, Democratic words (logodds_R < 0) in blue. The black line shows the approximate density of the distribution. https://doi.org/10.1371/journal.pone.0246689.g001]

The logodds_R measure also closely resembles an element of a traditional model of behavioral response to perceptual inputs. In signal detection theory, the optimal detection threshold is the likelihood ratio of signal to noise: the probability of the stimulus conditional on a signal being present divided by the probability of the stimulus conditional on no signal being present [37,38].

Validating logodds_R as a measure of politically conditioned variation. Implicit in our operationalization of politically conditioned variation is the assumption that the logodds_R measure calculated from speech recorded in the Congressional Record captures differentiating patterns in political speech more generally. While we assume that most of our participants have been exposed to political speech by members of both parties, we do not assume that they are regularly exposed to speech on the floor of the U.S. House of Representatives.
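Before turning to that cross-corpus check, here is a minimal sketch of the logodds_R computation with add-one smoothing, written in Python (the language in which the analyses were conducted). The function and variable names are illustrative and are not taken from the authors' released code.

```python
import math
from collections import Counter

def logodds_R(word, rep_counts, dem_counts):
    """Log odds that `word` was spoken by a Republican rather than a
    Democrat (Eq 1), with one 'phantom' observation of every vocabulary
    word added to each party's distribution (add-one smoothing).
    Positive values mark Republican words, negative values Democratic
    words."""
    vocab_size = len(set(rep_counts) | set(dem_counts))
    p_w_rep = (rep_counts[word] + 1) / (sum(rep_counts.values()) + vocab_size)
    p_w_dem = (dem_counts[word] + 1) / (sum(dem_counts.values()) + vocab_size)
    return math.log(p_w_rep / p_w_dem)

# The toy example from the text.
rep_counts = Counter(["quick", "brown", "brown"])
dem_counts = Counter(["brown", "fox", "fox"])
print(logodds_R("brown", rep_counts, dem_counts))  # > 0: leans Republican
```

Without smoothing the toy example yields log 2, as above; the phantom observations merely shrink each estimate slightly towards zero and keep logodds_R finite for words that one party never uses.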
In this section, we show the extent to which the direction of the political signal (whether the word is more often spoken by a Republican or a Democrat) estimated from the Congressional Record cross-validates to a more public-facing corpus of political speech: the U.S. presidential debates. We accessed transcripts of all the debates held as part of the 2012 and 2016 presidential election cycles (general and primary, presidential and vice presidential, and main and undercard) from the American Presidency Project [39], and pre-processed them in the same way we pre-processed the raw text from the Congressional Record.

In total, 2,408 words (14.85% of the vocabulary from the Congressional Record corpus) appeared in both the Congressional Record and presidential debates corpora. The correlation between the logodds_R values calculated from the Congressional Record and from the debates is .33 (t(2406) = 17.412; p < .01). 1,421 of these words (59.01%; SE = 1.00%) have the same estimated polarity (the direction of the sign of the associated logodds_R value) in the two corpora. An exact one-sided binomial test shows that this is significantly greater than chance (p < .01). Overall, there is systematic variation in the speech of Republicans and Democrats that is present in a variety of contexts. While the correlation between the logodds_R values in the two corpora is highly significant, it is admittedly moderate. While we cannot completely explain the sources of divergence between the distributions in the two corpora, S1 Fig shows that the more politically conditioned variation a word exhibits, the more likely that measure of politically conditioned variation is to generalize across the two corpora. In other words, the stronger the political signal, the more likely it is to operate in both contexts.
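The two validation statistics just reported (the correlation between the corpora's log-odds values and the exact binomial test of polarity agreement) can be computed along the following lines. This is a sketch with simulated stand-in data, not the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-ins for the logodds_R values of the 2,408 words shared by the
# Congressional Record and the presidential debates corpora.
lo_congress = rng.normal(size=2408)
lo_debates = 0.33 * lo_congress + rng.normal(size=2408)

# Correlation between the two sets of log-odds values.
r, p = stats.pearsonr(lo_congress, lo_debates)

# Exact one-sided binomial test: do the estimated polarities (signs)
# agree in the two corpora more often than the 50% expected by chance?
n_agree = int(np.sum(np.sign(lo_congress) == np.sign(lo_debates)))
test = stats.binomtest(n_agree, n=lo_congress.size, p=0.5,
                       alternative="greater")
print(r, p, n_agree, test.pvalue)
```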
Study 1: Testing alignment of judgments with the direction of politically conditioned variation

Before we can test whether human judgments align with politically conditioned variation when word sense is held constant, we first have to determine whether people's judgments align with politically conditioned variation at all. In Study 1, we test this basic intuition by presenting participants with words that were statistically more likely to have been said by a Democrat or a Republican, and asking them to make judgments about the most likely party identity of the speakers of those words. All studies reported in the following sections were approved by the Carnegie Mellon University Institutional Review Board under IRB IDs STUDY2018_00000167 and STUDY2017_00000367. We obtained electronic consent from all participants.

Participants

201 subjects completed Study 1 on MTurk. Our use of MTurk as a recruiting tool was driven by two primary considerations: i) convenience and ii) access to a more representative population than we would achieve with in-person samples (even with a local non-university sample, well below 10% of our population would be Republican given the demographics of the city in which we conducted our research). It is worth noting that scholars have documented disadvantages to using MTurk as a recruitment tool, including the possibility of non-naïvety and low-quality responses (see Chandler, Mueller, and Paolacci (2014) [40] for a discussion of these issues), although our use of attention and quality checks should have mitigated this to a large degree (see details below). Moreover, a number of scholars have shown that MTurk can yield reliable data [41,42] (our studies were completed before the "MTurk Crisis" [43] began affecting data quality). However, we cannot rule out the possibility that non-diligent responders corrupted our sample. To the extent that this is the case, we believe that would only serve to reduce our power by introducing noise, making our results conservative estimates of the population effect.

After excluding 54 participants for failing the attention check (described in the following section), our analyzed sample contains 147 participants, including 61 self-identifying Democrats and 38 self-identifying Republicans. These exclusions do not affect our main results. In this and all subsequent studies, we restricted participant eligibility to those of voting age residing in the U.S. Participants in the analyzed sample had a mean self-reported age of 37.66 (SE = .84), and included 74 men and 72 women (1 participant did not report their gender identity). 81.63% of participants reported having voted in the 2016 presidential election.

Methods

After completing a demographics questionnaire, participants were presented with a list of words and asked to "…estimate how likely it is that the word is spoken either by a Democrat or by a Republican [Republican or by a Democrat]" (the full instructions are included in S2 Appendix). The words "Democrat" and "Republican" were presented in a random order. Participants rated each word on a 6-point scale, from "I am almost certain the speaker is a Democrat" (which we coded as 1) to "I am almost certain the speaker is a Republican" (which we coded as 6). Each page of the survey contained 20 items. For approximately half of the participants the presentation order of response options was reversed.

For Study 1, we wanted to use stimuli that both exhibited significant conditioned variation (had a logodds_R with a large magnitude), and were spoken frequently enough by the associated party that participants were likely to have been exposed to the variation in usage. For each word w, we calculated the partial Kullback-Leibler divergence (PKL), a measure that combines the logodds_R with the word's probability of occurrence (interested readers can consult Klingenstein, Hitchcock, and DeDeo (2014) [44] for further details). Words with a high PKL_D are both more likely to be spoken by Democrats and are spoken frequently by Democrats, while words with a high PKL_R are both more likely to be spoken by Republicans and are spoken frequently by Republicans. In other words, PKL isolates strong and frequent statistical signals of party identity. As stimuli for Study 1, we selected the 39 words with the highest PKL_D and the 39 words with the highest PKL_R. (Elements of the pre-processing pipeline have changed slightly since these stimuli were selected. All analyses were run using the versions of the metrics calculated as described in the previous section.)
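As a rough illustration of how such a score can be formed, the sketch below weights each word's log-odds by the probability with which the corresponding party uses it, which is the per-word term of a Kullback-Leibler divergence. Klingenstein et al. (2014) [44] give the exact formulation used in the paper, which may differ in detail; the probabilities below are made up.

```python
import math

def pkl_terms(word, p_given_rep, p_given_dem):
    """Per-word partial KL scores: the word's log-odds weighted by the
    (smoothed) probability that the favored party utters it. A high
    pkl_rep marks a word that is strongly AND frequently Republican; a
    high pkl_dem, strongly AND frequently Democratic."""
    lo_rep = math.log(p_given_rep[word] / p_given_dem[word])
    pkl_rep = p_given_rep[word] * lo_rep
    pkl_dem = p_given_dem[word] * (-lo_rep)
    return pkl_rep, pkl_dem

# Illustrative (made-up) smoothed probabilities.
p_rep = {"freedom": 4e-4, "undocumented": 2e-5}
p_dem = {"freedom": 2e-4, "undocumented": 3e-4}
print(pkl_terms("freedom", p_rep, p_dem))       # Republican-leaning signal
print(pkl_terms("undocumented", p_rep, p_dem))  # Democratic-leaning signal
```

Stimulus selection then amounts to ranking the vocabulary by each score and keeping the top 39 words on either side.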
Three words were excluded from the analysis of Study 1, since in subsequent pre-processing they were considered to be Congressional-specific stopwords according to the criteria identified in S1 Appendix: affordable, trump and obama. The reported analyses include 37 Democratic words and 38 Republican words. Two of these words were randomly reselected to serve as attention checks. If, in response to either of these two attention-check stimuli, a participant gave a rating that differed by more than one point from their original rating on that stimulus, we removed them from our analysis. In total, each participant was presented with 78 stimuli chosen on the basis of the partisanship of the stimulus (39 Democratic words and 39 Republican words), plus the two words included as attention checks. The full list of stimuli used for all studies is included in S2 Appendix.

All analyses reported in this paper were conducted in the programming languages Python or R [45-48]. We relied on several packages for statistical analyses and visualization, including but not limited to SciPy [49], scikit-learn [50] and Plotly [51]. All participant data and code for all of our analyses can be found at https://github.com/sabjoslo/talking-politics.

Results

The mean judgment on the Republican stimuli was 3.81 (SE = .08), just above the indifference point of the scale of 3.5 (participants could express maximum indifference with responses of "I am unsure but think the speaker is a Republican" or "I am unsure but think the speaker is a Democrat," which we coded as a 4 and a 3, respectively). A one-sided, one-sample t-test led us to reject the null that this value was less than or equal to the indifference point (t(5580) = 3.77; p < .01). Standard errors reported and used by the inferential tests in this subsection are clustered at the participant and item level using the method in Arai (2011) [52]. Unless stated otherwise, this is the case for results reported in the main results sections of all studies. The mean judgment on the Democratic stimuli was 3.30 (SE = .12), just below the indifference point. A one-sided, one-sample t-test led us to reject the null that this value was greater than or equal to the indifference point (t(5435) = −1.66; p = .05). A one-sided two-sample t-test of the difference in means also led us to reject the null hypothesis that the mean judgment of the Republican words was equal to the mean judgment of the Democratic words (t(11015) = 3.50; p < .01).

The results of the inferential tests reported thus far demonstrate that in this experiment, judgments do align with the direction of politically conditioned variation. To understand the relative strength of the effect, we calculated a standardized effect size [53]. Because our data included multiple judgments from each participant, a measure like Cohen's d would be difficult to interpret. We instead used the method of Rouder, Morey, Speckman, and Province (2012) [54], as demonstrated by Westfall (2016) [55]: We used a mixed modeling framework to calculate an effect size, and standardized this effect size by the standard deviation of the residuals of this model. More specifically, we estimated a linear mixed model of the ratings data: We considered the set of each of the individual ratings to be the endogenous variable. We included a fixed effect on a dummy variable indicating whether the word was Democratic or Republican, and considered the estimated coefficient on this variable as our effect size. In addition, we included random effects corresponding to individual participants and items. (We incorporated participant-level random intercepts, participant-level random slopes on the fixed effect and item-level random intercepts. Since items do not overlap between the groups indicated by our main independent variable, i.e. the sets of Democratic and Republican words are mutually exclusive, it would have been meaningless to specify item-level differences in the fixed effect.)
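A sketch of this effect-size computation using statsmodels is given below, run on simulated stand-in ratings. For brevity it fits only the participant-level random intercepts and slopes, leaving out the item-level intercepts the authors also included, and all names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per rating.
rng = np.random.default_rng(1)
rows = []
for participant in range(147):
    bias = rng.normal(0, 0.3)          # participant-specific leniency
    for item in range(75):
        is_rep = int(item < 38)        # 38 Republican, 37 Democratic items
        rating = 3.55 + 0.5 * is_rep + bias + rng.normal(0, 1)
        rows.append({"participant": participant,
                     "is_republican": is_rep,
                     "rating": rating})
df = pd.DataFrame(rows)

# Fixed effect of word partisanship; random participant-level
# intercepts and slopes on that fixed effect.
fit = smf.mixedlm("rating ~ is_republican", df,
                  groups=df["participant"],
                  re_formula="~is_republican").fit()

# Standardize the fixed-effect coefficient by the residual standard
# deviation, following Westfall (2016) as described in the text.
effect_size = fit.params["is_republican"] / np.sqrt(fit.scale)
print(effect_size)
```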
We estimated a standardized effect size of .41, which can be interpreted as the effect of the direction of politically conditioned variation on a listener's judgment of the speaker's political identity. (Using a version of the model that includes just the fixed effect, which is equivalent to estimating the traditional Cohen's d, we calculated a standardized effect size of .36.) We therefore consider the effect of the direction of politically conditioned variation to be of moderate strength [53,56].

Item-level analyses. While clustering standard errors at the participant and item level provides some assurance that our effects are not simply driven by high performance on a single item or an especially discriminating participant, it does not completely rule out these possibilities. In the following two subsections, we describe and analyze sensitivity to the direction of politically conditioned variation at the item and participant level, respectively. In both sections, we consider a judgment to be accurate if it is above [below] 3.5 and the item is more likely to be said by a Republican [Democrat]. Item-level accuracies are computed on the basis of the mean of all judgments collapsed across participants.

50 of the 75 words (66.67%) were accurately classified (24, or 64.86%, of the Democratic words, and 26, or 68.42%, of the Republican words). The average item-level accuracy was 58.62% (SE = 2.33%), which was significantly different from chance performance of 50% (t(74) = 3.70; p < .01). We also ran a Mann-Whitney U test, a non-parametric alternative to the t-test. We ranked all 75 words by their associated mean judgment, and tested against the null hypothesis that the Republican words were as likely to be ranked above the Democratic words as to be ranked below them. The U statistic was 416 (p < .01), leading us to reject the null hypothesis. In summary, the item-level analyses reaffirmed our conclusion that participant judgments aligned with the direction of politically conditioned variation more often than chance. Fig 2 shows the correspondence between the logodds_R of each stimulus and its corresponding average participant rating. In general, words with a higher logodds_R (words that are more likely to be spoken by a Republican) tend to be judged as more likely to be spoken by a Republican.

Participant-level analyses. On average, participants accurately classified 58.62% (SE = .59%) of the items (or about 44 out of 75 words), with most participants (127 out of 147) performing better than chance. This number was significantly different from chance performance (t(146) = 14.66; p < .01). They classified 57.52% (SE = 1.10%) of the Democratic words correctly, and 59.69% (SE = .91%) of the Republican words correctly. Both of these percentages were significantly higher than chance performance. We also defined a measure of participant-level discriminability as the Cohen's d between each participant's judgments on the Republican stimuli and their judgments on the Democratic stimuli.
Recall that higher ratings indicate that the participant judged the speaker as more likely to be a Republican, so a participant with a positive Cohen's d tended to make judgments in the right direction. The mean individual-level Cohen's d is .40 (SE = .03), which is significantly higher than 0 (t(146) = 15.56; p < .01).

Discussion

From Study 1, we concluded that participants' judgments aligned with the direction of politically conditioned variation, and that this effect held for most participants and most items. However, recall that Study 1 did not control for our main theoretical confound: word sense. Studies 2 and 3 examine whether the alignment between human judgment and politically conditioned variation holds even when word sense is controlled for.

Study 2: Controlling for word sense using cosine similarity

In Study 2 we test whether the sensitivity to politically conditioned variation we observed in Study 1 persists when word sense is held constant. We therefore wanted participants to make choices between pairs of words whose corresponding senses were as close as possible. But how does one measure the closeness of two words' senses? Work in linguistics and cognitive science suggests that a word's context is an important contributor to its meaning: If words A and B appear near sets of words C_A and C_B respectively, A and B have similar senses when C_A and C_B are very similar [57,58]. Consider the example given in the introduction: While the words "financial" and "monetary" may not be used concurrently (the phrase "financial monetary policy" sounds redundant and quite awkward), both words will likely co-occur with words like "policy," "markets," and "banks." This perspective can be summed up in a famous quote by Firth (1957): "You shall know a word by the company it keeps" [59].

Distributional semantics (DS) models are computational instantiations of this perspective: DS models build representations of word meaning on the basis of information about how words co-occur in natural language data [60]. One popular DS model is word2vec, which projects each word of its input into a common, lower-dimensional space [61,62]. Words that are closer together in this space are more semantically similar [63,64]. One common way to measure this distance is cosine similarity, or the cosine of the angle between the vector representations of two words. We therefore operationalized the word-sense similarity of two words as their cosine similarity.

Method

We trained a word2vec algorithm using the software package Gensim on the Congressional Record corpus [65]. We took the 5% of words with the highest PKL_D, and the 5% of words with the highest PKL_R. For every pair of Democratic and Republican words in this set, we calculated the cosine similarity between the two words (662,596 pairwise comparisons in total). We then selected the 88 pairs with the highest cosine similarity (excluding pairs that contained words with proper names or acronyms that were not filtered out by the exclusion criteria listed in S1 Appendix). (We had predetermined that we wanted to present participants with 100 word pairs. Ten of the presented word pairs were included to address a separate research question, reported in Sloman, Oppenheimer, and DeDeo (under review) [66], and two were included as part of our attention check, detailed below. This meant 88 of the 100 pairs were left to directly test sensitivity to politically conditioned variation.)
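A sketch of this pipeline with Gensim follows. The corpus here is a toy stand-in and the hyperparameters are illustrative, since the paper does not report the settings used.

```python
from gensim.models import Word2Vec

# Toy stand-in for the pre-processed Congressional Record: one token
# list per speech.
speeches = [["financial", "markets", "policy", "banks"],
            ["monetary", "policy", "banks", "markets"]] * 200

model = Word2Vec(sentences=speeches, vector_size=50, window=5,
                 min_count=1, seed=0)

# Cosine similarity between two words' embeddings, the study's proxy
# for sense similarity.
print(model.wv.similarity("financial", "monetary"))

# Pair selection sketch: score every (Democratic, Republican) candidate
# pair by cosine similarity and keep the most similar pairs.
dem_candidates = ["financial"]   # stand-ins for the top-5% PKL_D words
rep_candidates = ["monetary"]    # stand-ins for the top-5% PKL_R words
pairs = sorted(((model.wv.similarity(d, r), d, r)
                for d in dem_candidates for r in rep_candidates),
               reverse=True)
print(pairs[:88])  # the 88 most sense-similar cross-party pairs
```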
It was possible that the highest cosine similarities among our restricted list of candidate pairs were not especially high in the context of the entire distribution of cosine similarities. To ensure that the constraint that each pair contain one highly Democratic word and one highly Republican word did not effectively impact the degree of sense similarity of the word pairs, we compared the cosine similarity of each item to the cosine similarities of 10,000 randomly selected word pairs from our corpus. The least similar pair, enable/encourage, has a cosine similarity of .65, which is higher than 99.77% of this random sample of cosine similarities.

We exclude responses on one pair from analysis, since the polarity of one of the words was not robust to later development of the list of stopwords (indicated in the list of stimuli in S2 Appendix). This was due to relative differences in the number of words excluded from Democratic and Republican speech (the total numbers of words spoken by Democrats and Republicans that were retained from the corpus after pre-processing are included in the calculation of P(w|D) and P(w|R), respectively). Assuming that cosine similarity is a successful proxy of word sense similarity, the set of stimuli presented to participants contains the 88 word pairs whose word senses are the most similar, subject to the constraint that each pair contains one highly Democratic word and one highly Republican word.

Participants were asked to "[f]or each word pair, please guess which is indicative that the speaker is a Republican [Democrat]" (the full instructions are given in S2 Appendix). Each participant was randomly assigned to select either the word indicating that the speaker was a Republican, or the word indicating that the speaker was a Democrat. Both response and question order were randomized. As before, two items were randomly reselected and included as attention checks. If, for either attention check question, a participant didn't respond to the question or didn't provide the same response as they had the other time they were presented with this pair, they were excluded from analysis.

Results

Participants selected the word whose conditioned variation in usage aligned with their assigned condition 52.04% (SE = 1.94%) of the time (an aligned response occurred when a participant asked to select the word more likely to be said by a Republican [Democrat] did indeed select the Republican [Democratic] word). While performance was slightly better than chance, a one-sided t-test against the null of 50% accuracy did not meet the conventional threshold for statistical significance (t(8351) = 1.05; p = .15). Further analysis showed that the effect was almost entirely driven by participants in the Democratic condition (i.e., participants who were asked to choose the word more likely to have been spoken by a Democrat). These participants (n = 47) selected the Democratic word 53.71% (SE = 2.04%) of the time, which was significantly higher than chance performance (t(4088) = 1.82; p = .03). On the other hand, performance in the Republican condition (n = 49) was slightly but not statistically different from chance (μ = 50.43%; SE = 2.44%; t(4262) = .18; p = .43).

To calculate a standardized effect size, we used the same method as for Study 1. We estimated a linear mixed model of participants' decisions to select the Democratic or Republican word (coded as a 0 and 1, respectively).
The fixed effect was an indicator of whether the participant was in the Democratic or Republican condition (coded as a 0 and 1, respectively). Higher effect sizes thus indicate that selections align with the direction of conditioned variation. We specified participant-level intercepts as random effects (since the fixed effect did not vary within participants, we did not specify random participant-level slopes). We specified both item-level random intercepts and slopes. Using this model, we estimated a standardized effect size of .09. This reflects our interpretation of the results above: The effect of conditioned variation in Study 2 is in the hypothesized direction, but small and of questionable statistical reliability.

Item-level analyses. We computed item-level accuracies by taking the mean of all participant judgments on each item. (Responses from participants in the Republican [Democratic] condition were coded as 1 if the word they selected was Republican [Democratic], and 0 otherwise.) The mean item-level accuracy was 52.03% (SE = 1.89%). In other words, participants selected the "correct" word (i.e. the word consistent with the direction of conditioned variation) for about 45 out of the 87 pairs. A one-sided t-test shows this is not significantly different from chance performance (t(86) = 1.07; p = .14). Fig 3 shows the pairs participants classified most accurately, and the pairs on which participants systematically gave the wrong response. Consistent with our overall finding above, performance was notably higher among participants in the Democratic condition. Among these participants, the average item-level accuracy was 53.67% (SE = 2.00%; about 47 out of 87 word pairs), which was significantly different from chance (t(86) = 1.84; p = .03). Among participants in the Republican condition, the average item-level accuracy was 50.44% (SE = 2.37%), and was statistically indistinct from chance performance (t(86) = .19; p = .43).

Participant-level analyses. We computed participant-level accuracies by taking the mean of participant responses across all items (responses were coded in the same way as for the item-level analyses). The mean participant-level accuracy was 52.03% (SE = .68%; about 45 out of 87 word pairs). A one-sided t-test rejects the null of chance performance (t(95) = 3.00; p < .01). We again found that participants in the Democratic condition performed significantly better than participants in the Republican condition: Participants in the Democratic condition had an average accuracy of 53.68% (SE = .88%; about 47 out of 87 word pairs), which was significantly different from chance (t(46) = 4.17; p < .01). Participants in the Republican condition had an average accuracy of 50.45% (SE = .98%), which was statistically indistinct from chance performance (t(48) = .46; p = .33).

[Fig 3. The logodds_R of the Republican word plotted against the logodds_R of the Democratic word comprising each item. Markers are colored and sized by the mean participant accuracy. Large black dots indicate that, when presented with this word pair, participants were usually able to identify which word was Republican and which was Democratic. Small white dots indicate that participants were systematically wrong about which word was Republican and which was Democratic. The five pairs on which participants attained the highest average accuracy and the lowest average accuracy are labeled using the format Republican word/Democratic word.
Highest accuracy: illegal/undocumented, man/woman, father/mother, freedom/democracy and aliens/immigrants.]

Discussion

In both Studies 1 and 2, participant responses aligned with the direction of politically conditioned variation more often than chance. However, in Study 2, when we attempted to control for word sense, the effect was small and not always significant. In particular, we found that participants asked to select the word more likely to have been spoken by a Republican performed marginally higher than but not statistically differently from chance. One possibility was that this reflected the partisan skew of our sample: Our sample contained almost three times as many self-identifying Democrats as Republicans. If participants do indeed use the statistical distributions of language in their environment, a skew in the statistics of that environment would lead to a skew in their cognitive representations of those distributions. Participants who are selectively exposed to speech from members of their own party may be better at recovering the signals from that party; i.e., Democrats may be particularly adept at recognizing Democratic words. However, when we broke down responses in the Democratic condition by self-reported party affiliation, we found that Republicans actually performed better than Democrats. Members of the two parties selected the correct word an average of 54.26% and 52.25% of the time, respectively.

Another possibility was that there were systematic differences in the degree of politically conditioned variation in the Democratic and Republican words. If the Democratic words exhibited a higher degree of conditioned variation, they could have been easier for participants to recognize. However, this was inconsistent with the fact that the mean logodds_R value of the Republican words was actually larger in magnitude than the mean logodds_R value of the Democratic words (.64 and -.58, respectively). The asymmetry is puzzling to us, and we cannot rule out the possibility that it reflects a latent response bias that leads participants to select the Democratic word for reasons unrelated to politically conditioned variation. The way Democrats and Republicans speak likely differs in other systematic ways. For example, voters in urban areas are more likely to be Democrats [67], and we expect socially conditioned variation indicative of regional association often aligns with politically conditioned variation. A general bias to, e.g., select the word whose use or sense conveyed that the speaker was from an urban area would result in apparently higher sensitivity to politically conditioned variation among participants in the Democratic condition.

Relatedly, a limitation of Study 2 is that our use of word2vec to control for word sense was not as effective as we had initially hoped. Many of the items used were not sense equivalents as we had intended. While word pairs with a high cosine similarity tend to be semantically related, they are not always synonymous. For example, many of the items were antonyms (e.g. evening/morning and bad/good) or words that both conveyed a quantity of something on different orders of magnitude (e.g. billion/trillion and months/years). Study 3 was designed to more rigorously establish sense equivalence between items. In addition, Study 3 incorporated the same task format as Study 1, which should eliminate any general response bias engendered by forced choice driving the results of Study 2.
Study 3a

Participants. 202 subjects completed Study 3a on MTurk. After excluding participants who failed a version of the instructional manipulation check [68], our analyzed sample includes 174 participants, including 77 self-identified Democrats and 45 Republicans. These exclusions do not affect our main results. Participants in the analyzed sample had a mean self-reported age of 40.35 (SE = .87), and included 88 men and 85 women (1 participant reported their gender identity as "other"). 84.80% of participants reported having voted in the 2016 presidential election. Participants completed the demographics questionnaire before the main survey.

Methods. Our goal in Study 3a was to more completely isolate politically conditioned variation from variation in word sense. Because we wanted to ensure that the words we presented to participants were characterized by robust and generalizable politically conditioned variation, we restricted candidate stimuli to words for which the sign of logodds_R was invariant to whether it was calculated using the Congressional Record or the presidential debates corpus (this step limited the possible stimuli to the 1,421 unique words that were present in both the Congressional Record and debates corpora, which is 8.76% of the original Congressional Record corpus). We identified 530 Democratic words and 891 Republican words that are characterized by generalizable politically conditioned variation according to this criterion (37.30% and 62.70% of the candidate stimuli, respectively). We used these sets of words to construct pairs of sense equivalents: pairs of words that, as in Study 2, contained one Democratic word and one Republican word matched on sense. For Study 3, we relied on human coding of sense equivalence, detailed below, rather than the cosine similarity of the word pairs.

To extract sense equivalents, we manually sorted through this list of words, identified words which have familiar synonyms, and recorded these synonyms. To identify synonyms, one of the authors and a research assistant went through the list of words and consulted online resources, such as https://www.thesaurus.com/ and https://www.google.com/. We then further refined this list to include only words paired with synonyms that had the opposite partisan polarity in both corpora (in other words, if the target word was Republican, we only considered synonyms that were Democratic), and words that were not homonyms (excluding words like run, the meaning of which depends on whether the person referred to is running a marathon, running a meeting, running out of time or running for office). This left us with 26 pairs of sense equivalents in which one word was Republican and the other Democratic (e.g. kinds (Democratic) vs. types (Republican) and discussion (Democratic) vs. conversation (Republican)).

All participants saw one word from each pair, and rated those 26 words on the same scale used in Study 1. Based on the results of Studies 1 and 2, we chose the single-item rating format as opposed to the two-alternative forced choice format in order to maximize our power to detect an effect. Another advantage of this task format is ecological validity: When we encounter words "in the wild," we usually make passing judgments, rather than forced choices. We also gathered human ratings of the "substitutability" of each word.
On the basis of preregistered criteria (described in S3 Appendix), we excluded items whose constituents could not naturally replace each other in most ecological contexts. On this basis, we excluded one item from our analysis (fear/terror), leaving us with 25 items.

To further ensure the robustness of the direction of the partisan signal in our items, we calculated 95% bootstrapped confidence intervals of the logodds_R values of each word. To do this, we generated 100 artificial corpora. Each artificial corpus was created by randomly sampling speeches (with replacement) from the Congressional Record corpus. This allowed us to calculate 100 values of logodds_R for each word, such that each value was calculated using data from a different artificial corpus. We determined 95% confidence intervals for each word as the interval within which the logodds_R values corresponding to at least 95 of these artificial corpora fell. For 45 out of the 50 words included in our analyses, this interval did not cross 0 (the exceptions are maintain (a Democratic word) and types, employees, fundamental and gain (Republican words)). For these words, we say that the direction of polarity is significant at the 95% level. Our results are robust even when the words with non-significant polarity are excluded from analysis.
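The sketch below mirrors this resampling procedure under stated assumptions: speeches are represented as (party, tokens) pairs, the smoothing matches the log-odds definition given earlier, and the interval is a simple percentile interval. All names are illustrative, not the authors' code.

```python
import numpy as np

def logodds_R_from_speeches(word, speeches, vocab_size):
    """logodds_R with add-one smoothing; `speeches` is a list of
    (party, tokens) tuples with party in {"R", "D"}."""
    counts = {"R": 1, "D": 1}                    # phantom count of `word`
    totals = {"R": vocab_size, "D": vocab_size}  # phantoms for all words
    for party, tokens in speeches:
        counts[party] += tokens.count(word)
        totals[party] += len(tokens)
    return np.log((counts["R"] / totals["R"]) / (counts["D"] / totals["D"]))

def bootstrap_ci(word, speeches, vocab_size, n_boot=100, level=0.95, seed=0):
    """Resample whole speeches with replacement, recompute logodds_R on
    each artificial corpus, and return a percentile interval. The
    word's polarity counts as significant if the interval excludes 0."""
    rng = np.random.default_rng(seed)
    values = [
        logodds_R_from_speeches(
            word,
            [speeches[i] for i in rng.integers(0, len(speeches), len(speeches))],
            vocab_size,
        )
        for _ in range(n_boot)
    ]
    alpha = 100 * (1 - level) / 2
    return np.percentile(values, [alpha, 100 - alpha])

# Toy usage.
speeches = [("R", ["lower", "taxes"]), ("D", ["health", "care"])] * 50
print(bootstrap_ci("taxes", speeches, vocab_size=4))
```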
Materials and pre-registrations associated with this study can be found at https://osf.io/x9qsp/.

Results. The mean rating on the Republican items was 3.70 (SE = .12), just above the indifference point of the scale (recall that a higher rating corresponds to a judgment that the speaker is more likely to be a Republican). A one-sided t-test showed that this was significantly higher than the indifference point (t(2168) = 1.61; p = .05). The mean rating on the Democratic items was 3.49 (SE = .10), which was not significantly different from the indifference point (t(2173) = −.05; p = .48). A two-sided t-test indicated that the difference between these two means was marginally significant (t(4341) = 1.29; p = .10). Using exactly the same method as for Study 1, we calculated a standardized effect size of .16. Because the design of Study 3a exactly mirrors the design of Study 1, we can directly compare these effect sizes: In Study 3a, which controls for word sense, we find a substantially smaller effect size than in Study 1. This is suggestive but inconclusive evidence that participants are sensitive to the direction of politically conditioned variation. Fig 4 shows the correspondence between the logodds_R value of each item and the average corresponding judgment collapsed across participants. In general, participants judge words as more likely to have been said by a Republican when the words are more likely to be said by a Republican.

Item-level analyses. Participants classified 29 out of the 50 items correctly (16 Democratic words and 13 Republican words). (As in our analyses of the data from Study 1, we consider an accurate classification to be when the direction of a participant's judgment with respect to the indifference point aligns with the direction of politically conditioned variation of the stimulus. Calculations of item-level accuracy use the same criterion but consider the average of all participants' ratings of each item.) The average item-level accuracy was .53 (SE = .03; i.e. about 53% of participants classified each item correctly), which was not significantly different from chance performance (t(49) = 1.18; p = .12). Consistent with the asymmetry we noted above, participants were slightly more accurate when only Republican items were considered (μ = .55; SE = .04) than when only Democratic items were considered (μ = .52; SE = .04; since there are only 25 items in each of these restricted sets, these samples are too small to run inferential tests on). As for Study 1, we also ran a Mann-Whitney U test. The U statistic indicated that the overall rank order implied by participants' ratings was not distinct from the null (U = 256; p = .14).

If our effect was driven by attention to other characteristics, e.g. word sense, that happened to align with politically conditioned variation for a couple of critical items, our finding would not be strong evidence in favor of sensitivity to politically conditioned variation. In light of the strict criteria we applied to items for inclusion in Study 3a, the concern that our effect is driven by high performance on one or two items is particularly apt, especially given that our selection process saturated the set of candidate items, making it almost infeasible to address this potential concern in follow-up studies. To see if this was the case, we calculated the average rating on each of the 50 items and looked at how these averages differed within the pre-specified pairs of sense equivalents. The complete results of this analysis are included as S1 File. In summary, while the mean rating on the Democratic word was lower than the mean rating on the Republican word for 18 out of the 25 word pairs, this difference was significant at the p = .05 level for only eight of the 25 pairs (comprehensive/complete, assault/attack, criminal/illegal, changes/reforms, responsibility/duty, barriers/walls, outrageous/excessive and basic/fundamental). We found that the mean rating on the Democratic word was significantly higher than the mean rating on the Republican word for three of the 25 word pairs (contribute/give, values/principles and end/finish). The p-values associated with the remaining 14 word pairs fell between .05 and .95.

In the absence of an effect, we would expect the p-values associated with the word pairs to be uniformly distributed. If, however, our data were unlikely under the null hypothesis, we would expect these p-values to be skewed towards smaller values. S2 Fig shows the Q-Q plot of the p-values against the quantiles of the standard uniform distribution. Most of the plotted values fall below the 45° line, indicating that the density of the p-value distribution is concentrated around smaller values than the density of the standard uniform. A Kolmogorov-Smirnov test confirmed that the result that the cumulative density of p-values falls above the cumulative density of the uniform distribution is statistically significant (D(25) = .35, p < .01). Despite the fact that the difference between the means of only eight of the word pairs was statistically significant, the distribution of effects at the item level was unlikely to occur by chance. We interpret this as evidence against the possibility that the aggregate effect was driven by high performance on a couple of the word pairs.
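This check against the uniform distribution can be run as follows. The p-values here are simulated stand-ins for the 25 item-pair p-values (the real values are in S1 File).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-ins for the 25 within-pair p-values; drawn skewed toward 0, as
# the observed distribution was.
p_values = rng.beta(0.7, 1.0, size=25)

# Under the null, p-values are Uniform(0, 1). alternative="greater"
# asks whether the empirical CDF sits above the uniform CDF, i.e.
# whether small p-values are over-represented.
result = stats.kstest(p_values, "uniform", alternative="greater")
print(result.statistic, result.pvalue)
```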
Participant-level analyses. The mean participant-level accuracy was .53 (SE = .01), with 108 participants (62.09%) performing better than chance. In other words, participants classified on average 28 out of the 50 items correctly. This was significantly higher than chance performance (t(173) = 4.73; p < .01). The mean discriminability score across participants, defined exactly as for our analyses of Study 1, is .16 (SE = .03), which is significantly different from 0 (t(173) = 4.95; p < .01).

Discussion. Study 3a provides suggestive but inconclusive evidence that participants are sensitive to the direction of politically conditioned variation. Of note was the asymmetry we found between participants' sensitivity to variation in the direction of Republicans and variation in the direction of Democrats. In Study 2, we found that participants asked to select the word more likely to have been spoken by a Democrat were significantly more accurate than participants asked to select the word more likely to have been spoken by a Republican. In contrast, Study 3a finds that participants' judgments are better aligned with the direction of politically conditioned variation for Republican words. This suggests that if an unobserved response bias was responsible for the asymmetry in alignment in Study 2, the design of Study 3a eliminated this bias.

But why did we observe a seemingly incongruent asymmetry in the alignment of judgments between the Republican and Democratic items in Study 3a? Our sample was again skewed towards Democrats, suggesting that selective exposure to Republican speech was not responsible for this difference. We again considered the possibility that there were systematic differences in the degree of conditioned variation characterizing the Republican and Democratic items. The analyses in S4 Appendix and shown in Fig 4 show that judgments are positively correlated with the magnitude of the logodds_R values of the items. However, the Republican items had an average logodds_R value that was smaller in magnitude than the average logodds_R value of the Democratic items (.38 vs. -.42). Therefore, sensitivity to the magnitude of the logodds_R values could not explain the observed asymmetry. Another speculative possibility is that participants in general had had more exposure to Republican speech than Democratic speech. At the time of data collection in 2018, the Republican party had had majority representation in the House of Representatives, majority representation in the Senate, and had held the office of the presidency for the previous two years [69]. More exposure may have enabled participants to more reliably encode conditioned variation indicative of Republican affiliation. In addition, dramatic shifts in rhetoric during the Trump administration [70] could have made instances of Republican speech more salient and easier for participants to retrieve from memory (see General discussion).

Overall, the statistical significance of the effect of the direction of conditioned variation was not robust to a variety of analysis strategies. This, combined with the small standardized effect size, suggested that the directional effect we found was fragile, a chance directional finding that would not replicate, or that Study 3a was not designed powerfully enough. Because of our stringent stimulus selection criteria, it is possible we were running up against a perceptual floor, and that participants were having a relatively harder time detecting weaker partisan signals. To minimize potential floor effects, in Study 3b we asked participants to make judgments about groups of words. We reasoned that exposing participants to several words with aligned patterns of politically conditioned variation simultaneously would result in a perceptual enhancement of the aggregate signal.

Study 3b

Participants.
203 participants completed Study 3b on MTurk. After excluding participants who failed the same instructional manipulation check included in Study 3a, our analyzed sample contains 170 participants, including 78 self-identified Democrats and 37 self-identified Republicans. Participants in the analyzed sample had a mean self-reported age of 35.81 (SE = .83). 102 men, 66 women, and two participants who identified as neither male nor female completed the study. 78.82% of participants reported having voted in the 2016 presidential election. Participants completed the demographics questionnaire before the main survey.

Methods. In Study 3b, we presented the 50 words used in Study 3a in groups of five words each. Our rationale for this design was that the signals from the words would combine in such a way that we would overcome perceptual floor effects, increasing the power of our design to detect sensitivity to politically conditioned variation. Each word group contained exclusively Republican words or exclusively Democratic words. Each participant was presented with five groups of five words each, randomly generated in such a way that a participant was never presented with both an item and its sense equivalent. Participants were given the same instructions as in Studies 1 and 3a, with the exception that they were asked to imagine that they had overheard all the words in the list. The same version of the instructional manipulation check was used to exclude participants for not paying attention. These exclusions do not affect our main results. Participants made a judgment about one word group per page, and questions auto-advanced after the participant selected their response.

Results. As in Studies 1 and 3a, higher ratings correspond to a higher perceived likelihood that the speaker is a Republican. The mean rating on the lists of Republican words was 3.96 (SE = .07), while the mean rating on the lists of Democratic words was 3.54 (SE = .06). (Standard errors are clustered at the participant level but not at the item level, since the study was designed such that exactly the same list of words was very unlikely to appear for more than one participant.) As in our analysis of the data from Study 3a, we find that the mean judgment on the Republican items was significantly higher than the indifference point of 3.5 (t(417) = 6.95; p < .01), but that the mean judgment on the Democratic items was not significantly different from the indifference point (t(431) = .67; p = .75). A one-sided two-sample t-test showed that the mean judgment on the Republican items was significantly higher than the mean judgment on the Democratic items (t(848) = 4.51; p < .01). Participants judged clusters of words as more likely to be spoken by a Republican when those words were more likely to be spoken by a Republican.

We used the same method as for Studies 1 and 3a to calculate an effect size, with the exception that we did not include item-level random effects. We calculated a standardized effect size of .30, which reflects that participants tended to rate lists of Republican words as more likely to have been said by a Republican than the individual words we presented in Study 3a.

Participant-level analyses. The mean participant-level accuracy, defined as for Studies 1 and 3a, was 57.18% (SE = 1.58%), indicating that, on average, participants correctly classified about 3 in 5 word lists. 110 out of 170 participants performed better than chance.
Consistent with the asymmetry in performance on the Republican and Democratic items noted above, performance differed dramatically across the two sets of items: the average participant-level accuracy on the Republican items was 64.71% (SE = 2.19%), compared to only 49.80% (SE = 2.17%) on the Democratic items. We do not calculate discriminability scores as for Studies 1 and 3a. Since each participant saw only five word lists in total, we do not believe such a small sample provides reasonable estimates of the Cohen's d between their judgments of Republican and Democratic lists.

Discussion. Studies 3a and 3b demonstrate that even when word sense is controlled for, people are more likely to associate Republican language with Republicans. However, we found in both studies that participants did about as well as chance in associating Democratic language with Democrats. In our discussion of Study 3a, we speculated that this may reflect a skew in the amount of Republican vs. Democratic public-facing speech in participants' recent memories. While Study 3 was designed to rule out the possibility that our results were due to inferences based on the senses of the words, our study design did not control for other attributes of words that have an impact on judgments of political affiliation. Notably, Sloman et al. (under review) show that the valence of language has a dramatic impact on such judgments [66]. The regression analyses in S4 Appendix show that when valence is controlled for, participants' judgments track the direction of logodds_R in both studies: participants express more certainty that a word indicates the political affiliation of a speaker when that word exhibits stronger politically conditioned variation. Combined with the item-level analysis shown in S1 File, the findings of Study 3 are consistent with a sensitivity to politically conditioned variation that is more pronounced in the set of Republican words we identified (see again Fig 4).

Taken together with the results of Studies 1 and 2, our evidence suggests that listeners are sensitive to politically conditioned variation. Importantly, both the magnitude of our effects and the systematic asymmetries in participants' ability to recover variation were context-dependent. While our work finds that politically conditioned variation is one cue that enters participants' judgments, it has exposed rather than filled gaps in our knowledge of the large range of other cues on which people rely when using language to make inferences about a speaker's identity. We hope that future work can shed more light on the differences between the results from each of our studies.

General discussion

Our results show that participants tend to classify hypothetical speakers as Democrats or Republicans in a way that reflects politically conditioned linguistic variation. This is consistent with our hypothesis that people can access and use politically conditioned variation in language when making judgments about a person's political identity. As we discussed above, conditioned variation in speech patterns emerges across many different demographic dimensions. To the best of our knowledge, it has remained until now an open question whether people are able to recover patterns in variation along the dimension of political identity. Combined with foundational results in categorical perception and perceptual learning [71,72], our findings suggest that speakers' political identity is at least a somewhat salient perceptual category for listeners.
This has implications for a deeper understanding of the dimensions that contribute to people's representations of others, and of how these higher-level representations interact with low-level perceptual and learning processes. S5 Appendix presents a brief exploration of some potential contributors to this feedback cycle. While those results are inconclusive, we believe the questions raised remain a promising avenue for future research.

One of our contributions is to the development of methods to explicitly disentangle conditioned variation from other forms of information that learners absorb from language. Establishing external validity is a particular challenge for researchers interested in behavioral responses to the statistics of language. Socially conditioned variation is a difficult construct to operationalize in an ecologically valid way, given that its "ecology" is as varied as the distinct languages, contexts, and speech patterns in which it occurs. Often, work on statistical language learning uses methods such as generating artificial grammars [2,13,73-76], simulation [77], or in-depth analyses of the speech patterns of a particular community or demographic group [4,5,15], all of which compromise some degree of external validity to achieve internal validity. As discussed above, and especially in our discussion of Study 3, our work by no means eliminates this challenge. However, we consider our approach an incremental step forward for a field interested in exploring ways of creating more ecologically valid experimental stimuli. We, along with Preoţiuc-Pietro et al. (2016) [6], exploit the availability of powerful techniques to mine large-scale data sets and recover the statistics of natural language. Performing our analysis at the word level allowed us both to present subjects with meaningful ecological units and to identify and control for plausible mediators.

While we believe our methods are a contribution to identifying socially conditioned variation outside the lab, our work nevertheless faces limitations to ecological validity. Participants in our studies made judgments about isolated, decontextualized words. However, speakers almost always produce sentences or paragraphs at a time. We expect politically conditioned variation also operates at the level of phrases [30] and sentences (e.g. "build the wall;" see discussion below and [78]). While extending our approach to phrases and sentences is an obvious avenue for future work, we chose not to do so because of the additional difficulties it would introduce in controlling for sense divergence. To the extent that phrases imply a larger network of semantically related conceptual structures than isolated words, it would be an exponentially more difficult task to identify pairs of phrases with opposing directions of politically conditioned variation which were also matched on every element of their respectively encoded semantic networks.

A related limitation to the ecological validity of our results is our reliance on a controlled experimental setting. While this was necessary to isolate and manipulate measures of socially conditioned variation and intuitive judgments, it differs in important ways from the contexts in which our participants encounter political speech "in the wild." It is possible that our effects would be diluted or even eliminated in more ecologically valid settings, where decision-makers have a rich set of contextual and semantic information to draw from.
However, socially conditioned variation and word sense often align and may interact in unexpected ways. For example, word sense could reinforce and contribute to the formation of more contiguous and salient episodic linguistic representations, rendering the conditional statistical distributions more accessible and usable. Consider a word pair on which participants in Study 3a did especially well: barriers (a Democratic word) vs. walls (a Republican word). One of Donald Trump's policy proposals was to build a barricade on the U.S.-Mexico border [78]. While the structures that exist, and which the Trump administration planned to build, along the U.S.-Mexico border are at least as accurately described as barriers as they are as walls [79], his campaign came to be associated with the phrase "build the wall" [78]. While the concepts evoked by the proposal were very different from the concepts evoked by the Democrats' proposed immigration reforms, participants relying only on the senses of barrier and wall would not have been able to distinguish one or the other as more likely to be referring to Trump's proposal. Rather, we speculate that the starkness of the conceptual divergence between the two parties' proposed policies made the phrase "build the wall" more salient to participants, making it easier to encode the conditional distributions associated with the word wall.

Of course, we cannot completely rule out the possibility that our stimuli in Study 3 were not perfect sense equivalents. For example, one of our word pairs consists of wealth (a Democratic word) and prosperity (a Republican word). The Merriam-Webster dictionary defines wealth as "abundance of valuable material possessions or resources" (first of four definitions [80]) and prosperity as "the condition of being successful or thriving" [81]. While the words have extremely similar meanings, one might think of some forms of prosperity, such as social or intellectual fulfillment, as less similar to one's concept of wealth. It is possible that a difference in responses on this item could have been driven not by attention to politically conditioned variation, but by an association between participants' concepts of a Republican and non-monetary forms of prosperity (while participants on average correctly guessed that prosperity is more likely to be said by a Republican, this difference was not significant at the p = .05 level). As discussed in our explanation of the methods for Study 3a, the stimuli used in Study 3 were the only word pairs that met our criteria for sense equivalence out of an initial pool of 1,421 words. Using the method described in S3 Appendix, we also ensured each word was exchangeable in context for its sense equivalent. We believe this maximized the semantic alignment within each word pair. While we cannot completely rule out the possibility that the words comprising each pair yielded different conceptual information, our data show that participants' judgments align with politically conditioned variation even when we ensured there were minimal differences in the senses of corresponding words.

Finally, and as we discuss above, the corpus on which we based our operationalization of politically conditioned variation, the Congressional Record, may not be representative of the distribution of political speech to which our participants had been exposed.
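For concreteness, a generic smoothed log-odds measure of politically conditioned variation can be computed from party-labeled word counts as in the following sketch. The counts are fabricated and the add-alpha smoothing is an assumption; the exact logodds_R estimator used in these studies may differ.

```python
# A minimal sketch of a smoothed log-odds measure: positive values mark words
# used relatively more by Republicans, negative values words used relatively
# more by Democrats. Counts below are fabricated for illustration.
from collections import Counter
import math

def logodds_r(word, rep_counts, dem_counts, alpha=0.5):
    n_rep = sum(rep_counts.values())
    n_dem = sum(dem_counts.values())
    cr = rep_counts[word] + alpha
    cd = dem_counts[word] + alpha
    return math.log(cr / (n_rep - cr)) - math.log(cd / (n_dem - cd))

rep_counts = Counter({"walls": 120, "barriers": 40})
dem_counts = Counter({"walls": 50, "barriers": 130})

for w in ("walls", "barriers"):
    print(w, round(logodds_r(w, rep_counts, dem_counts), 2))
# walls comes out positive (Republican-leaning), barriers negative.
```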
As we mention above, we chose to use the Congressional Record as the basis of our measurements of politically conditioned variation rather than, e.g., transcripts of the presidential debates, because it is considerably larger, and thus has the potential to provide more precise estimates for more words, and because the amounts of Democratic and Republican speech in it are roughly balanced, making it less likely that bias or imprecision in our estimates would differ systematically by party. However, future work could attempt to replicate or extend our findings using corpora that contain more familiar, public-facing language.

Mechanisms of politically conditioned variation

Above we showed that there is measurable between-party statistical variation in the use of language, variation that can be used to predict the party affiliations of speakers in other contexts. But what drives this systematic divergence? Are Democratic politicians intentionally trying to promote "discussion," while Republicans explicitly encourage "conversation"? In politics, differentiating word use is often strategic. For example, beginning in the 1990s Frank Luntz began to put forth suggestions for specific terminologies Republicans could adopt to influence how voters thought about different issues (e.g. by referring to "tax simplification" instead of "tax reform;" [82,83]). While tactical linguistic shifts such as those suggested by Luntz are intended to influence voters' higher-order representations of party-line issues, linguistic differences could result from strategic word choice even if the speaker does not invoke a specific semantic framing. Smaldino, Flamson, and McElreath (2018) discuss the dynamics of covert signaling, the intentional broadcasting of signals that convey information to in-group members but are ambiguous enough to avoid offending or stirring conflict with out-group members [84]. Engaging in such "dog-whistle politics" allows politicians to make allusions to controversial stances that escape the awareness of members of their ideological out-group [85-87]. Diermeier et al. (2012) speculate that such coded appeals could partially explain variation in how the two parties use sense equivalents [29]. In order to convey a strong ideological stance to their base without alienating more moderate voters, they suggest that politicians may rely on the signaling power of the speaker's selection among these sense equivalents:

For example, among the separating adjectives for Democrats we find the word gay, and for the Republicans we find the word homosexual. In other words, the correct use of terms signals one's political 'type' to constituencies that care a great deal about these issues. [29]

Alternatively, differences in speech patterns could reflect other latent differences in the demographic attributes, personality traits, or cognitive styles of the members of the two parties. Extensions of our research could involve exploring differences in the degree of analytic thinking conveyed by the language of the two parties (e.g. [88]) or differences in the language used in support of vs. opposition to proposed policies (e.g. [89]). While acts of speech production are the direct generating causes of the natural language data we analyzed, politicians' choices of words are also reflective of the language listening and learning processes they themselves have engaged in. Importantly, conditioned variation can also emerge organically from implicit language learning mechanisms.
The iterative compounding of individual-level biases during language learning and transmission can result in systematic linguistic divergence [7,73-76,90]. We speculate that the use of glottal stops by younger Scots [8] and of the term "yinz" by Pittsburghers [11] do not reflect deliberate, conscious attempts to signal identity, but rather unconscious patterns driven by, e.g., selective exposure on the part of language learners. Implicit learning mechanisms could similarly explain linguistic divergences among Democrats and Republicans. The true drivers of politically conditioned variation are almost certainly a complex and dynamic combination of strategic word choice, latent conditioning variables, and implicit learning mechanisms, in addition to other factors we have not mentioned. In light of researchers' unprecedented access to natural language data and computing power, we are optimistic that future work can help construct a more complete picture of the conditioning process. Crucially, regardless of the mechanisms driving socially conditioned variation, the statistical information that emerges is a valid cue of the speaker's group membership. Our work contributes to a growing body of literature showing that people's judgments do, in fact, reflect this information [2,6,15,16].

Analog vs. discrete signals

Although many of our analyses made use of the magnitude of the logodds_R values, our interpretations of our results usually considered only behavioral correspondence with the direction of the conditional statistics: whether or not the word was categorized as Democratic or Republican. Prior work suggests that people's ability to categorize language-related stimuli on the basis of differences in perceptual characteristics, including differences in relative frequencies, reflects some amount of sensitivity to the magnitude of these differences [15,35,71]. Indeed, some of our analyses pointed to such a correspondence (see, e.g., Figs 2 and 4 and S4 Appendix). Future work could further investigate the degree of precision with which we encode and respond to identity-related linguistic input.

Consequences of sensitivity to politically conditioned variation

To the extent that a person's political affiliation is an important predictor of their beliefs, behavior, and interests, sensitivity to politically conditioned variation could allow listeners to infer characteristics about a speaker [23,24]. Knowing the cues we use to infer partisan identity could inform an understanding of the bases on which listeners form implicit judgments of speakers. Given that even valid cues are probabilistic, listeners' judgments are likely often inaccurate. More awareness of the cues we rely on may help us recognize when our inferences are based on indirect cues like linguistic variation, rather than on confirmed and explicitly relevant information. Language is perhaps the most important vehicle by which we convey information to others. Much of modern-day political discourse takes place over social media, online articles, and email, where writers lack communicative mechanisms such as body language and tone of voice. In this context, we speculate that word choice is an especially important source of information. Linguistic variation is an important tool in the conveyance of partisan messaging, and understanding sensitivity to it can help us better understand receptiveness to such messaging.
For example, the success of tactics like covert signaling and dog-whistle politics relies on listeners and readers being able to pick up on strategic linguistic variation. Cues to partisanship likely serve as more than just indicators of ideological alignment. While we may feel more ideologically similar to our political in-group members, we may also attribute other positive characteristics to them, such as trustworthiness: word choice could lend source credibility to a speaker, perhaps leading us to believe or discredit information that is difficult for us to independently verify.

Conclusion

We used natural language processing techniques to identify politically conditioned variation in public-facing U.S. political speech. We then demonstrated that human judgments align to some extent with this variation, even when cues such as word sense are controlled for. We contribute to a body of work examining the conditions under which people are sensitive to socially conditioned variation [2,6,15,16]. Importantly, in some of our study designs the degree of alignment was small, highlighting that more work needs to be done to more fully triangulate these conditions. Overall, our work shows that political party affiliation is a meaningful dimension along which people have the ability to track not only variation between overt value systems and policy preferences, but also subtle variation between distributions of word usage.

Supporting information

S1 Fig. Correlation between logodds_R values calculated using the Congressional Record and logodds_R values calculated using the presidential debates, as a function of the degree of politically conditioned variation. The x-axis indicates deciles of the distribution of logodds_R values (calculated using the Congressional Record) of the 2,408 words that appear in both the Congressional Record and presidential debates corpora. Words in the leftmost bins are highly Democratic (have a very low corresponding logodds_R), while words in the rightmost bins are highly Republican (have a very high corresponding logodds_R). On the y-axis are the Pearson's correlation coefficients between the logodds_R values calculated using the Congressional Record and the presidential debates for the words that fall in each bin. For example, when the sample is restricted to the 241 words whose corresponding logodds_R was above the 10% and below the 20% cut points, the correlation is .18. The U-shape indicates that this correlation is highest for words with higher absolute values of logodds_R. The highest correlation (.40) occurs for words in the 10% decile (the most Democratic words). The lowest correlation (.02) occurs for words in the 60% decile. (PNG)

S2 Fig. Q-Q plot of theoretical quantiles of a standard uniform distribution against p-values in S1 File. The closer the points fall to the red diagonal line, the more the distribution of p-values resembles what would be expected under the null hypothesis. The observed pattern shows that most p-values are smaller than would be expected under the null hypothesis, although the highest p-values, which correspond to the items that show a significant effect in the opposite direction of our hypothesis (see caption for S1 File), are higher than would be expected under the null. (PNG)

S1 File. Item-level results from Study 3a. Table displaying word-level results from Study 3a and item-level p-values.
For eight word pairs, the Republican word was rated as more likely to have been said by a Republican at the p < .05 level (the pattern predicted by our hypothesis): comprehensive/complete, assault/attack, criminal/illegal, changes/reform, responsibility/duty, barriers/walls, outrageous/excessive, and basic/fundamental. For three word pairs, the Republican word was rated as less likely to have been said by a Republican at the p < .05 level (the opposite of the pattern predicted by our hypothesis): contribute/give, values/principles, and end/finish. Out of the 14 items that did not show a significant difference in either direction, ten exhibited a difference directionally consistent with our hypothesis. (XLSX)

S1 Appendix.
A Note on Regularized Shannon's Sampling Formulae

Error estimation is given for a regularized Shannon's sampling formula, which was found to be accurate and robust for numerically solving partial differential equations.

I. INTRODUCTION

In previous work [1], one of the present authors proposed a discrete singular convolution (DSC) algorithm for the computer realization of singular convolutions involving singular kernels of delta type, Abel type, and Hilbert type. One illustration of the algorithm was Shannon's sampling formula [2], which plays an important role in the approximation of the delta distribution and of generalized derivatives [3]. However, in practical computations, the truncation error of Shannon's sampling formula is substantial [4]. A regularization technique [5] was used to construct a regularized Shannon's sampling formula [1,6], which was found to be extremely accurate and robust for resolving various challenging dynamical problems, such as the homoclinic orbit excitation of the Sine-Gordon equation [7], Navier-Stokes flow in complex geometries [8], shock capturing for the inviscid Burgers' equation [9], molecular quantum systems described by the Schrödinger equation [10], and nonlinear pattern formation of the Cahn-Hilliard equation [11]. The objective of the present note is to provide a theoretical analysis of these excellent numerical results. Rigorous error estimates of the regularized Shannon's sampling formula are given for its applications to interpolation and to derivatives of a function.

II. MAIN RESULT

Theorem. Let f be a function f ∈ L²(R) ∩ C^s(R), bandlimited to B (B < π/Δ, where Δ is the grid spacing). For a fixed t ∈ R and σ > 0, denote the approximation errors accordingly, where the superscript (s) denotes the s-th order derivative.

A. Separation of the error

The error breaks naturally into a few components, denoted E₁(t), E₂(t), and E₃(t). Here, E₁(t) is the regularization error, while E₂(t) and E₃(t) are truncation errors. From Shannon's sampling theorem [2], which can be differentiated term by term, the total error can be written as a sum of the three components, and the corresponding error norms satisfy the triangle inequality.

B. Estimation of E₁(t)

Let f̂(ω) be the Fourier transform of f(x), f̂(ω) = ∫_R f(x) exp(ixω) dx. Since f is bandlimited, E₁(t) can be written in the Fourier domain. From Eq. (13), since the function f satisfies the band limit, f̂(ω) has a Fourier series expansion whose coefficients are given by the samples; equivalently, f̂(ω) can be written in this series form. Denoting the remainder ε(ω) and combining Eqs. (5), (14), (16) and (18), one obtains a bound in terms of ε. For ω ∈ [−B, B], ε(ω) can be evaluated directly. Moreover, for x ≥ 0, the inequality of [12] is valid, from which the estimate for ε(ω) is obtained. It then follows from Eqs. (20) and (23), together with the Parseval identity, that E₁(t) is controlled; differentiations can be written in the same way.

C. Estimation of E₂(t)

Let l = n − ⌈t/Δ⌉, where ⌈x⌉ is the integral part of x and ⌊x⌋ = x − ⌈x⌉. Two simple lemmas are required.

Lemma 1 (Abel's inequality) [13]. For two sequences {a_n}, {b_n} with b₁ ≥ b₂ ≥ ... ≥ b_n and a_n, b_n ∈ R, set S_m = a₁ + ... + a_m; then the sum of the products a_k b_k is bounded in terms of b₁ and the extremes of the partial sums S_m.

Lemma 2. With all notations unchanged, set g accordingly. The proof is obvious by taking the first-order derivative.

Combining the resulting estimates with (28) and (37), and using Lemma 1 and Lemma 2, one obtains the bound for E₂(t).

D. Estimation of E₃(t)

A result like Lemma 2 is required.

Lemma 3. Notations are the same as before. Denote F_k(l) accordingly; whenever x ≤ t − M₂Δ, the sequence {F_k(l)}, l ∈ N, is increasing. The proof is also direct.
Therefore, by the same treatment as in the previous subsection, we obtain the estimate for E₃(t). This is a rigorous error statement for the formula widely used in the aforementioned numerical computations.

Remark 1. Roughly speaking, if exp(−x²/2) = 10^(−η), then η = x²/(2 ln 10), so the error is of the order 2πr(π − BΔ) 10^(−r²(π − BΔ)²/(2 ln 10)), where r = σ/Δ. One may choose r, B, Δ and M appropriately to attain a desired accuracy. Assuming all non-exponential quantities combine to give unity, and M = M₁ − 1 = M₂, one has

r(π − BΔ) > √(2η ln 10)   (44)

and

M/r > √(2η ln 10),   (45)

where η is the desired order of accuracy. There are some general rules for attaining high accuracy. These are discussed from two different points of view.

1.) For a given function f(x) with a known bandlimit B, the other parameters Δ, r and M are to be chosen appropriately to achieve a desired accuracy order η: (i) From Eqs. (44) and (45) one has BΔ ≤ π − √(2η ln 10)/r. For fixed r, the higher the frequency bandlimit B is, the smaller Δ should be, which means more grid points in the computational domain. When Δ varies from 0 to π/B, r changes from √(2η ln 10)/π to +∞; therefore, for sufficiently small Δ, r is near √(2η ln 10)/π.

2.) In practical computations, such as in solving a partial differential equation, the function f and its bandlimit B are unknown. In this case, Δ is selected a priori. Then r and M are to be chosen properly to achieve a desired accuracy order η: (i) For a given grid spacing Δ and accuracy requirement η, the value of r determines the frequency bandlimit B which can be reached. Then the set of functions f which are almost bandlimited to B can be accurately approximated (where "almost bandlimited to B" means that the function f is not necessarily bandlimited, but its Fourier amplitude outside |ω| ≤ B is much smaller than the given error 10^(−η)). The choice of M should be consistent with r for a given accuracy requirement. In general, small r and M values lead to an accurate approximation of the low-frequency components of a function of interest, but the prediction of high-frequency components will not be accurate in such a case. (ii) For a given grid spacing Δ and r value, the larger M is, the higher the bandlimit B that can be reached. (iii) To improve computational efficiency with a given Δ, B shall be very close to π/Δ. However, to maintain a certain approximation accuracy, r then has to be sufficiently large, which implies that M has to be very large too. This in turn results in low efficiency (it takes M → ∞ to maintain the accuracy if one samples at the Nyquist rate).

Remark 2. A comparison between the truncation errors of Shannon's sampling formula and the regularized Shannon's sampling formula is in order. Reference [4] estimates the error of the truncated expression for t < NΔ, where E is the total "energy" of the function. This is not directly comparable with our error estimate, because our sampling is centered around the point of interest, x. Let us consider a truncation error of the form (E_M f)(t). In Appendix A, it is shown that in a finite computational domain the L² norm of (E_M f)(t) has the order of MΔ, which is much larger than the truncation error of the regularized Shannon's formula. On the other hand, to achieve the same accuracy, the regularized formula requires far fewer computational grid points [1,6].

Remark 3. Discussions of the higher order derivatives can be presented in a manner similar to that of Remarks 1 and 2. In fact, previous work on solving partial differential equations [1,6-11] involved such derivatives, and the results are consistent with the present theorem. A detailed comparison is omitted.
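As a concrete illustration, the sketch below implements the Gaussian-regularized, truncated sampling formula that these estimates concern, in the form f(t) ≈ Σ f(nΔ) sinc((t − nΔ)/Δ) exp(−(t − nΔ)²/(2σ²)), with the sum truncated to the 2M + 1 samples nearest t. The test function and parameter values are illustrative only, and the exact normalization should be checked against [1,6].

```python
# A minimal sketch of the regularized, truncated Shannon interpolation.
import numpy as np

def reg_shannon(f, t, delta, r, M):
    """Interpolate f at t from samples f(n*delta), keeping 2M + 1 terms.
    r = sigma / delta controls the width of the Gaussian regularizer."""
    sigma = r * delta
    n0 = int(np.floor(t / delta))
    n = np.arange(n0 - M, n0 + M + 1)
    x = t - n * delta
    # np.sinc(y) = sin(pi*y)/(pi*y), so sinc(x/delta) = sin(pi*x/delta)/(pi*x/delta)
    weights = np.sinc(x / delta) * np.exp(-x**2 / (2.0 * sigma**2))
    return np.sum(f(n * delta) * weights)

B = 2.0                                   # bandlimit of the test function
f = lambda x: np.sinc(B * x / np.pi)      # sin(B*x)/(B*x), bandlimited to B

delta = 0.5                               # grid spacing; B < pi/delta holds
approx = reg_shannon(f, t=1.3, delta=delta, r=3.0, M=20)
print(abs(approx - f(1.3)))               # error is tiny for these parameters
```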
Remark 4. In many practical applications, such as solving partial differential equations, error estimates and discussions in other spaces are often required. Moreover, in real computations the computational domain is always limited to a finite interval, such as [a, b]. Therefore, the norm ‖f‖_{L²(R)} in Eqs. (39) and (41) must be changed to ‖f‖_{L²(a,b)}, which can be evaluated by integrations along [a + M₁Δ, b + M₁Δ] and [a − M₂Δ, b − M₂Δ], respectively. Various L^p (1 ≤ p ≤ 2) error estimates of E(t) can then be derived accordingly. If we know the size of the L^p norm (1 ≤ p ≤ 2) of the function of interest, then we can deduce from the theorem the critical values of r and M needed to achieve a desired accuracy.
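Following that recipe, a small helper can turn a target accuracy η into critical values of r and M, using conditions (44) and (45) as reconstructed above; the reading of those inequalities and the numbers below are assumptions for illustration.

```python
# A minimal sketch: minimal r = sigma/delta and truncation half-width M for a
# target accuracy of 10^(-eta), from r*(pi - B*delta) > sqrt(2*eta*ln(10))
# and M/r > sqrt(2*eta*ln(10)).
import math

def choose_params(B, delta, eta):
    if B * delta >= math.pi:
        raise ValueError("need B < pi/delta")
    c = math.sqrt(2.0 * eta * math.log(10.0))
    r = c / (math.pi - B * delta)   # minimal r from the regularization error
    M = math.ceil(c * r)            # minimal M from the truncation error
    return r, M

print(choose_params(B=2.0, delta=0.5, eta=10))  # roughly r ~ 3.2, M ~ 22
```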
One mystery of the North Atlantic multidecadal variability. An attempt of simple explanation

The formation of the North Atlantic multidecadal variability (MDV) remains quite enigmatic. Some studies connect the long-term North Atlantic oceanic variability to a transformation of stochastic atmospheric forcing. On the other hand, intense heat fluxes directed from the ocean to the atmosphere precede the large-scale positive sea surface temperature (SST) anomalies in the region (and vice versa). The latter phenomenon casts some doubt on the stochastic theory and has led to the suggestion that surface heat fluxes play only a passive role, as a response to ocean dynamical processes. Analyzing a toy box model and CMIP5 control experiments, we demonstrate that the observed phase shifts between SST and surface heat fluxes do not contradict a stochastic theory. The North Atlantic long-term variability can be induced via a transformation of random atmospheric forcing. However, the role of ocean circulation processes remains crucial for MDV formation. Specifically, stochastic excitation of the meridional overturning circulation reproduces the observed and model-generated MDV, whereas a direct atmospheric impact on SST cannot correctly induce the phase shift between the input and output signals.

Introduction

Multidecadal variability of the North Atlantic (NA) climate system is one of the most intriguing problems of climate theory. MDV impacts the thermal and precipitation conditions in vast regions of the northern hemisphere [1-4] and complicates quantifying the rate of modern anthropogenic warming. The mechanisms of MDV formation are still under debate. The atmospheric aerosol content, primarily of volcanic origin, was considered a long-term external forcing provoking the MDV [5,6]. However, Zhang et al. (2013) [7] later demonstrated that the role of aerosols in the SST evolution had been overestimated. Nonlinear dynamics of the ocean-atmosphere system can be responsible for the formation of low-frequency variability over a wide range of time and spatial scales [8]. One of the possible MDV mechanisms suggests transitions of the thermohaline circulation between different quasi-stable states [9,10]. Another widely accepted hypothesis connects the development of low-frequency variability in the NA climate system to stochastic atmospheric forcing [11-14]. The most important source of this forcing is the North Atlantic Oscillation (NAO) [14-16]. The NAO intensity can be directly responsible for the formation of deep oceanic waters, especially in the Labrador Sea [17].

The low-frequency variability of the North Atlantic climate system has recently become the subject of lively discussion. Clement et al. (2015) [18] attribute the root cause of such variability, manifested primarily in the AMO (Atlantic Multidecadal Oscillation), to random atmospheric forcing. Supporters of the alternative concept [19] underline the crucial importance of internal ocean dynamics. Clement et al. (2015) found that the space-time variability reflecting the important features of the AMO is reproduced both in Coupled General Circulation Models (CGCMs) and in Slab Ocean Models (SOMs). Objections [19] were built mainly on cross-correlation analysis of the AMO index and the surface heat fluxes. The statistics of the CGCM and SOM experiments differ radically. The SOM experiments show that the positive phase of the AMO is preceded by a heat flux directed from the atmosphere to the ocean.
Long-term CGCM runs exhibit practically the opposite picture: in the circumpolar North Atlantic, positive SST anomalies are preceded by an energy flux from the ocean toward the atmosphere, and the largest correlation is achieved when the surface heat fluxes lead the SST anomalies. Speculating on this phenomenon, Zhang et al. (2016) [19] suggested that heat losses from the ocean surface are almost completely compensated by advective oceanic energy transport. Thus, the development of the AMO is mainly determined by the large-scale Atlantic circulation processes, primarily by the intensity of the AMOC (Atlantic Meridional Overturning Circulation) and the SPG (Subpolar Gyre). As a result, Zhang et al. (2016) [19] suggested a secondary role for heat fluxes at the air-sea interface in the formation of low-frequency SST variability. They stressed that the main AMO driver is oceanic dynamics and that the energy transfer fluctuations at the surface remain just a passive response to it.

The conclusion of Zhang et al. (2016) [19] seems consistent with the results of Gulev et al. (2013) [20], who investigated the SST and H + LE anomalies (H, sensible and LE, latent heat fluxes at the surface, positive from the ocean) obtained from observations in the North Atlantic region. The two characteristics are almost synchronized in the low-frequency band and negatively correlated in the high-frequency region of the spectrum.

Some doubts arise, however. Strong long-term variations are clearly expressed in the AMO index. If the atmosphere reacts just passively to the long-term oscillations of the oceanic circulation, why is the multidecadal atmospheric variability over the region relatively weak? The temporal spectrum of the NAO index is close to white noise [21-23] and is not similar to the AMO spectrum, which is characterized by a pronounced predominance of low-frequency variability. Moreover, correlation, regression, and other statistical analyses can be used to test a particular physical hypothesis but do not provide a sufficient basis for its formulation. The presence or absence of correlation between shifted time series cannot by itself serve as a basis for constructing physically grounded cause-effect relationships.

In this study, we show that the "paradoxical" phase shift between the external atmospheric forcing and the ocean response is reproducible within the framework of a linear damped oscillator driven by a stochastic input signal. In the second section we briefly describe the results of the CMIP5 experiment analyses. Section three is devoted to the description and analysis of the conceptual linear model of an oceanic oscillator. The last section contains the discussion and conclusions.

CMIP5 data analysis

We have collected area-averaged results of the long-term control experiments of the Coupled Model Intercomparison Project 5 (CMIP5) [24]. The analyzed datasets include SST, surface downwelling (DLF) and upwelling (ULF) longwave radiation fluxes, downwelling (DSF) and upwelling (USF) solar radiation fluxes, and the surface sensible and latent heat fluxes. All characteristics were averaged over the North Atlantic region bounded by 20°W-70°W and 35°N-70°N. In total, data for 30 CMIP5 models were analyzed. The region is characterized by a relatively homogeneous spatial response to NAO variations. For the purposes of this research, we used annually averaged data.
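Before turning to the cross-spectral analysis, the sketch below shows one way the net surface flux used there might be assembled from these components. The sign conventions (positive when the ocean gains energy, with sensible and latent fluxes reported positive upward) are assumptions to be checked against the actual CMIP5 variable definitions, and the series are placeholders.

```python
# A minimal sketch: net surface heat flux, positive into the ocean, from
# shortwave, longwave, sensible (H) and latent (LE) components.
import numpy as np

rng = np.random.default_rng(2)
DSF, USF, DLF, ULF, H, LE = rng.normal(size=(6, 200))  # placeholder series

# Assumed conventions: radiative fluxes positive downward; H and LE positive
# upward (ocean to atmosphere), hence the minus signs.
F_net = (DSF - USF) + (DLF - ULF) - H - LE
```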
The primary goal of the CMIP5 control experiment investigation was the cross-spectral analysis of the spatially averaged SST and the net atmospheric forcing F_net, which combines all the surface heat flux components (Eq. (1)). The calculations were based on the inverse Fourier transform of the cross-correlation functions, computed up to a maximum time lag.

Estimates of the spectral properties of the spatially averaged SST and of the net heat flux F_net exhibit the behavior typical of a damped, stochastically forced oscillator (Figure 1). The spectral density of the net atmospheric forcing is very close to white noise, whereas the SST response is characterized by a predominance of low-frequency spectral components. The results resemble the findings of Zhang et al. [19], who demonstrated that strong positive SST anomalies in the North Atlantic are preceded by intense heat fluxes directed from the ocean to the atmosphere. Our estimates indicate that this paradoxical effect is most pronounced for time shifts between 0 and 4 years. The phase shift between the SST and the net surface heat flux changes approximately from −π at zero frequency to 0 in the high-frequency domain (Figure 2b), in close agreement with the results of [19,20]. In the vast majority of CMIP5 models (26 of 30), SST anomalies are out of phase with the anomalies of the energy fluxes in the low-frequency domain. The remaining four models, IPSL-CM5B-LR, MIROC-ESM-chem, MRI_CGCM3, and NORESM1-ME, display a different dependence of the phase shift on frequency. This can be due to the relatively short periods of the control experiments or perhaps to not completely understood simulation problems [25].

Simple North Atlantic stochastic model

The model equations (2)-(3) describe the SST and the meridional overturning circulation, driven by stochastic surface fluxes, where σ(t) are the variances of the corresponding fluxes. The minus sign in the last relationship reflects the fact that warming of the North Atlantic SST inhibits deep water formation, and vice versa. We consider the two types of forcing separately, seeking the simplest possible solutions. Note also that in our model heat fluxes are considered positive when energy is gained by the ocean (in contrast to Gulev et al., 2013 [20]).

A solution of (4) can be written as a Duhamel integral, with the coefficients defined accordingly. The correlation function of the derivative of the Wiener process has the properties of the delta function. This means (see Appendix 1, expression A.1) that the cross-correlation function of the external forcing and the response can be obtained explicitly. The case of atmospheric stochastic excitation of SST differs only slightly from the standard linear oscillator: the Duhamel form of the solution can be represented as in [27], and, similarly to equation (7), we can express the cross-correlation of the forcing and the output.

Stochastic forcing of SST

This case is closely related to the model described by Hasselmann (1976) [28], linked with the classical Langevin equation. The phase shift between the input and output signals is determined by expression (13). It is important to note that within the low-frequency range the phase shift is positive, so that formally the output signal "leads" the input signal. In this way, heat fluxes at the sea surface can be mistakenly interpreted as a passive response to the ocean circulation. This peculiarity can create the illusory impression of a leading role of the ocean circulation in the formation of the AMO phenomenon. At least in the model (2)-(3), this is definitely not the case.
What is the origin of the phenomenon? A common-sense interpretation suggests that the cause precedes the effect; the inverse situation looks physically impossible. This, of course, is true. However, within the framework of cross-correlation or spectral analysis, the absence of an adequate model can lead to an inaccurate interpretation of cause-effect relationships. As a result, erroneous conclusions about the physical nature of the underlying processes can be made. A closely related cause-effect problem was found by Muryshev et al. (2016) [29] in a study of the lead-lag relationships between global mean temperature and CO2 content.

To figure out the origin of the anomalous behavior of the phase shift in the model (1)-(2), let us consider an input signal that is a differentiable function of time. In this case, the SST evolution can be described by a linear differential equation of the second order (14). In the event of a periodic external force, (14) can be integrated and rewritten in a form in which the phase shift between T and F_T is determined by the same equation (12).

Stochastic excitation of meridional overturning circulation

In this case, the atmospheric stochastic forcing acts directly on the meridional overturning circulation.

Discussion and Conclusions

Analyses of the CMIP5 control experiments show that the SST and the net heat fluxes at the North Atlantic air-sea interface correlate negatively in the low-frequency range and positively in the high-frequency range. This picture looks quite far from the classical Hasselmann scheme. However, the results can be explained in the framework of a simple stochastic model that takes into account the linear feedbacks of SST and the Atlantic meridional overturning circulation. The conceptual idea of a stochastically forced linear oscillator for the formation of the North Atlantic MDV has been proposed previously (e.g., [30,31]). Moreover, an almost linear relationship has been found between the NAO forcing and the AMOC response [32,33]. The stochastically forced damped oscillator concept appears adequate for explaining some quite intriguing North Atlantic climate phenomena. In particular, this hypothesis explains the low-frequency out-of-phase relationship between the spatially averaged SST and the net surface heat fluxes.

The analysis of the toy stochastic model reveals that the effect of the surface energy fluxes on MDV can be very different depending on the object of forcing. The impact of heat fluxes on SST can lead to phase shifts that are generally close to the classical Hasselmann scheme, except in the low-frequency domain. In this domain, SST anomalies formally seem to forerun the atmospheric forcing. It is important, however, that this is a quite elusive effect determined by the non-classical form of the oceanic oscillator excitation. We also have to note that direct forcing of SST cannot lead to a negative correlation between SST and heat fluxes; this follows from the bounds on the phase shift [27]. This result disagrees with the GCM experiment estimates, which as a rule point to a positive regression between the two governing indexes [25,34]. On the contrary, if the meridional overturning circulation is affected directly by the atmospheric stochastic forcing, the picture looks quite different. In this case, a strong resemblance is revealed between the analytical box model solution and the estimates built both on observational data [20] and on the CMIP5 control experiments. The SST and net heat fluxes are negatively correlated in the low-frequency domain and are in phase in the high-frequency limit.
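To make this contrast concrete, the sketch below evaluates the transfer function of a toy two-box linear system in which a white-noise surface flux F forces the overturning circulation, which in turn drives SST. The equations and rate constants are illustrative stand-ins (the paper's Eqs. (2)-(3) are not fully legible in this copy), but they reproduce the signature just described: a phase near ±π at low frequency and near 0 at high frequency.

```python
# A minimal sketch, assuming the toy system
#     dT/dt   = -lam_T * T + alpha * Psi
#     dPsi/dt = -lam_P * Psi - kappa * F(t)   # surface heating suppresses
#                                             # deep water formation
# so that T/F = -alpha*kappa / ((i*w + lam_T) * (i*w + lam_P)): negative real
# as w -> 0 (anti-phase) and positive real as w -> infinity (in phase).
import numpy as np

lam_T, lam_P, alpha, kappa = 0.5, 0.2, 1.0, 1.0   # arbitrary rates (1/yr)
w = np.logspace(-3, 2, 6)                         # frequency (rad/yr)
H = -alpha * kappa / ((1j * w + lam_T) * (1j * w + lam_P))
for wi, hi in zip(w, H):
    print(f"w = {wi:8.3f}  phase(T relative to F) = {np.angle(hi):+.2f} rad")
```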
The resemblance between the analytical solution and the empirical estimates suggests that the key mechanism of North Atlantic MDV formation is based on stochastic surface energy fluxes directly impacting the meridional overturning circulation. Moreover, under such conditions the stochastic model (2)-(3) is characterized by a positive correlation between the AMO and AMOC indexes (Bekryaev, 2016 [27]). Additional support for the hypothesis follows from a comparison of the decay of the cross-correlation functions in the model (2)-(3). Forcing of the meridional overturning circulation leads to negative values of the cross-correlation function between SST and net surface heat fluxes in the time lag range between zero and approximately eight years. These results are in reasonable concordance with the CMIP5 model experiments. An analysis of the phase shifts and other cross-correlation characteristics between the North Atlantic SST and the net surface heat fluxes localized in the regions of deep water formation could shed further light on the subject.

Appendix. The phase shift between stochastic forcing and system output

Cross-correlation function of the input process
Modeling Policy and Agricultural Decisions in Afghanistan

Afghanistan is responsible for the majority of the world's supply of poppy crops, which are often used to produce illegal narcotics like heroin. This paper presents an agent-based model that simulates policy scenarios to characterize how the production of poppy can be dampened and replaced with licit crops over time. The model is initialized with spatial data, including transportation network and satellite-derived land use data. Parameters representing national subsidies, insurgent influence, and trafficking blockades are varied to represent different conditions that might encourage or discourage poppy agriculture. Our model shows that boundary-level interventions, such as targeted trafficking blockades at border locations, are critical in reducing the attractiveness of growing this illicit crop. The principle of least effort implies that interventions decrease to a minimal non-regressive point, leading to the prediction that increases in insurgency or other changes are likely to lead to worsening conditions, and that improvements require substantial jumps in intervention resources.

I. INTRODUCTION

The conflict between the myriad political actors in Afghanistan over the past decade has resulted in a tremendous amount of social and economic instability. The inability to establish an adequate governance structure within the nation has adversely affected the integration of Afghanistan into the global political and economic system [1,2]. One outcome of this tumultuous socio-political environment has been the reemergence of poppy farming, particularly in the southern and eastern portions of the country [3,4]. Previously, during the Soviet invasion in 1978, Afghan warlords utilized the production of poppy to fund their regional armies [5]. Similarly, since 2002, the cultivation and sale of poppy plants has played an important role in financing anti-NATO and anti-U.S. resistance forces.

Poppy farming and its related activities hinder efforts to establish more legitimate economies throughout poppy growing regions. Peters argues that the poppy trade has greatly affected the motives of al Qaeda in Afghanistan and the Taliban, noting that some incursions are designed to protect drug shipments rather than advance political and territorial goals [6]. Additionally, many have written that political corruption is a problem tightly related to the opium trade [6-9]. Peters and Goodhand in particular argue that current zero-tolerance strategies, which are a result of poppy criminalization, are counterproductive. Understanding and dealing with the poppy trade in a way that does not victimize less prominent actors in the system is an important yet complicated component of bringing peace to Afghanistan.

A group of "less prominent actors" of particular interest in this system is Afghan farmers. Farmer decisions to cultivate this crop are primarily based on economics. In their 2010 Afghanistan Opium Survey, the United Nations Office on Drugs and Crime [10] reports that poverty and the crop's expected high sale price are the primary reasons farmers choose to grow poppy [11]. However, this decision also takes place in a complex system of dangers and rewards, government policy, political motivations, insurgency, local infrastructure, and other factors, all of which indirectly influence the economic viability of poppy cultivation.
Aside from the market price received for poppy, intermediaries may use non-economic forces as leverage for increasing or decreasing the benefits farmers receive from the poppy trade. For example, some Afghan farmers have reported that their "decision to plant opium [has] been 'influenced' by local commanders" [12]. Another example of the complex situation on the ground relates to the fact that fertilizer is banned in Afghanistan due to its frequent use in improvised explosive devices (IEDs), perhaps making the hardy poppy plant an attractive crop choice [13].

Here we characterize the elements influencing poppy growth in Afghanistan, encapsulating both the economic and non-economic incentives into a single measure: a dynamic simulation that systematically expresses the various factors as dollar amounts in order to gain a better understanding of why farmers decide to grow poppy. We construct an agent-based model of Afghan farmers' crop decisions over time and across space, in order to explore how these decisions might change under different policy scenarios, expressed through key leverage parameters in the model. The model is calibrated with actual crop prices and satellite-derived land-use imagery. We find that increases in the levels of insurgency significantly increase the level of poppy production in the regions in which it already exists, as well as in other regions that are proximate to unsecured border crossings. On the other hand, policies of increased national subsidy for licit crops and increased security on the Afghanistan-Pakistan border are effective in transitioning regions of poppy growth to licit crops.

An understanding of why farmers choose to grow certain crops is necessary to inform policy decisions aimed at establishing a more sustainable national economy. Identifying effective interventions is both a national and an international concern, as 93% of global opiate production currently occurs in Afghanistan [8,14]. Another critical benefit of an effective poppy reduction policy would be a parallel reduction in drug-related corruption within the Afghan government [15]. While exogenous forces, such as world demand for heroin, directly influence the dynamics of poppy production, a study of endogenous factors, such as local government and insurgency policies, can be a basis for local interventions. Interventions should recognize the need to provide alternative economic opportunities, as the current poppy growth plays an important economic role in the society as well as in individual lives, another reason policy directed at economic incentives could be both more robust and more generally beneficial to the society. Rubin [16] notes that past U.S.-led counternarcotic measures have been used by the Taliban to gain support against international forces. If these measures are complemented or replaced by economically supportive alternatives, Taliban forces will have less room to exert their own influence.

The next section describes the structure of the simulation. We construct a model of empirically estimated economic forces and address questions whose answers are robust to the uncertainties of the estimates. The third section describes the outcomes from a number of scenarios. A summary of implications is given in the concluding section.

II. MODEL STRUCTURE AND DATA

The dynamic model of agricultural choice is composed of a number of independent farmer agents, who choose what type of crop to grow on their land.
It is assumed that the agents use farming methods appropriate to their region (e.g. use of irrigation or dryland farming). Decisions about which crop to grow are based on a comparison of the net compensation the agents expect to receive from growing a licit crop or poppy, including social and logistical factors. As is understood in the general theory of market function, the role of prices in our model is to serve as an aggregator of many complicated details of agricultural and social processes. The value for licit crops is given by

V_c(t, r) = p_c + s - i_c δ_{c(t-1, r), poppy},

where the index c takes the values grain-cereal-fodder or fruit-nuts-vegetables, for the two different types of licit crops grown by the farmer agents in this model. Here p_c is the price a farmer receives for selling their harvest, s represents the subsidies received from national and international organizations (including indirect benefits like equipment and improved distribution capacities), and i_c is a one-time cost of switching to a licit crop if the farmer was growing poppy during the previous season. Here c(t, r) is the crop raised at time t at location r. The Kronecker delta function δ_{i,j} is equal to one when its two subscripts are equal (i = j) and zero otherwise (i ≠ j), functioning as a switch that "turns on" the initiation cost term only if the farmer agent (at location r) has not been growing the licit crop. The value for poppy is determined by

V_poppy(t, r) = p_poppy + f - e(r) - i_poppy (1 - δ_{c(t-1, r), poppy}),

where p_poppy is the price a farmer receives for selling their harvest. The force exerted on the farmer by insurgency, f, includes indirect benefits and incentives as well as the avoidance of strong-arm tactics. This is given by f = f_0 + f_1(r), where f_0 applies to all of Afghanistan and is a parameter adjusted in the model, and the term f_1(r) is particular to each province (and therefore spatially dependent). The cost of trafficking the drugs to the nearest exit point from the country, e, depends on the distance from the centroid of the province to the exit point via the road network, and thus depends on the location r. The "one-time" initiation cost i_poppy is the cost of switching from a licit crop to poppy if the farmer at the given location was growing a licit crop during the previous season.

Land use data were obtained from Afghanistan Information Management Services, and all arable land (e.g. land designated as suitable for irrigated, rain-fed, or vineyard agriculture) was identified to create a grid with 0.025-degree-squared cells, approximately equivalent to an area of 1.75 miles squared. Each cell was associated with the province in which it is located and assigned one of two licit crop types (grain-cereal-fodder or fruit-nuts-vegetables) based on information from 2008 reports prepared by the National Agricultural Information System for the Afghan Ministry of Agriculture, Irrigation, and Livestock [17] or, in cases where these reports are not available, from 2007 reports from the United Nations World Food Programme's Food Security Atlas [18]. A distinction between the aggregated categories of grain-cereal-fodder and fruit-nuts-vegetables is made due to their different environmental requirements and sale prices. Information on the number of hectares of poppy grown in each province [10] was used to assign some cells as poppy growing regions. The result of this process is shown in Figure 1. In the model, each farmer remembers its initial base designation (grain-cereal-fodder or fruit-nuts-vegetables) and only adopts its base crop type or poppy throughout the simulation.
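A minimal sketch of the per-agent decision rule implied by these value expressions follows. It uses the per-hectare figures quoted below in the text; the tie-breaking rule and the exact functional form are assumptions, and the value equations above are themselves reconstructed from the surrounding definitions.

```python
# A minimal sketch of one farmer agent's yearly crop choice, assuming the
# reconstructed value expressions: switching costs apply only when changing
# away from the previous season's crop.
def crop_choice(base_crop, prev_crop, p, s, f, e, i_cost):
    v_licit = p[base_crop] + s - (i_cost[base_crop] if prev_crop == "poppy" else 0)
    v_poppy = p["poppy"] + f - e - (i_cost["poppy"] if prev_crop != "poppy" else 0)
    return base_crop if v_licit >= v_poppy else "poppy"

p = {"grain": 1000, "fruit": 9000, "poppy": 3750}       # $/Ha sale prices
i_cost = {"grain": 500, "fruit": 4500, "poppy": 1875}   # $/Ha switching costs

# A grain farmer with strong insurgent influence (f = 300) near an exit point
# (hypothetical trafficking cost e = 100), at the baseline subsidy s = 1100:
print(crop_choice("grain", "grain", p, 1100, 300, 100, i_cost))  # stays licit
print(crop_choice("grain", "poppy", p, 1100, 300, 100, i_cost))  # stays poppy
```

The two calls illustrate the hysteresis created by the switching costs: at identical prices and subsidy, the agent keeps whichever crop it already grows.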
The initial area (approximately 12,500 cells or 5,700,000 hectares) and distribution of crops correspond well with published studies' documentation of where various types of crops are located in the country and of their volume of production [10,19]. In particular, the initial state assigns approximately 275 cells to poppy cultivation, which is equivalent to around 125,000 hectares, consistent with recent UNODC reports [11,20]. There is slight variation in this value between different simulations of the model because the crop assignment is probabilistic. For the purposes of our analysis it is assumed that prices for all crops remain constant, with grain-cereal-fodder crops priced at $1,000 per Ha [10], fruit-nut-vegetable crops at $9,000 per Ha [21], and poppy at $3,750 per Ha [20].

The local insurgent influence parameter, f_1(r), is assigned an initial value using a map from the International Council on Security and Development depicting the level of insurgent activity in Afghan provinces [22]. The influence parameter uses a dollar amount to represent the added financial incentive (or avoidance of financial loss) faced by farmers confronted with insurgent forces. In addition, direct threats to life or limb [23] are considered a part of this term. Farmers in "light insurgent activity areas" are assigned a random base value between f_1 = $0 and $100 per Ha, those in "substantial insurgent activity areas" a value between f_1 = $100 and $200, and farmers in "heavy insurgent activity areas" a value between f_1 = $200 and $300. The maximum value of f_1 = $300 per Ha is chosen because this is the amount counter-insurgency operations of the U.S. Marines have paid farmers to destroy their opium crops [24]. We assume regions with high insurgency levels will be able to counter U.S. offers with incentives of all kinds whose total value is a similar dollar amount.

The switching costs of farmer activity, i_c, were estimated to be $500, $1,875, and $4,500 per Ha for changing to c = grain-cereal-fodder, poppy, and fruit-nut-vegetable, respectively. This is equivalent to half of the revenue from one harvest. Because a farmer will choose the crop that will provide the most income for the following season, the switching costs affect the impact of interventions seeking to change farmer crop choices. A distinction between single and multi-year switching costs, which might arise for fruit-nut-vegetable crops, was not included, but would not have any impact on the conclusions.

The first step in calibrating the model is to develop a baseline scenario, where the numbers of poppy, grain-cereal-fodder, and fruit-nut-vegetable cells are consistent with the previously mentioned distribution of crops throughout Afghanistan. We find that an appropriate stable baseline scenario is obtained when the insurgency level, f_0, is held at the initial values, drug trafficking blockades are present at the northern two exit points, and the subsidy per hectare parameter, s, is set between $1,100 and $2,500. Within this range the subsidy parameter can be changed without affecting the system's behavior; the stability is due to the switching costs. We choose to use the minimum value in the subsidy range (s = $1,100) as the reference value for the baseline scenario because we assume that no more than necessary would be spent to maintain stability.
The first step in calibrating the model is to develop a baseline scenario in which the numbers of poppy, grain-cereal-fodder, and fruit-nut-vegetable cells are consistent with the previously mentioned distribution of crops throughout Afghanistan. We find that an appropriate stable baseline scenario is obtained when the insurgency level, f₀, is held at its initial value, drug trafficking blockades are present at the northern two exit points, and the subsidy per hectare parameter, s, is set between $1,100 and $2,500. Within this range the subsidy parameter can be changed without affecting the system's behavior; this stability is due to the switching costs. We choose the minimum value in the subsidy range (s = $1,100) as the reference value for the baseline scenario because we assume that no more than necessary would be spent to maintain stability. This is the principle of least effort, which follows from an assumption of short-term rationality of policy makers who have competing uses for resources [25]: increases in subsidy level that do not achieve any improvement are reversed, while decreases are maintained until they manifest deteriorating conditions. This results in a level of support just above the minimum needed to sustain the current level of poppy growth. We find that the subsidy level assumptions are reasonable considering the present levels of international aid in Afghanistan. Providing a subsidy of $1,100 per hectare to the farmers tending Afghanistan's 5,700,000 hectares of arable land results in a total cost of around 6 billion dollars per year. This is a reasonable estimate of the security, infrastructure, and more direct assistance to Afghan farmers, given that it is 5% of the estimated US military expenditures of $120 billion [26] and on the order of the international humanitarian aid to Afghanistan of $36 billion between 2001 and 2009 [27], including $4 billion spent by the US on aid in 2010 [28]. It is important to note that the subsidy range that produces a stable scenario depends on the other parameter settings of the scenario.

III. INTERVENTION SCENARIO RESULTS

The general behavior of the model can be understood from direct considerations. A sufficient increase in the subsidy of licit crops causes a reduction in the growth of poppy. Poppy production increases (decreases) with increased (decreased) levels of insurgency. Improved levels of border security, particularly with Pakistan, reduce the growth of poppy. A complete blockade of all border crossings eliminates poppy growth everywhere. Changes are inhibited by the cost associated with changing crops. Aside from extreme parameter values, land associated with fruit-nut-vegetable growth does not participate in the poppy economy, due to the relatively high prices for those crops. These effects, however, interact with each other in specific ways that can be understood from simulations of the model. For example, subsidies must be raised beyond the maximum of the baseline equilibrium subsidy range, s = $2,500 per hectare, in order to achieve a reduction in poppy cultivation. At subsidy values below $1,100, poppy growth increases dramatically, by an amount that depends on the security scenario chosen. Thus, reducing the growth of poppy requires exceeding a certain threshold of economic investment and security enhancement. Within the intermediate range of values, which yield similar results, increasing investment in subsidy would increase costs without achieving any significant impact; at the borders of that range, however, changes in subsidies move the system into a more or less desirable condition.
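The threshold behaviour described above suggests a simple way to bracket the stable subsidy range numerically. This hypothetical sketch assumes a run_scenario function that returns the stable number of poppy cells for given parameter settings; no such function is defined in the paper.

```python
# Hypothetical sweep over the subsidy parameter s to bracket the stable
# baseline range; run_scenario is an assumed interface, not the authors'.
def sweep_subsidy(run_scenario, baseline_cells, s_values=range(0, 4001, 100)):
    results = {s: run_scenario(subsidy=s) for s in s_values}
    # Subsidies where poppy expands relative to baseline, and where it shrinks.
    below = [s for s, cells in results.items() if cells > baseline_cells]
    above = [s for s, cells in results.items() if cells < baseline_cells]
    # The two thresholds bracket the range in which behaviour is unchanged.
    return (max(below) if below else None, min(above) if above else None)
```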
A few scenarios are shown in Figure 3. For each scenario, we present a plot of the number of agents farming the three types of crops at each time step, where a time step represents a year. In addition, we present maps of the resulting stable geographic distribution of crops at the end of each simulation. The first scenario increases the level of insurgent influence, f₀, holding all other parameters at their baseline values. A resurgence of insurgency throughout Afghanistan is likely to correspond to a concurrent increase in poppy production, as the funding of such a movement would benefit from illegitimate sources such as drug trafficking; this is seen in Figure 3A-B. The expansion of opium production occurs in the southwest and southeast of Afghanistan, where insurgency, f₁, is already relatively high, drug trafficking exit points are nearby, and most licit agriculture is of the less profitable grain-cereal-fodder variety. The second scenario considers the impact of increasing the subsidy level from s = $1,100 to $2,200 and implementing a drug trafficking blockade at the southwestern exit point. This scenario is designed to explore how a well-funded counter-narcotics campaign might eradicate opium production in the southwestern region of the country. Farmers who were initially growing poppy experience both incentives (subsidies) and disincentives (increased trafficking costs), and the majority switch to their base crop. As seen in Figure 3C-D, such a strategy reduces the number of poppy cells to under 100 agents, equivalent to a 62% reduction of production. The final scenario involves a slight increase of the insurgent influence parameter, adding $200 to each farmer's baseline insurgent influence value, f₁, and the removal of the northwestern drug trafficking blockade. This scenario might be realized if international forces experience additional constraints on their financial and physical resources. We also considered the addition of a southwestern drug trafficking blockade in this scenario; the resulting crop dynamics and distributions are shown in Figure 3E-F. Implementing a thorough blockade, however, would have important social consequences for the large number of workers tied to the agricultural industry. While some poppy farmers may be able to switch to legal crops, additional opportunities should be provided to those who have no other means of making a living. While not included in our model, this can reasonably be expected to include those involved in the opium production and trafficking process. The implications of eliminating this portion of the Afghan economy could include an increase in social unrest due to unemployment and instability in the agricultural sector.

One limitation of the present implementation of this model is that crop prices are held constant. Because Afghanistan is responsible for the production of such a large portion of the world's opium supply, a reduction in the number of poppy farms would likely make the crop more profitable and create a balancing feedback: the higher prices would increase the economic incentive for production and thus resist a further reduction of poppy farming. While a dynamic poppy pricing model might be an improvement, the effect of external price increases may be limited because much of the money gained from selling opium goes directly to warlords, drug traffickers and dealers, and government figures [30]. This shields poppy farmers from price swings to some extent and makes the use of fixed prices in the simulation more reasonable as a first approximation [31]. Adding a more detailed description of current Afghan farms and farming, and the related socio-economic forces at work, might yield more precise analyses, but should not affect the general trends described above. Annual and progressive climate variations, which might also affect crop choices, would not affect the overall analysis of policy impacts or its conclusions.

In this paper we have constructed a model that describes the role of economic forces in Afghan farmers' crop choices. The Afghan farmer choice model uses geographic data to provide new policy insights into the dynamics behind the production of poppy crops.
While our considerations represent only part of the worldwide opium trade network, understanding what actions are likely to reduce the number of farmers choosing to cultivate poppy may be helpful in constructing policy based upon achievable targets. This is particularly important since the model shows that without a certain threshold level of financial and/or security efforts, impact will be very limited. Through this understanding, more comprehensive and systemic strategies can be devised to fight opium production, trafficking, and use. This work was supported in part by ONR under grant N000140910516 and AFOSR under grant FA9550-09-1-0324.
A Spectrum Access Based on Quality of Service (QoS) in Cognitive Radio Networks

The quality of service (QoS) is an important issue for cognitive radio networks. In a cognitive radio system, licensed users, also called primary users (PUs), are authorized to use the wireless spectrum, while unlicensed users, also called secondary users (SUs), are not. SUs access the wireless spectrum opportunistically when it is idle. However, while an SU uses an idle channel, the return of a PU forces the SU to terminate its communication and leave the channel, so QoS is difficult to ensure for SUs. In this paper, we first propose an analytical model to obtain QoS metrics for cognitive radio networks, namely the blocking probability, completed traffic, and termination probability of SUs. When PUs use the channels frequently, the QoS of SUs is difficult to ensure, especially the termination probability. We then propose a channel reservation scheme to improve the QoS of SUs: terminated SUs move to reserved channels and continue their communications. Simulation results show that our scheme improves the QoS of SUs, especially the termination probability, at a small cost in blocking probability in a dynamic environment.

Introduction

Today, the internet of things, cloud computing, and mobility have generated enormous volumes of data for users, and much of this data must be transmitted over wireless systems. At the same time, the wireless spectrum has been allocated to licensed users while spectrum utilization remains low. To improve the utilization of spectrum resources, cognitive radio (CR) technology [1] has recently emerged as a promising solution. In a cognitive radio system, licensed users, called primary users (PUs), are authorized to use the wireless spectrum, and unlicensed users, called secondary users (SUs), are not. Cognitive radio allows SUs to access licensed spectrum opportunistically only when the spectrum is not used by PUs. While an SU uses an idle channel, the return of a PU forces the SU to terminate its communication and leave the channel; when this occurs, the SU's transmission is terminated compulsorily, making QoS difficult to ensure for SUs. In a wireless system, PUs may have much data to transmit and therefore use the licensed spectrum frequently, which leads to a high probability of forced termination for SUs and makes their QoS difficult to ensure. There is a substantial body of related literature on the QoS of SUs in cognitive radio systems. In distributed cognitive radio networks, QoS for delay-sensitive applications is studied in [2]. Considering the QoS requirements of SUs, cross-layer methods are proposed to allocate resources reasonably in [3,4]. In [5,6], analytical models are proposed to evaluate SU performance in cognitive radio networks. Considering energy effects, the authors of [7] propose an energy-efficient handoff strategy using a partially observable Markov decision process. In [8], a spectrum access strategy with an α-retry policy is proposed to enhance the QoS of SUs. To satisfy delay requirements in the cognitive radio system, the authors propose a novel component carrier configuration and switching scheme for real-time traffic in [9].
From the spectrum resource management perspective, a comprehensive analytical framework based on queueing theory is proposed to analyze the QoS of SUs in [10]; further related work can be found in [11-13]. The aforementioned studies do not consider QoS in a cognitive radio system where SUs' transmissions are often terminated by PUs. In this paper, the QoS of SUs is investigated in such a system. SUs access the wireless spectrum opportunistically when it is not used by PUs; however, while an SU uses an idle channel, the return of a PU to that channel forces the SU to terminate its communication and leave the channel. We first propose an analytical model to derive QoS metrics for SUs, namely the blocking probability, completed traffic, and termination probability. To improve the QoS of SUs, especially the forced termination probability, we then propose a channel reservation scheme that allows terminated SUs to move to reserved channels and continue their transmissions. Simulation results show that the proposed scheme improves the QoS of SUs, especially the termination probability, at a small cost in blocking probability in a dynamic environment. The rest of the paper is organized as follows. In Section 2, the analytical model is presented to obtain the QoS of SUs. In Section 3, a channel reservation scheme is proposed to improve the QoS of SUs. In Section 4, simulation results show that our scheme reduces the termination probability at the cost of a small increase in blocking probability. Finally, the paper is concluded in Section 5.

The Analysis Model

In the system, there are N channels with equal bandwidth. When PUs do not use one or several channels, SUs may access the idle channels for transmission. SUs that have data to transmit must therefore sense the channels periodically, as shown in Fig 1. At the beginning of each period, SUs perform spectrum sensing to find out whether a channel is used by PUs or not. We assume that spectrum sensing is perfect and that the sensing time is negligible. When SUs find idle channels after spectrum sensing, each SU can access only one idle channel until the next sensing. When a PU returns to a channel carrying an SU's data, the SU must leave the channel and its transmission is terminated. In this section, the QoS of SUs is analyzed in terms of blocking probability, completed traffic, and termination probability. It is assumed that SU traffic arrivals and departures follow Poisson random processes. In the whole system, let λ denote the arrival rate of SUs, i.e. how many SUs want to transmit data per unit time, and μ the departure rate of SUs, so that the average transmission time of each SU is 1/μ. The whole input traffic of SUs is A = λ/μ. When n channels are idle (unused by PUs), and each SU can access only one idle channel, at most n SUs can use the idle channels. If more than n SUs want to access channels, some SU traffic is blocked. According to the Erlang loss formula, the probability of SU traffic blocking is

$B = \frac{A^{n}/n!}{\sum_{i=0}^{n} A^{i}/i!}$

Considering traffic blocking, the completed traffic of SUs would be A(1−B) if no PUs used the channels during SU transmissions. However, PUs may return to channels on which SUs are transmitting; the SUs must then leave these channels and their transmissions are terminated. Therefore, A(1−B) does not denote the real completed traffic.
In each channel, we assume that PU traffic arrivals and departures follow Poisson random processes. Let λ₁ denote the arrival rate of PUs and μ₁ the departure rate of PUs for each channel, and let T denote the duration of one period for SUs. For each idle channel used by an SU, the probability that a PU arrives during the SU's transmission is $p = 1 - e^{-\lambda_1 T}$. When PU traffic occurs in j channels where SUs are transmitting data, the SU traffic in those j channels is terminated and only n − j channels can be used by SUs. Since PU traffic is independent across channels, the probability that PU traffic occurs in j of the n channels is

$\binom{n}{j}\, p^{j} (1-p)^{n-j}$

During a period, when PU traffic comes in one out of n channels, SUs can use only n − 1 channels after the PU traffic arrives. According to the Erlang loss formula, the probability of SU traffic blocking is then

$B_1 = \frac{A^{n-1}/(n-1)!}{\sum_{i=0}^{n-1} A^{i}/i!}$

Before the PU traffic arrives, the SU blocking probability is still B. The probability density function of the duration between the beginning of the period and the moment the PU traffic arrives is $f(t) = e^{-\lambda_1 t}$. The corresponding completed traffic of SUs, Tra(1), follows by averaging over this arrival time. During a period, when PU traffic comes in two out of n channels, it is assumed that PU traffic comes first in one channel and second in another channel. Let t₁ denote the duration between the beginning of the period and the moment PU traffic arrives first, and t₂ the duration until PU traffic arrives second (t₁ ≤ t₂). For each channel, the PU traffic is independent; therefore, the probability density function of (t₁, t₂) is $f(t_1, t_2) = f(t_1) f(t_2) = e^{-\lambda_1 t_1} e^{-\lambda_1 t_2}$. When no PU traffic has arrived, the SU blocking probability is B; when PU traffic has arrived first in one out of n channels, it is B₁; when PU traffic has arrived second in another channel, it is B₂, which by the Erlang loss formula is

$B_2 = \frac{A^{n-2}/(n-2)!}{\sum_{i=0}^{n-2} A^{i}/i!}$

The corresponding completed traffic of SUs, Tra(2), follows in the same way. During a period, when PU traffic comes in three or more channels, the corresponding completed traffic Tra(j) (j ≥ 3) can be calculated by the same method. Therefore, the real completed traffic of SUs is

$Tr = \sum_{j=0}^{n} P\{\text{PUs return in } j \text{ channels}\}\, Tra(j) \quad (5)$

where Tra(0) = A(1−B). Compared with the completed traffic A(1−B) that SUs would achieve without PU traffic coming, the terminated SU traffic is A(1−B) − Tr. Therefore, the termination probability of SUs can be obtained as

$\frac{A(1-B) - Tr}{A(1-B)}$
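As an illustration, the building blocks of this derivation can be computed directly. This is a minimal Python sketch based on the formulas as reconstructed above; the function names are ours, not from the paper, and the example values use the parameters reported later (λ₁ in s⁻¹, T in seconds).

```python
from math import comb, exp, factorial

def erlang_b(A: float, n: int) -> float:
    # Erlang loss formula: blocking probability for offered traffic A
    # (in Erlangs) on n channels.
    numer = A**n / factorial(n)
    denom = sum(A**i / factorial(i) for i in range(n + 1))
    return numer / denom

def prob_pu_return(n: int, j: int, lam1: float, T: float) -> float:
    # Probability that PUs return on exactly j of n channels during one
    # period of length T; channels are independent with per-channel
    # probability p = 1 - exp(-lam1 * T).
    p = 1.0 - exp(-lam1 * T)
    return comb(n, j) * p**j * (1.0 - p) ** (n - j)

# Example: A = 5 Erl of SU traffic, n = 10 idle channels.
B = erlang_b(5.0, 10)                       # blocking before any PU returns
B1 = erlang_b(5.0, 9)                       # blocking after one PU return
P1 = prob_pu_return(10, 1, lam1=0.02, T=0.02)
```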
Channel Reservation

In a wireless system, PUs may have much data to transmit and arrive on channels irregularly, so SU transmissions are terminated frequently. To ensure the QoS of SUs, a channel reservation scheme is proposed in this section. When idle channels are reserved, terminated SUs can move immediately to the reserved channels and avoid the termination of their communications. With the channel reservation scheme, we analyze the QoS of SUs in terms of blocking probability, real completed traffic, and termination probability. Let m denote the maximum number of channels that SUs can use: no more than m channels are assigned to SUs no matter how many idle channels there are, the remaining idle channels are reserved, and terminated SUs can move to them. Obviously, if the current number of idle channels is less than m, SUs can only access the current idle channels. We assume that SU traffic arrivals and departures follow Poisson random processes. In the whole system, let λ denote the arrival rate of SUs and μ the departure rate of SUs; the whole input traffic of SUs is A = λ/μ. When n channels are idle (unused by PUs), SUs cannot access all these idle channels but only m of them (m < n), since the channel reservation scheme is used. According to the Erlang loss formula, the probability of SU traffic blocking under the reservation condition is

$Br = \frac{A^{m}/m!}{\sum_{i=0}^{m} A^{i}/i!}$

The completed traffic of SUs is A(1−Br) without PU traffic coming, and there are n − m idle channels that are reserved. However, PUs may return to the channels where SUs are transmitting data. When PUs want to use one or several of the m channels that SUs are using, the SUs must leave these channels and move immediately to reserved channels. As long as reserved idle channels remain, forced termination does not occur. Therefore, during a period, when PU traffic comes in fewer than n − m + 1 out of n channels, no forced termination occurs; after all reserved channels are used, forced termination may happen. During a period, when PU traffic comes in n − m + 1 out of n channels, all reserved channels are used and the SU traffic in one channel must be terminated. SUs can use only m − 1 channels after this PU traffic arrives. According to the Erlang loss formula, the probability of SU traffic blocking is then

$Br_1 = \frac{A^{m-1}/(m-1)!}{\sum_{i=0}^{m-1} A^{i}/i!}$

Before the (n−m+1)-th PU arrival, the SU blocking probability is still Br. In each channel, it is assumed that PU traffic arrivals and departures follow Poisson processes. Let λ₁ denote the arrival rate of PUs and μ₁ the departure rate of PUs for each channel, and T the duration of one period for SUs. The probability density function of the duration between the beginning of the period and the moment of the (n−m+1)-th PU arrival is $f(t) = e^{-\lambda_1 t}$, and the corresponding completed traffic of SUs, Trar(n−m+1), follows by averaging over this arrival time. During a period, when PU traffic comes in n − m + 2 out of n channels, the SU traffic in two channels must be terminated. For these two channels, it is assumed that PU traffic comes first in one channel and second in the other. Let t₁ denote the duration between the beginning of the period and the first of these PU arrivals, and t₂ the duration until the second (t₁ ≤ t₂). For each channel, the PU traffic is independent; therefore, the joint probability density function of (t₁, t₂) is $f(t_1, t_2) = f(t_1) f(t_2) = e^{-\lambda_1 t_1} e^{-\lambda_1 t_2}$. When PU traffic has come in n − m channels, the SU blocking probability is Br; after n − m + 1 arrivals it is Br₁; after n − m + 2 arrivals it is

$Br_2 = \frac{A^{m-2}/(m-2)!}{\sum_{i=0}^{m-2} A^{i}/i!}$

according to the Erlang loss formula, and the corresponding completed traffic Trar(n−m+2) follows as before. During a period, when PU traffic comes in more than n − m + 2 channels, the corresponding completed traffic Trar(j) (j ≥ n−m+3) can be calculated by the same method. Therefore, the whole completed traffic, Trr, can be obtained as

$Trr = \sum_{j=0}^{n-m} P\{\text{PUs return in } j \text{ channels}\}\, A(1-Br) + \sum_{j=n-m+1}^{n} P\{\text{PUs return in } j \text{ channels}\}\, Trar(j) \quad (10)$

and the forced termination probability can be derived as

$\frac{A(1-Br) - Trr}{A(1-Br)}$
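Under the reservation scheme, the same Erlang loss helper yields the successive blocking probabilities with m usable channels. The sketch below reuses the erlang_b function from the snippet above; the naming and example values are ours.

```python
def reservation_blocking(A: float, m: int) -> tuple:
    # SUs may use at most m channels: blocking before any reserved channel
    # is exhausted, and after the (n-m+1)-th and (n-m+2)-th PU returns.
    Br = erlang_b(A, m)
    Br1 = erlang_b(A, m - 1)
    Br2 = erlang_b(A, m - 2)
    return Br, Br1, Br2

# Example: 8 Erl of SU traffic, at most m = 12 of the idle channels usable.
Br, Br1, Br2 = reservation_blocking(8.0, 12)
```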
Numerical Results and Discussions

In this section, the QoS of SUs with channel reservation is analyzed in terms of blocking probability, real completed traffic, and termination probability according to the analytical model above, and the analytical results are validated by simulations. The parameters for Figs 2 and 3 are as follows: the duration of one period for SUs is T = 20 ms; there are N = 30 channels in the system; the departure rate of PUs for each channel is μ₁ = 0.02 s⁻¹; and the whole input traffic of PUs is 30 Erl. The Erlang (Erl) is the unit of input and completed traffic, obtained by dividing the arrival rate by the departure rate. In Fig 2, the probability of SU traffic blocking is analyzed when the input traffic of SUs equals 5 Erl and 8 Erl, respectively. When there are idle channels, SUs can access them, but our channel reservation scheme limits the maximum number of allowable channels (denoted by m in Section 3) that SUs can use; the remaining idle channels are reserved for terminated SUs to continue their communications. Therefore, as the maximum number of allowable channels increases, the number of reserved channels decreases. Since more idle channels are used by SUs, the SU blocking probability decreases. Fig 3 depicts the completed traffic of SUs as a function of the maximum number of allowable channels. When the input traffic of SUs equals 5 Erl, a few idle channels suffice to carry the SU traffic because the load is low; as the maximum number of allowable channels increases, the completed traffic of SUs increases only slightly. When the input traffic of SUs equals 8 Erl, a few idle channels can no longer satisfy the SU traffic, and the completed traffic increases more noticeably with the maximum number of allowable channels than under low input traffic. In Fig 4, the number of channels is N = 30, the departure rate of PUs is μ₁ = 0.02 s⁻¹, and the whole input traffic of PUs is 30 Erl. When the duration of one period for SUs equals 20 ms and 40 ms, the SU termination probability decreases as the number of reserved channels increases, because more channels are reserved for SUs to avoid termination. Channel reservation thus decreases the forced termination probability while increasing the blocking probability. In general, channel reservation is advantageous because users are more sensitive to the termination of an ongoing transmission than to the rejection of a new one.

Conclusion

In this paper, an analytical model is presented to derive the quality of service (QoS) of SUs, namely the blocking probability, completed traffic, and termination probability, in a wireless system. Since PUs have much data to transmit and often access the channels, the QoS of SUs is difficult to ensure, especially the forced termination probability. Therefore, a channel reservation scheme is proposed to improve the QoS of SUs: terminated SUs move to reserved channels and continue their transmissions. Simulation results show that our scheme improves the QoS of SUs, especially the termination probability, at a small cost in blocking probability in a dynamic environment.
Gendered university major choice: the role of intergenerational transmission

In this paper, I study the role of gender-typical parental occupation for young adults' gender-typical university major choice using data on a recent cohort of university students in Germany. Results show significant intergenerational associations between the gender typicality of parental occupations and young adults' majors. As to why these effects occur, findings suggest that the transfer of occupation-specific resources from parents to their children plays an important role and that a transmission of gender roles explains at least some of the father-son associations. The paper contributes to the existing literature by introducing a novel measure that operationalises the extent to which majors and occupations are 'typically female' or 'typically male' and by studying different transmission channels.

Introduction

Gender differences in the labour market persist despite narrowing gaps between men and women in labour force participation, earnings, and occupations since the mid-twentieth century. The gender earnings gap in particular has received increased attention over the past decade. One fact emerging from research is that the gender gap in earnings tends to be wider among university graduates than among those with lower education levels (Goldin et al., 2017; OECD, 2020). Existing literature suggests that an important part of the gender pay gap among university graduates stems from choices made earlier in the life course, namely university majors (Brown and Corcoran, 1997; Charles and Bradley, 2002; Machin and Puhani, 2003; Black et al., 2008). Men are more likely than women to study STEM (science, technology, engineering, mathematics) fields, while women are overrepresented in humanities, social sciences, and educational sciences (Leuze and Strauß, 2009). Women are also more likely to choose majors that typically lead to occupations with lower earnings and fewer opportunities for career progression (Charles and Bradley, 2002; Blau and Kahn, 2017). Gendered major choices thus have direct consequences for occupational segregation, for wage gaps, and for so-called glass ceilings, that is, the invisible barriers that prevent women from achieving top incomes and positions (Ponthieux and Meurs, 2015; Bertrand, 2018). Sex segregation by university major also has important indirect consequences. For example, it may reinforce existing gender norms and stereotypes, thereby limiting the perceived educational choices of future generations (Charles and Bradley, 2009). Most research seeking to explain the determinants of gendered major choices focuses on one of two types of factors. Some show the relevance of individual-level characteristics, including personality traits such as competitiveness, beliefs about enjoying coursework, and preferences over expected jobs (Antecol and Cobb-Clark, 2013; Zafar, 2013). Others focus on the role of the social environment, such as teacher role models or the sex composition of high school peers (Carrell et al., 2010; Brenoe and Zoelitz, 2019). However, few studies have investigated the role of parents in shaping the choice of university major (e.g. Humlum et al. 2018; Vleuten et al. 2018). This is despite the fact that parents transmit occupation-specific resources to their children (Vleuten et al., 2018). Moreover, children observe and learn from the gender roles enacted by their parents (Crouter et al., 1995; Platt and Polavieja, 2016).
For example, children learn about the degree to which their parents follow traditional gender roles by observing their occupations (Polavieja and Platt, 2014). This is because occupations differ in the degree to which they are regarded as typically female or typically male. The aim of this paper is to analyse whether the degree of femininity of mothers' occupation and the degree of masculinity of fathers' occupation affect whether their adult children choose gender-typical majors at university and to study underlying transmission channels, using Germany as a case study. Specifically, I distinguish between the transmission of occupation-specific parental resources and the transmission of gender norms. To capture the degree to which a mother's occupation is regarded as typically female, I construct a rank-based measure based on the share of women in the occupation she held when her child was aged 15. I call this measure 'femininity rank of mothers' occupation' or 'mothers' rank'. Similarly, I construct masculinity rank in fathers' occupations (fathers' rank), masculinity rank in sons' majors (sons' rank), and femininity rank in daughters' majors (daughters' rank). I use the term 'gender-typicality rank' to refer to masculinity and femininity rank at the same time. Similarly, I use 'gender-typical' when referring to typically male and typically female majors simultaneously. I exploit unique survey data of a nationally representative cohort of first-year undergraduate students in Germany in 2010. Using regression analysis, I examine the association between femininity rank of mothers' occupation and masculinity rank of fathers' occupation on the one hand and the gender-typicality rank of young adults' university majors on the other hand. I thereby capture intergenerational positional changes in each person's position relative to others of the same cohort and sex. Germany is an important case study because its labour market exhibits low occupational mobility. This means that initial major choices at university have long-lasting effects on career outcomes, such as lifetime earnings (Aisenbrey and Brückner, 2008). Approximately three-quarters of the gender earnings gap at labour market entry in Germany is explained by university major choice (Braakmann, 2008). This matters especially given that the gender pay gap in Germany is particularly high among university graduates. In 2006, women with Abitur (school-leaving certificate) and a vocational qualification earned 38% less than equally qualified men, while tertiary-educated women earned 42% less than men with comparable qualifications (OECD, 2008). I find that sons choose less typically male majors if their fathers worked in less typically male occupations, as measured by their respective 'masculinity rank'. Sons' choice is not correlated with their mother's occupation. Daughters choose more typically female majors if their fathers worked in less typically male occupations and if their mothers worked in more typically female occupations. While the father-son and father-daughter associations hold generally, the mother-daughter association is statistically significant only under certain conditions: if mothers possess tertiary education, if mothers were employed, and among those living in East Germany. Moreover, the relationship between mother's occupation and child's major appears to be linear, whereas for fathers there are important nonlinearities. 
Specifically, results from quantile regressions and heterogeneity analyses show that the significant effects appear to be driven by fathers and students in less gender-typical occupations and university majors. This suggests that fathers in gender-atypical occupations can help break gender stereotypes, and that the findings of the paper are at least partially driven by sons and daughters who defy gender-stereotypical major choices. In terms of effect size, a one standard deviation increase in the masculinity rank of fathers' occupation is associated with a 3% decrease in daughters' femininity rank and a 5% increase in sons' masculinity rank in major. As to why these effects occur, a large part of the results appears to be driven by children choosing a major that is closely related to the parental occupation. This supports a 'direct transfer of resources' channel, that is, the transfer of occupation-specific skills, resources, and networks from parents to their children. The results also suggest that at least some of the father-son associations are due to a transmission of gender roles, as embodied by fathers via the gender typicality of their occupation. The findings from this study have important implications. First, the relevance of parental socialisation points to the importance of policies that address the early roots of gendered major choices. Second, the interactive effect of parental education with masculinity/femininity in parental occupation suggests that the status of role models may matter more than their sex for young people to identify with them. Third, the finding that intergenerational associations are strongest between fathers and sons points to a need for policy to focus on men (and not predominantly on women) when attempting to tackle sex segregation in the labour market. While it is important to encourage women to enter highly paid STEM fields, policy should also aim at changing men's attitudes and encouraging them to enter traditionally female-dominated fields. The finding that sons with fathers in less gender-typical occupations choose less typically male university majors is therefore encouraging. I make three contributions to the existing literature. First, the paper improves the understanding of gendered major choices by providing the first analysis of the role of gender typicality of parental occupation in Germany. Second, I introduce a new rank-based measure, which is used in research on intergenerational income mobility (Chetty et al., 2014) but has not been applied to gendered occupational and major choices. This is unfortunate because previously used measures, based on the share of women in an occupation or major, are affected by changes in the sex composition of the workforce as a whole. Instead, rank measures capture positional mobility between parents and their children, whereby each person's position is relative to others of the same cohort and sex. Finally, I am able to distinguish between two different transmission channels by disentangling the transmission of parental resources from that of gender norms. I thereby contribute to the literature on the intergenerational transmission of gender norms, which mainly draws evidence from intergenerational associations in female labour force participation and does not allow for such a distinction. Identifying transmission channels is important for the design of effective policies to address sex segregation in university majors. The remainder of the paper is organised as follows.
Section 2 reviews existing evidence on the determinants of gender differences in university major choice and describes how university major choice operates in Germany. Section 3 presents the data and methods. Section 4 reports the results, Section 5 studies transmission channels, and the last section concludes.

Determinants of gender differences in major choices

University major choice is complex and influenced by many factors, including expected earnings, perceived own ability, and exposure to a given major, among others (see Altonji et al. 2016 for a recent review). A subset of this literature studies the drivers behind gender differences in major choices. Empirical research on the determinants of gendered major choices tends to focus on one of two types of factors. Some argue that individual-level factors determine gendered major choices. For example, research has shown that gender differences in personality traits such as competitiveness, beliefs about enjoying coursework, and preferences over expected jobs all contribute to the gender gap in majors (Antecol and Cobb-Clark, 2013; Zafar, 2013). While important, these papers ignore that gendered preferences and self-conceptions are a result of gender socialisation processes (Cech, 2013). Other research studies the role of the social environment for the probability of choosing specific groups of majors. This strand of research shows that the social environment directly affects gendered major choices in many cases. For example, a recent paper finds that a higher proportion of female high school peers reduces women's probability and increases men's probability of choosing a STEM major (Brenoe and Zoelitz, 2019). Having female teachers increases women's likelihood of choosing a STEM degree (Carrell et al., 2010; Bottia et al., 2015), and having a sister increases men's likelihood of studying economics, business, or engineering (Anelli and Peri, 2015). While important in its own right, using STEM as an outcome measure when studying sex segregation by university major more broadly has several shortcomings. First, there is substantial heterogeneity in sex composition within STEM majors and within other broad groups of majors, which is a shortcoming for those interested in the factors underlying the persistent gender differences in major choices. Moreover, a binary STEM measure tends to put strong emphasis on the lack of women in STEM fields while ignoring the underrepresentation of men in certain other fields as the flip side of sex segregation in majors. To overcome these shortcomings, I introduce a novel measure of gender typicality, which I describe in Section 3. Although the family is a key agent of primary socialisation (Bandura, 1977), only few papers study the role of parental transmission for gendered major choices. In particular, there is not much evidence on the importance of parents' occupation and specifically the degree to which these occupations are typically male or female. Two recent studies address this gap by analysing the association between the share of women in parents' occupation or educational field and the share of women in offspring's educational field, with different results. A study in Denmark finds a positive association between the female share in the education of mothers and the female share in the major of their daughters, as well as between the female share in the education of fathers and that of their sons (Humlum et al., 2018).
A related paper studying field of study choice at secondary education level in the Netherlands also finds a positive relationship between the female share in mothers' occupation and in daughters' field of study (Vleuten et al., 2018). However, there is no father-son correlation; instead, mothers employed in more female occupational fields are more likely to have sons in more male-dominated fields. These papers use the sex composition to identify the degree to which a major is gendered. While this is a useful measure, it warrants further improvement. I build on this small set of literature by introducing a rank-based measure of the degree to which an occupation or major is typically male or female. This measure is described in more detail in Section 3.

Channels of intergenerational transmission

Socialisation theories in sociology (e.g. Eagly, 1987; Okamoto and England, 1999) and in social psychology (e.g. Bandura, 1977) argue that parents act as key agents of socialisation for their children. Gender socialisation theories suggest that children specifically emulate the behaviour of the same-sex parent (Vleuten et al., 2018). Gendered behaviours can result either from children observing the behaviour of their same-sex parent and actively choosing to imitate them (cognitive developmental theory; Kohlberg 1966) or from parents encouraging them to adhere to gender roles (social learning theory; Bandura 1977). Therefore, from an early age, children form beliefs about what constitutes culturally appropriate behaviours and preferences for girls and boys, including appropriate types of jobs. In economics, cultural transmission and socialisation processes have been incorporated into economic models since the start of this century (e.g. Akerlof and Kranton, 2000; Alesina and Giuliano, 2015; Bisin and Verdier, 2001; Bisin and Verdier, 2011; Escriche, 2007). Within this strand of literature, a number of empirical studies have tried to identify the existence of gender social norms through the study of female labour supply decisions. For example, Fernández and Fogli (2009) demonstrate that second-generation immigrant American women whose ancestry is from countries with higher female labour force participation work more. Olivetti et al. (2020) show that a woman's labour supply in early adulthood is affected by the labour force participation of past high school peers' mothers. These correlations in labour force participation are interpreted as evidence of the existence and intergenerational transmission of gender norms. However, a key shortcoming of this empirical research is that it is not possible to distinguish whether the intergenerational associations in labour force participation are due to a transmission of gender norms or due to other reasons such as a transfer of resources or imitation. I address this limitation and contribute to this strand of literature by studying a different outcome, university major, which allows me to distinguish between the relative importance of two transmission channels: the gender typicality per se and the transfer of occupation-specific resources. This is possible because occupations and university majors can be classified along two dimensions, their broad field as well as their gender typicality. This distinction is not possible when studying female labour supply decisions.
More specifically, two main channels can account for intergenerational associations between the gender-typicality rank of parents' occupations and the rank of offspring's majors: a direct transfer of resources on the one hand and a transmission of gender roles or gender norms on the other (Vleuten et al., 2018).3 A direct transfer of resources takes place when young adults choose a major that is similar to their parents' occupational field. This encompasses what is commonly referred to as the transfer of occupation-specific human capital (e.g. Humlum et al., 2018) and the inheritability of parental endowments (e.g. Becker and Tomes, 1979) in economics, and the transfer of occupation-specific resources within sociology (e.g. Jonsson et al., 2009). Taking a broad definition, this channel includes the transfer of occupation-specific and financial resources, social networks, human capital, traits, and abilities (Vleuten et al., 2018; Aina and Nicoletti, 2018).4 It occurs, for example, if the child whose parent is a doctor studies medicine. Each occupation and each major differs in the degree to which it is gendered. Consequently, direct transfer mechanically leads to positive intergenerational associations between parents' and children's femininity or masculinity rank in occupation and major, respectively. It is reasonable to assume that young adults are more likely to identify with and use the resources of the more influential parent whose social position dominates that of their spouse (Dryler, 1998), for example in terms of occupational status, income, or educational level. A second, 'indirect' channel is present if children choose majors that are unrelated to their parents' occupations but we still observe a significant association between gender typicality in parental occupation and gender typicality in children's majors. The presence of an indirect channel can be interpreted as strong evidence for gender socialisation and the transmission of gender norms, because the possibility of a direct transfer is then very limited and instead the gender-typicality rank per se matters for gendered major choices. Empirically, this can be tested by studying heterogeneous effects across those children who choose majors that are related to the same field as their parents' occupations and those whose majors are unrelated to their parents' occupations. These two competing transmission channels are interrelated and cannot be considered completely independent, both from a theoretical and from an empirical perspective. From a theoretical perspective, parents may be more likely to transmit occupation-specific resources to their children if these are in line with cultural gender norms. For example, fathers in STEM occupations are found to transmit their occupation-specific preferences to their daughters only in the absence of a son (Oguzoglu and Ozbeklik, 2016). From an empirical perspective, it is possible that a transmission of gender roles occurs within groups of students who choose majors closely related to their parents' occupation. In other words, there may still be a transmission of gender norms even if we do not find empirical evidence for the 'indirect' transmission channel. In light of these considerations, empirical evidence of the existence of the 'indirect' transmission channel provides an even stronger case for the existence of gender norms. There is little empirical research that has tried to disentangle these transmission channels and identify the existence of gender norms in the context of gendered university major choices (but see e.g. Humlum et al. 2018). Studies on related but different outcomes such as occupational choices, occupational aspirations, and field of study choices in secondary school have produced mixed results. While some studies find support for a transmission of gender roles (e.g. Polavieja and Platt, 2014; Vleuten et al., 2018), others find no such support (e.g. Dryler 1998).

3 Gender norms and gender roles in this paper refer to the perceived appropriate roles of men and women concerning university major and occupational choice. I use the definition of gender norms from Pearse and Connell (2016), who define them as "collective definitions of socially approved conduct [...] applied to groups constituted in the gender order - mainly, to distinctions between men and women" (p. 31). The concept of gender norms is related but different to that of stereotypes, understood as 'consensual expectations about what members of a group actually do' (Eagly and Karau, 2002). In contrast, in this paper, I do not study gender role attitudes. Attitudes are evaluations of behaviour or people as good or bad; they vary on a positive/negative scale and can be expressed by statements such as 'I like/dislike' or 'I agree with/disagree with' (Bicchieri, 2017). Therefore, attitudes towards gender roles are individual evaluations of these gender roles and norms, and a positive attitude would reflect an endorsement of the collective gender norm in society. Individuals' stated gender role attitudes and the gender roles disclosed in their behaviours are often conflicting (Hochschild and Machung, 1989).

4 Such traits and abilities may or may not be due to genetic inheritance; this distinction is not a focus of this paper.

Major choice in tertiary education in Germany

In 2010, 49% of secondary school graduates obtained a school-leaving certificate qualifying them for tertiary education. Of those, 69% obtained Abitur (Allgemeine Hochschulreife) and the rest obtained a subject-linked school-leaving certificate (Brugger et al., 2012). Abitur is a school-leaving certificate obtained at the end of upper secondary education by students who attend the 'highest' school track, the Gymnasium. In principle, this certificate provides eligibility to study any major at any university. In contrast, subject-linked school-leaving certificates (Fachhochschulreife or fachgebundene Hochschulreife) restrict eligibility either to certain majors or to universities of applied sciences (Fachhochschulen). In addition to qualifying for entry to university via a school-leaving certificate, a small share of students enters university education via a 'non-traditional' route without a school-leaving certificate; these students qualify through other criteria such as vocational training (Neugebauer and Schindler, 2012). The entry rate into tertiary education in 2010 was 45% (Brugger et al., 2012). When applying for an undergraduate degree, students choose a major (Studienfach), such as mathematics, German studies, or mechanical engineering. Students also take two additional decisions particular to the German tertiary education system. First, they choose one of two main types of tertiary education institutions: traditional research universities (Universitaeten) and universities of applied sciences (Fachhochschulen).
While universities offer degrees in all majors, universities of applied sciences have a more applied focus and offer a limited range of applied sciences majors (Jacob and Weiss, 2010). Second, for many majors, a student can choose between graduating with a 'regular' undergraduate degree or with a 'teaching' degree, the latter being necessary to become a school teacher. Therefore, in studying major choices I distinguish between 58 majors as well as the three mutually exclusive 'types' of degree, namely university, university of applied sciences, and teaching degree. Since not all 58 majors are available for each of the three degree types, their combination yields 134 distinct categories. Choosing a university major is an important decision because the German labour market has strong linkages between majors and occupations (Leuze, 2007). In fact, the German labour market is known "as a prototypical case of an occupational labour market where job applicants are matched to jobs according to their occupation-specific credentials" (Klein, 2016, p. 46). Around three-quarters of the gender differences in earnings at job market entry of graduates can be explained by gender differences in university major (Braakmann, 2008). Moreover, low occupational mobility means that initial major choices at university have long-lasting effects on career outcomes, such as lifetime earnings (Aisenbrey and Brückner, 2008). University major choice in Germany is not only an important decision from the individual's perspective; its study also has a number of advantages compared to studying related choices such as that of occupation. While gender differences in university majors and occupational sex segregation are closely related, the choice of a major is less influenced by demand factors than the choice of an occupation. Determinants of occupational segregation include supply-side factors such as individual preferences as well as demand-side factors such as gender stereotypes enacted by employers when selecting job candidates (Hausmann and Kleinert, 2014) and current labour market conditions. Compared to that, major choice allows a focus on supply-side factors and is therefore a closer reflection of individual preferences. One concern is that major choices may not adequately reflect people's preferences because many majors have admission restrictions to manage high demand. In this paper's sample, 70% of students entered a programme with admission restrictions, with the high school GPA (Abiturnote) being the most important and often sole criterion: only students who graduate with a GPA above a certain threshold (called numerus clausus) are admitted to the programme. On the other hand, 30% of programmes have no admission restrictions, that is, students with a school-leaving certificate can enrol directly at the respective university without fulfilling any additional requirements. To alleviate part of the concern that major choice may not adequately reflect individuals' preferences, in Section 4 I conduct robustness checks on students who graduated with a GPA above the median and on students who state that they entered their desired major. These restrictions do not change the results. Furthermore, while not all students may be able to enter their preferred major, from a policy perspective it can be argued that studying the actual choices students make given their constraints is more important than studying idealistic aspirations.
Literature on the determinants of university major choice in Germany suggests that social origin plays a role in university major choice. For example, individuals whose father possesses a tertiary degree are more likely to choose majors that are considered prestigious, such as medicine or law (e.g. Reimer and Pollak, 2010; Georg and Bargel, 2017). Apart from that, the choice of a university major is treated as largely self-determined in the literature. This is supported by evidence that intrinsic motives, in particular interest in the major, are an important factor in major choice, while conformance with friends' and parents' expectations is found to be less important (Heine et al., 2008; Ochsenfeld, 2016). Moreover, teacher recommendations or evaluations are not usually needed for entry to university and are not commonly included as an independent variable in regression models. In line with this, self-reported information from students indicates that the three most-used sources to inform major choice are the internet, friends, and information material provided by universities (Heine et al., 2008); much fewer students cite conversations with teachers as a source of information, and only a fifth of those who name teachers as a source evaluate them as useful. Additionally, a few characteristics of the tertiary education system make Germany a well-suited case to study major choice as a relatively 'free choice' that closely proxies individual preferences. First, the choice of major is not restricted by earlier field of study or track choices at secondary school. This is in contrast to other countries such as the UK or Italy, where entry to some university majors is conditional on having taken certain exams or tracks in secondary school. Second, in contrast to other countries such as the USA, it is not possible to enter university without declaring a major. Therefore, the choice of major takes place (just) before a student enters university, at the time when he or she applies for a degree. Third, the high selectivity into certain prestigious universities in countries such as the UK or USA does not exist in Germany; universities are considered more equal in quality and there is no strong hierarchy among them (Jacob and Weiss, 2010). Finally, university education in 2010 was free in most of the 16 federal states, and even in the five federal states that charged tuition fees in 2010, usually at EUR 500 per semester, fees were low by international comparison.

Data sources and sample

The main dataset used is the Starting Cohort 5 of the German National Educational Panel Study (NEPS-SC5, see Blossfeld and Roßbach 2011). The NEPS-SC5 contains rich data on a nationally representative cohort of 17,910 first-year undergraduate students who started their degree in October 2010 and who were enrolled for the first time in a public or state-approved higher education institution in Germany (see Zinn et al. 2017). Wave 1 interviews were conducted between December 2010 and January 2012 and, to date, 9 waves of data are available, following individuals up until 2015. For the analysis, a cross-sectional dataset is constructed using information from the wave 1 survey and from spell data on schooling. The analysis sample is restricted to individuals between 18 and 25 years old who obtained Abitur (Allgemeine Hochschulreife).
The age restriction allows for a focus on the transition from high school to university by excluding individuals who pursue a university degree as a second career later in life. The restriction to individuals with Abitur ensures that students are eligible for any degree at any type of university. However, robustness checks including individuals with other types of school-leaving certificates are shown in Section 4 and indicate that results remain substantially the same. I also drop observations with missing values on key variables. Since information on parental characteristics is provided by students, this restriction implies that only individuals who know the educational level, age, and occupation of both parents are included. A parent is defined as the person whom the student identifies as mother or father. Therefore, I include controls for the family structure an individual grew up in, which distinguish between biological and adoptive parents on the one hand, and step and foster parents on the other hand.7 I also run analyses on subsamples of different family structures, and the results do not change substantially. The final analysis sample consists of 9640 individuals (6100 female students and 3540 male students).8 Table 10 in the Appendix shows how the different sample restrictions affect sample size and summary statistics. Overall, the changes in the mean values of key variables due to sample restriction are minimal.9

Footnote 7: The survey reports adoptive parents in the category of biological parents and groups step parents and foster parents.
Footnote 8: The initial full survey sample consists of 60% female students and the final analysis sample consists of 63% female students. The overrepresentation of women in the sample is primarily due to a higher survey response rate among women and, to a lesser extent, due to the exclusion of more observations on male students because of missing values on key covariates. The difference in nonresponse between male and female students is accounted for in the survey weights. For details on weighting procedures, see Zinn et al. (2017).
Footnote 9: There are two exceptions. The first is that when moving from the initial full sample to the one restricted to students who are aged 18 to 25 and hold a general school-leaving certificate, parents are more likely to have higher levels of education. This is expected because students with parents who possess tertiary education are more likely to attend the highest school track (Gymnasium) at lower secondary school (Müller and Schneider, 2013). Moreover, the average rank in university major for men falls from 55.3 to 52.5. Restricting the sample further by dropping observations with missing values on key variables does not change the mean values of any of the variables.

I use supplementary data from four sources. To construct the dependent variables, I use information on the total number of female and male students by university major and by degree type in Germany in the academic year 2010/2011 from administrative data of the Federal Statistical Office (Statistisches Bundesamt, 2011). For the key regressors, I use administrative data from the Federal Labour Office, which contains information on the occupational group of all female and male employees subject to social security contributions in Germany (Statistik der Bundesagentur für Arbeit, 2014). The median income by occupational group is used as a control variable and is also obtained from the Federal Labour Office (Statistik der Bundesagentur für Arbeit, 2018).
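The sample restrictions described above amount to a few filtering steps. The following is a minimal sketch in Python/pandas, assuming a flat extract of the wave 1 data; the file name and all column names (age, school_cert, and the list of key variables) are hypothetical stand-ins for the actual NEPS-SC5 variable codes.

```python
import pandas as pd

# Hypothetical flat extract of NEPS-SC5 wave 1; column names are illustrative.
df = pd.read_csv("neps_sc5_wave1.csv")

# Key variables for which complete (student-reported) information on both
# parents is required.
key_vars = ["major_rank", "mother_occ_rank", "father_occ_rank",
            "mother_educ", "father_educ", "mother_age", "father_age"]

sample = (
    df.query("18 <= age <= 25")            # focus on the school-to-university transition
      .query("school_cert == 'Abitur'")    # eligible for any major at any degree type
      .dropna(subset=key_vars)             # drop observations missing key variables
)
print(len(sample))  # 9640 individuals in the paper's final analysis sample
```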
The NEPS does not have good information on earnings. Therefore, estimates of average returns to major are obtained from the 2005 and 2009 DZHW Graduate Panel Survey (Brandt et al. , 2018;Briedis et al., 2019), and used as a control variable in a robustness check. The DZHW Graduate Panel Survey is a four-yearly survey of higher education graduates administered by the German Centre for Higher Education Research and Science Studies (DZHW). It enables to study the transition of higher education graduate cohorts to professional careers. Finally, I use data from Starting Cohort 4 of the National Educational Panel Survey (NEPS-SC4) for a robustness check on the selectivity of the sample of university students in the main dataset NEPS-SC5. NEPS-SC4 is a nationally representative sample of students who were in grade 9 of compulsory education in the academic year 2010/2011, who are followed throughout their subsequent school careers (Blossfeld and Roßbach, 2011). Methods Before detailing the rank-based measures of gender typicality in university major and occupation in the next subsection, I describe the regression model. The regression model resembles 'rank-rank' income regressions, which have been used in research on relative mobility in income (Chetty et al., 2014). The following baseline 'rank-rank' gender-typicality regression model, estimated via OLS, is used to study the association between gender-typicality rank of the occupation that mother and father held when the individual was aged 15 and the gender-typicality rank in daughters'/sons' university major: where R i is the gender-typicality rank of individual's university major, RM i is the femininity rank of the occupation the mother held when the individual was aged 15, and RF i is the masculinity rank of the occupation the father held when the individual was aged 15. X i includes individual characteristics, namely seven age dummies, two birth order dummies, three dummies for family structure growing up, and a binary variable indicating 1st-or 2nd-generation immigrant. P i are parental characteristics and include mothers' and fathers' age, a binary variable indicating the parent was employed when the individual was aged 15, three dummies for educational level, and controls for the median income in mothers' and fathers' occupational group, respectively. s are federal state fixed effects. 10 These variables are chosen to control, as good as possible, for variables that are correlated with both the gender-typicality rank in major and the gender-typicality rank in parental occupations. Summary statistics of all variables are reported in the Appendix in Table 11. The regression model captures intergenerational positional changes by identifying the correlation between parents' and children's position in their respective gendertypicality distribution, holding constant key parental and individual characteristics (1) There are 16 federated states in Germany, known as Bundeslaender. as well as federal state. All analyses are weighted using the cross-sectional sampling weights for wave 1, to account for the complex sampling design and to correct for non-response among the recruited students. 11 Since parents' behaviour may affect sons' and daughters' choices in different ways, separate regressions are conducted for female and male students. A key assumption of the regression model is that gender typicality is linearly transmitted from parent's occupation to child's major. 
Yet it is possible that the transmission of gender typicality occurs non-linearly or at certain points in the distribution only. For example, it may be that only fathers in occupations with a relatively low masculinity rank are associated with students' rank in major. Similarly, it may be possible that any associations hold only at certain points of the gender-typicality rank distribution of university majors. While I conduct OLS regressions as a starting point, I also explore potential non-linearities by discussing results from quantile regressions and heterogeneous effects across the distribution of key regressors (Subsection 4.3). Nevertheless, I argue in the next subsection that imposing linear transmission on rank-based measures has advantages compared to the two approaches used in existing research. The first uses linear regressions with measures based on the share of women/men in occupations and majors. The second uses categorical regressors, which necessarily rely on arbitrary cutoffs of what constitutes a gender-typical occupation or major.

The analysis also suffers from a few data limitations. In particular, there is no information on parents' income or work hours, which would be useful to study relative parental status (e.g. relative income) in more detail. Moreover, sibling sex is not contained in the data, although it has been identified as a relevant factor affecting gender-stereotypical behaviour (Anelli and Peri, 2015).

Measures of gender typicality in university major and in occupation

To measure the degree to which a university major is typically female, I rank each female student relative to the population of all female students in the academic year 2010/2011 in Germany based on the share of women in her university major. I call this measure 'daughters' femininity percentile rank in university major', or 'daughters' rank' in short, and it takes values between 1 and 100. The femininity rank indicates a female student's relative position in the distribution of all female students, based on the share of women in their university major. For example, a woman enrolled in a psychology major at university is assigned a femininity rank of 85, indicating that 15% of female students are enrolled in a major with a higher share of women. Analogously, for male students, 'masculinity rank in university major' is constructed based on the student's relative position in the national distribution of all male students, ordered by the share of men in their university major. Table 1 shows the 10 most common major choices for men and women, and their respective rank measure. Since each person's rank is based on the distribution of students of the same sex, the measures are sex-specific. For example, Table 1 shows that the femininity rank of an economics major at university is 21, while the masculinity rank of an economics major at university is 44. As mentioned in Section 2, university majors are distinguished not only by 58 fields of study but also by 3 different degree types, namely teaching degree, university degree, and university of applied sciences degree. Their combination yields 134 distinct university majors, from which the femininity rank and masculinity rank measures are constructed. In cases in which students declare more than one major, I use the one they declare as their first major. The key regressors are the femininity and masculinity percentile rank in the occupation of mothers and fathers, respectively.
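The percentile ranks can be built directly from the administrative counts by major. Below is a minimal sketch of one way to do this, assuming a table with one row per major/degree-type cell. The midpoint convention for the mass of a student's own major is my assumption, since the paper does not spell out how that mass is handled, and the numbers are made up.

```python
import pandas as pd

# Illustrative counts per (major, degree type) cell; values are made up.
majors = pd.DataFrame({
    "major": ["mech_eng", "economics", "medicine", "psychology"],
    "n_female": [5_000, 20_000, 15_000, 18_000],
    "female_share": [0.12, 0.38, 0.62, 0.78],
}).sort_values("female_share")

total = majors["n_female"].sum()
# Mass of female students in majors with a strictly lower female share,
# plus half of the own-major mass (midpoint convention, an assumption).
below = majors["n_female"].cumsum() - majors["n_female"]
majors["femininity_rank"] = (
    ((below + 0.5 * majors["n_female"]) / total * 100)
    .clip(1, 100).round().astype(int)
)
print(majors[["major", "femininity_rank"]])
```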
There are 334 distinct occupational groups based on the German occupational classification KldB88. Following the same logic as for the dependent variables, I construct a measure of the degree to which a mother's occupation is typically female. Specifically, I rank mothers based on the share of women in their occupation relative to all other employed women in Germany.12 The 'femininity rank of mothers' occupation' or 'mothers' rank' takes values between 1 and 100, and higher numbers indicate a more 'typically female' occupation. For example, the rank associated with a mother who is a kindergarten teacher is 94, indicating that 6% of mothers work in occupations with a higher female share. On the other hand, the rank associated with a mother who is a doctor is 13, suggesting that 87% of mothers work in occupations with a higher share of women. I also construct measures for a masculinity rank in fathers' occupations in an analogous way, ranking fathers based on the share of men in their occupation relative to all other employed men in Germany. By construction, the rank measures follow a uniform distribution with mean and median 50.13

Students report information on the occupation that their parents held when they were aged 15. Therefore, the measures capture the role of parental occupation during adolescence for students' gendered university major choices in early adulthood.14 The ten most common occupations for mothers and fathers and their respective rank are shown in Table 1. The generation of parents studied in this paper often follows a traditional gender division of work.

Footnote 12: The survey records the occupational group a parent held when the individual was aged 15. This makes it possible to study the association between the gender typicality of parental occupation during adolescence and students' gendered university major choices in early adulthood. To construct rank measures for mothers' (fathers') occupations, I therefore use information on the female (male) share by occupational group corresponding to the year in which the individual was aged 15. Since the sample is restricted to individuals aged 18 to 25 in the year 2010, I use administrative data (Statistik der Bundesagentur für Arbeit 2014) on the female (male) share by occupational group for one of the years in the period between 2000 and 2007, depending on each individual's age. While the sex composition of individual occupations may have changed by 2010, using data from the year in which the individual was aged 15 best captures the degree of gender typicality that the occupation represented when the individual was an adolescent.
Footnote 13: There are important differences in the gender typicality of different occupations across different countries. Ideally, for the descendants of immigrants, I would therefore construct parental rank measures based on the sex composition of occupations in their home country at the time when the individual was aged 15. Unfortunately, this is not feasible given that country-specific occupational classifications used in different countries are not easily matched to the German KldB88 classification used in this paper. Moreover, while there is information on parental birth country, this is not necessarily the same as what the parent considers their home country. I therefore include a dummy taking a value of 1 for individuals who are first- or second-generation immigrants (based on recorded birth country) in all analyses, and I perform a robustness check excluding those individuals from the analysis (see Table 13).
Footnote 14: There may be concerns about the relevance of this measure if there is a high degree of occupational mobility among parents. However, I argue that the measure is appropriate for the purpose of this paper for several reasons. First, capturing parental occupation at age 15 is meaningful as the focus of this paper is studying the role of parental occupation during adolescence in the context of gender socialisation. Second, the German labour market is characterised by low occupational mobility (Aisenbrey and Brückner, 2008). Third, the measure is based on 334 occupational groups, which aggregate 1991 different occupations of similar nature. Therefore, if parents switch occupation to a closely related one of similar nature, this would be captured within the same occupational group.
In total, 17.6% of students' mothers in the sample are 'inactive', that is, they have not been employed since the student was born (as opposed to 0.9% of fathers), and they therefore have no occupation recorded in the survey. However, excluding all these students from the analysis would lead to a highly biased sample, leaving out those whose parents have the most gender-traditional household allocation of work. Moreover, having an inactive mother has been shown to negatively affect daughters' labour force participation (e.g. Morrill and Morrill, 2013). Similarly, prior research has shown that the relative income of mothers compared to fathers is related to the gender typicality of sons' major choices (Humlum et al., 2018). While inactive mothers cannot transmit occupation-specific resources to their children, their inactivity sends signals about appropriate gender roles to children, which may translate into major choices. Therefore, I create a fictitious profession corresponding to the parents who were not employed in the period between the birth of the individual and the individual reaching age 15, and I calculate the gender-typicality rank based on the sex composition of this fictitious profession.15 A robustness check performed in Section 4 shows that their exclusion does not substantially alter results. To test the possibility that growing up with a mother out of the labour force may directly affect students' university major choices, I perform a robustness check with a dummy for mother being inactive (see Table 14).

Footnote 15: The femininity rank for inactive mothers is 82 and the masculinity rank for inactive fathers is 2.

There are several advantages that these rank measures possess over alternative measures used in prior research. Previous studies have operationalised the degree to which occupations or majors are typically female by using the share of women as a measure (Humlum et al., 2018; Vleuten et al., 2018). Figure 2 in the Appendix illustrates how the female (male) share by major/occupation corresponds to the femininity (masculinity) rank. Share-based measures have two undesired properties. First, the share of women within a given occupation may depend on the structure of occupational classifications. Specifically, the occupational classification KldB88 from the year 1988 reflects the occupational structures of the industrial society of the 1960s, with typically male occupations categorised into a higher number of smaller groups compared to female occupations (Hausmann and Kleinert, 2014). If typically male occupations are systematically more detailed in occupational classifications than typically female ones, this may bias the sex composition within occupations.
Specifically, it may partly explain why men tend to work in more segregated occupations while women cluster in a smaller number of occupations (Hausmann and Kleinert, 2014). Moreover, the sex distribution of occupations is more dispersed than the distribution of university majors, partly because the occupational classification is more detailed. Rank measures do not suffer from this problem because they capture the position of individuals relative to others of the same cohort and sex. A change in the sex composition of a university major affects the rank of a student only insofar as it alters the student's position relative to the position of others.

Second and more importantly, the share of women in an occupation is affected by the sex composition of the workforce as a whole. That is, an increasing proportion of women within a certain occupation may be explained by an increase in female labour force participation, even if there is no change in the propensity of women or men to choose that particular occupation (England et al., 2007). Therefore, the fact that the overall female share among the 2010 university student population is higher than the female share among the total workforce in their parents' generation is reflected in share-based measures. This complicates a meaningful interpretation of share-based measures as measures of the concept 'gender typicality' in a regression model as specified in Eq. 1. Rank measures, on the other hand, capture positional mobility between parents and their children, whereby each person's position is relative to others of the same sex and cohort. Therefore, coefficients from a rank-rank regression model as in Eq. 1 have a meaningful and straightforward interpretation. Specifically, coefficients can be interpreted as the association between a parent's relative position in their sex-specific occupational rank distribution and a student's relative position in their sex-specific university major rank distribution.

Occupations and university majors are also commonly categorised into 'male-dominated' and 'female-dominated' ones. For example, majors (or occupations) with a female share of 70% or above are often referred to as female-dominated, while those with a female share below 30% are labelled as male-dominated (e.g. Hausmann and Kleinert, 2014). A key disadvantage of such categorisation is that these cutoffs are arbitrary. This is especially problematic in regression analysis because coefficients on binary or categorical regressors are interpreted relative to a baseline category, and changing the cutoff then also necessarily changes the baseline. For example, there is no theoretical reason why estimating the effect of being in an occupation with a female share of 70% or above (compared to the baseline category of being in an occupation with a female share of less than 70%) is more meaningful than, for example, estimating the effect of being in an occupation with a female share of 66.67 (two-thirds) percent or above (compared to being in an occupation with a female share below 66.67%). A second key disadvantage of categorical regressors, at least in the context of this paper's focus, is that a categorisation, independent of which cutoffs are chosen, implies a substantial loss of information regarding the degree of gender typicality of occupations. In sum, rank measures have the advantage that they are independent of the structure of occupational/major classifications and independent of the overall sex composition of the population.
Therefore, estimating a linear relationship between parental and children's rank in their respective distribution has a straightforward interpretation. In the case of fathers and sons, for example, it captures the association between a son's and a father's relative position in their respective distribution.

Summary statistics

The left part of Table 2 presents selected summary statistics for key variables, separately for sons and daughters. The average age of students is around 20 years, and their average rank in major approximately 51. As mentioned in Section 2, this is a sample of individuals who enter university and hence their parents are disproportionately highly educated. Therefore, in order to check the degree of selectivity, summary statistics are compared to those of NEPS Starting Cohort 4, a sample of grade 9 students which includes the full population of students in regular schools. These are reported in the right part of Table 2. The age difference between mothers and fathers of the two cohorts corresponds approximately to the age difference of the students across the two cohorts. Moreover, the share of mothers who were not in employment since the student was born is similar across both cohorts. Not surprisingly, the share of tertiary-educated mothers and fathers in the undergraduate student cohort (SC5) is much higher compared to the average parent in the cohort of compulsory schooling grade 9 pupils (SC4). This is in line with previous research which shows that intergenerational educational mobility is low in Germany (Heineck and Riphahn, 2009). By construction, the rank measures have a mean of 50 if they are nationally representative. However, the highly educated parents of the study sample are not nationally representative. Indeed, the femininity rank in mothers' occupation is slightly higher in the cohort of university students, while fathers' rank is over 10 percentile points lower compared to Starting Cohort 4. This suggests that high-skilled mothers' occupations are more gender-typed while high-skilled fathers' occupations are less gender-typed compared to lower-skilled occupations. This can partly be explained by the fact that many occupations with a very high share of men, such as carpenters, truck drivers, and electricians, do not require tertiary education. A full set of summary statistics is reported in the Appendix in Table 11.

Results

Panel A of Table 3 presents results on the relationship between gender-typicality rank in parental occupation and masculinity rank in university major for sons. Columns 1 to 3 do not include any controls or fixed effects. Column 1 considers femininity rank in mothers' occupation only, while column 2 includes masculinity rank in fathers' occupation only. The coefficient on mothers' rank in column 1 is positive but not statistically significant. In contrast, column 2 reveals a positive relationship between the degree to which fathers' occupation is typically masculine and the degree to which sons' major is typically masculine. A 1 percentile (i.e. 1 unit in masculinity percentile rank) increase in fathers' rank is associated with a 0.12 percentile increase in sons' rank. Column 3 jointly includes mothers' and fathers' rank, and the coefficients stay almost identical. This suggests that fathers' rank is independently associated with sons' rank and that assortative mating is not driving the results.16
The size and significance of the estimated coefficient on fathers' rank does not vary substantially when progressively adding fixed effects and individual-level controls in columns 4 to 6. Column 4 includes federal state fixed effects. Column 5 adds a set of parental characteristics, namely educational level, age, and a dummy for being employed when their child was aged 15, for mothers and fathers, respectively. Column 6 additionally controls for the natural logarithm of the median income in mothers' and fathers' occupation. Column 6 also adds the following individual characteristics: categorical variables for age, birth order, family structure when growing up, and whether the individual has an immigrant background.17 The coefficient on fathers' rank decreases slightly (from 0.123 to 0.113) but remains statistically significant at the 1% level. In the most restrictive specification in column 6, a 24 percentile increase in fathers' rank (corresponding to one standard deviation, see Table 2) is associated with a 2.7 percentile increase in sons' rank, which corresponds to a 5% increase compared to the mean of sons' rank in the sample. The positive same-sex relationship between fathers and sons is compatible both with a direct transfer of resources and with a transmission of gender roles. The coefficient on mothers' rank becomes smaller and then turns negative as fixed effects and control variables are added (from 0.019 to −0.013) and is never statistically significant. The level of education of mothers and fathers is not associated with the masculinity rank in sons' major, as shown in columns 5 and 6.

Panel B of Table 3 presents the estimates for the sample of daughters. Section 2 mentioned that the transmission of gender roles happens primarily via the same-sex parent. If a transmission of gender roles occurs, we would expect a positive same-sex relationship between rank in mothers' occupation and daughters' major. However, column 1 shows that the coefficient on mothers' rank is positive but small and not statistically significant. In contrast, column 2 indicates that fathers in more typically masculine occupations have daughters in less typically feminine, that is, more typically masculine majors. These findings stay very similar when mothers' and fathers' rank are jointly included (column 3) and when state fixed effects and individual-level controls are successively introduced (columns 4 to 6). In the most restrictive specification in column 6, a one percentile increase in fathers' masculinity rank is associated with a decrease in daughters' femininity rank by 0.05 percentiles. An increase of one standard deviation in fathers' rank (29 percentiles) is associated with a decrease in daughters' rank by 1.6 percentiles, corresponding to a 3% fall compared to the mean femininity rank of daughters' major in the sample. This coefficient is roughly half the size in absolute terms compared to fathers' rank in the specification for sons presented in column 6 of Panel A.

Footnote 16: Results from robustness checks in which interaction effects between rank in mothers' occupation and rank in fathers' occupation are included confirm that there are no interactive effects between mothers' and fathers' rank. Instead, they appear to operate independently from each other. These results are shown in Table 12.
Footnote 17: The full set of coefficients is not shown due to space limitations but is available upon request.
This negative opposite-sex relationship between fathers' and daughters' rank is compatible with a direct transfer of resources between fathers and daughters. The result that fathers' rank, but not mothers', is associated with the degree to which young women's major choices are typically female may be related to the fact that German families of the parental generation (typically 1950s/1960s birth cohorts) often follow a traditional division of work in which the father is the main breadwinner. Therefore, fathers may be more likely than mothers to transmit occupation-specific resources to their daughters and/or act as a role model. In line with this, the theory of direct transfer predicts that a child is more likely to draw upon the resources of the higher-status parent (Vleuten et al., 2018). This will be further investigated in Section 5.

Columns 5 and 6 of Panel B show that while fathers' educational level is not associated with the femininity rank in daughters' major, mothers' education is. Having a mother with a high school degree and having a mother with tertiary education is associated with an increase in daughters' rank in major by roughly 3.1 percentiles and 3.4 percentiles, respectively, compared to having a mother with only basic schooling or less. The association between mothers and their daughters' major choices appears to operate not through mothers' occupation but through their educational level. Mothers who have a high level of education are more likely to have a successful career or high-status occupation, which may explain why the mother effect operates through educational level in the context of a parental generation that often follows a traditional male breadwinner model. This interpretation, highlighting the importance of 'parental status', is supported by results from a heterogeneity analysis in which mothers' rank is interacted with a variable indicating that the mother has tertiary education (see Table 7).18

Notes to Table 3: Table shows estimates from OLS regressions. The dependent variable is the masculinity/femininity percentile rank of sons'/daughters' university major. The key regressors are femininity percentile rank of mother's occupation and masculinity percentile rank of father's occupation. Parental characteristics include age, a dummy indicating the parent was employed when offspring aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st- or 2nd-generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses. Levels of significance: ***p<0.01, **p<0.05, *p<0.1. Sources: NEPS-SC5, Federal Labour Office, Federal Statistical Office.

Robustness checks

As mentioned in Section 3, 17.6% of mothers were not in employment between their child's birth and age of 15 and do not have an occupation recorded. A traditional division of work in which the father works and the mother is not in employment, also known as the 'traditional male breadwinner' model, is common among the parental generation, especially in West Germany (Bauernschuster and Rainer, 2012). Excluding these mothers would lead to a highly biased sample in which less traditional families are overrepresented.
Therefore, these mothers for whom information on occupation is not recorded are assigned a femininity rank of 82, based on the fictitious occupation of 'being inactive'. Correspondingly, inactive fathers are assigned a masculinity rank of 2. To analyse whether this decision affects results, a robustness check is conducted in which these mothers and fathers without recorded occupation are excluded from the analysis, and the results are presented in Table 4. Columns 1 and 2 show results for sons and columns 3 and 4 those for daughters. While columns 1 and 3 present results without any controls, columns 2 and 4 include the full set of controls and fixed effects. The positive association between fathers' rank and sons' rank (columns 1 and 2) and the negative association between fathers' rank and daughters' rank (columns 3 and 4) both remain, and the size of the coefficients is similar to those from the full sample (see Table 3). The coefficient on mothers' rank in the specification for sons (columns 1 and 2) remains small and not statistically significant. Interestingly, the positive coefficients on mothers' rank in the sample of daughters (columns 3 and 4) are slightly larger compared to those of the full sample, and the coefficient becomes marginally significant at the 10% level in column 4. The full specification for daughters in column 4 suggests that women choose more typically female university majors if their mothers worked in more typically female occupations and if their fathers worked in less typically male occupations, with the effect of fathers being slightly larger than that of mothers. Therefore, the decision to include mothers without occupation in the main set of results masks the positive effect of those mothers who have been in employment on their daughters' major choices. This finding may again be related to the fact that parents often follow a traditional division of work in which the father is the main breadwinner. Families in which the mother has been employed are less likely to follow a male breadwinner model; such mothers are more likely to have a higher status, more likely to transmit occupation-specific resources to their daughters, and more likely to act as role models.19 Nevertheless, the effect of fathers on daughters' rank in major is still stronger than that of mothers. To further explore to what extent the relevance of rank in mothers' occupation depends upon their status, as suggested by the direct transfer theory, additional analyses are presented in Section 5.

In Section 2, I discussed the concern that students' major choices may not accurately reflect their preferences. Specifically, students may not be able to study their desired major due to admission criteria. The main admission criterion for majors in which demand exceeds supply is high school GPA (Abiturnote). Therefore, Table 5 presents results from a robustness check in which the sample is restricted in one of two ways: first, a sample in which only students with a high school GPA at least as good as the median GPA of 2.2 are included (columns 1 for sons and 3 for daughters); and second, a sample in which only students who indicate they were able to realise their desired major are included (columns 2 for sons and 4 for daughters). The rationale is that students in these restricted samples are more likely to have entered a major that represents their actual preferences. Results do not change substantially compared to those considering the full sample of students.
Column 4 reveals that for the subsample of daughters who state that they were able to realise their desired major, the positive coefficient on mothers' rank becomes weakly significant. Without further analysis, it is difficult to know why this weak link appears, but it is possible that daughters do draw on the occupation-specific resources of their mothers if they are given the chance or, alternatively, that the characteristics of mothers in this subsample of daughters differ from those in the main sample.

A number of additional robustness checks are performed and their results are reported in Tables 13 and 14 of the Appendix. Results from Table 13 show that the main results are robust to various variations on the analysis sample, namely including students with subject-specific school-leaving certificates (fachgebundene Hochschulreife/Fachhochschulreife, columns 1 and 2), excluding students who study towards a teaching degree (columns 3 and 4), including only those who grew up living with both biological parents (columns 5 and 6), and excluding those who are first- or second-generation immigrants (columns 7 and 8).20 The robustness of results to the inclusion of additional controls, some of which are potentially endogenous, is studied in Table 14. Results do not change substantially when including fixed effects at the level of administrative district (401 Landkreise, columns 1 and 2), or controlling for students' high school GPA (columns 3 and 4), high school maths grade relative to German grade (columns 5 and 6), average financial returns by major (columns 7 and 8),21 or a dummy indicating that the mother is inactive (columns 9 and 10). Moreover, the coefficients on the dummy indicating that the mother is inactive are not statistically significant, suggesting that this variable is not independently associated with rank in sons' or daughters' major.

Footnote 20: The sex composition of parental occupations will vary by country and therefore I exclude those who are foreign-born or have a foreign-born parent in this robustness check.
Footnote 21: Average financial returns by major are obtained from regressions of the average salary paid in the first job after graduation on group of university major, controlling for age and square of age at graduation, federal state of the job, a female dummy, a dummy for having studied at a FH (university of applied sciences), and year of graduation. The underlying data are the 2005 and 2009 DZHW Graduate Panel Survey.

Finally, to check how the selectivity of the sample of highly educated students affects results, I use NEPS data on a sample of grade 9 school students (NEPS-SC4). NEPS Starting Cohort 4 is a nationally representative cohort of students in compulsory schooling. I estimate regressions of the probability of entering university on fathers' masculinity rank and mothers' femininity rank in occupation. The results are reported in the Appendix in Table 15. Overall, results indicate that the rank in parental occupation has no effect on sons' and very small effects on daughters' likelihood of entering university. On the other hand, and in line with prior research documenting low intergenerational educational mobility (Heineck and Riphahn, 2009), there are large effects of parental level of education on sons' and daughters' probability of starting a university degree, and these are mainly same-sex intergenerational correlations.
Taken together, this subsection showed that the paper's main findings are robust to a number of robustness checks, including different subsamples and additional control variables. To sum up, results suggest that daughters choose more typically female university majors if their fathers worked in less typically male occupations and if their mothers worked in more typically female occupations. The positive same-sex correlation between mothers and daughters is significant only when excluding mothers who have not been employed since their child was born. Sons select more typically male university majors if their fathers worked in more typically male occupations, and this effect is roughly twice the size in absolute terms compared to the father-daughter correlations. The association between mothers' and sons' ranks is close to zero and never statistically significant. The positive same-sex correlations are compatible with both a direct channel of resource transfer and an indirect channel of the transmission of gender roles. In contrast, the negative opposite-sex correlations between fathers and daughters are only compatible with a direct transfer of resources. These potential channels will be explored in more detail in Section 5.

Notes to Table 4 (Robustness check: excluding mothers and fathers without recorded occupation): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. The key regressors are femininity percentile rank of mother's occupation and masculinity percentile rank of father's occupation. Parental characteristics include age, a dummy indicating the parent was employed when offspring aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st- or 2nd-generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses. Levels of significance: ***p<0.01, **p<0.05, *p<0.1.

Non-linearity of intergenerational transmission

I next investigate how these findings vary across the distribution of the gender-typicality rank of university major. Figure 1 presents the coefficients and 95% confidence intervals on rank in mother's occupation (top panel) and father's occupation (bottom panel) from quantile regressions at the 10th to the 90th percentiles of the distribution in major rank, for daughters (left-hand side) and sons (right-hand side). All specifications include the full set of control variables. Overall, the statistically significant positive father-son and negative father-daughter correlations and the finding that mothers' rank is related to neither sons' nor daughters' major choices hold across the majority of points in the distribution of rank in students' major. Moreover, the coefficient on mothers' rank is quite stable across the different quantiles of the distribution of daughters' and sons' rank in major. The size of the father effect, on the other hand, varies across the distribution of rank in major: it takes an approximate (albeit skewed) U-shape for the sample of daughters and a (skewed) inverse U-shape for the sample of sons.
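The quantile regressions behind Fig. 1 could be sketched as follows, reusing the specification from Eq. (1). Two simplifications in this illustrative version: statsmodels' QuantReg does not accept survey weights, so they are omitted here, and all variable names remain the hypothetical ones introduced earlier.

```python
import statsmodels.formula.api as smf

# Quantile regressions at the deciles of daughters' rank in major;
# `controls` and `sample` are the illustrative objects defined above.
daughters = sample[sample["female"] == 1]
qmod = smf.quantreg(f"major_rank ~ mother_occ_rank + father_occ_rank + {controls}",
                    data=daughters)
for q in [d / 10 for d in range(1, 10)]:
    fit = qmod.fit(q=q)
    print(f"q={q:.1f}  father: {fit.params['father_occ_rank']:+.3f}"
          f"  mother: {fit.params['mother_occ_rank']:+.3f}")
```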
For both the sample of daughters and the sample of sons, the coefficient on fathers' rank is largest in absolute terms between the 20th and the 50th percentiles of the dependent variables. This suggests that the effect of fathers' rank is driven by daughters who choose less typically feminine (gender-atypical) and by sons who choose less typically male (gender-atypical) university majors. In particular, there appear to be stronger associations up until roughly the median of the distributions of sons' and daughters' rank. This suggests that the main results are driven by sons and daughters who defy gender-stereotypical major choices.

Next, I explore whether the strength of these intergenerational associations varies not only across the distribution of the dependent variable but also across the distribution of the key regressors. To this end, I perform regressions in which I interact the rank in mothers' occupation with a binary variable taking a value of 1 if the rank in mothers' occupation is at least 50 (and 0 otherwise), and interact the rank in fathers' occupation with a binary variable taking a value of 1 if the rank in fathers' occupation is at least 50 (and 0 otherwise). I choose rank 50 as the cutoff to indicate a 'gender-typical' occupation as this roughly appears to be the turning point for the dependent variables, as shown in Fig. 1. The results are presented in Table 6.

In line with previous results, the coefficients on mothers' rank do not appear statistically significant (neither for sons nor for daughters), and this holds true for both the lower half and the upper half of the distribution of mothers' rank. The coefficients on the interaction effect between mothers' rank and a dummy indicating rank of at least 50 are not statistically significant either. For fathers, on the other hand, there is again evidence for a non-linear effect in intergenerational transmission. Results for the sample of sons (columns 1 and 2) show that the positive association between fathers' rank and sons' rank is statistically significant only for fathers with a rank below 50. For fathers with a rank of 50 or above, the coefficient on fathers' rank is close to 0 and not statistically significant (as indicated by the linear combination of estimates), and the difference compared to fathers with a rank below 50 is statistically significant, as indicated by the interaction effect. The results for the sample of daughters (columns 3 and 4) paint a similar picture. The negative association between masculinity rank in fathers' occupation and femininity rank in daughters' major is statistically significant only for fathers' ranks up to 50.

Fig. 1 Quantile regressions. Notes: Figures show coefficients and 95% confidence intervals from quantile regressions at different quantiles of the dependent variable. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. The key regressors are femininity percentile rank of mother's occupation and masculinity percentile rank of father's occupation. The full set of control variables is included: age, a dummy indicating the parent was employed when offspring aged 15, three dummies for parental educational level (each separately for mothers and fathers, respectively), two dummies for birth order, three dummies for family structure when growing up, a binary variable indicating (1st- or 2nd-generation) immigrant background, and the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Sources: NEPS-SC5, Federal Labour Office, Federal Statistical Office.
For ranks of 50 and higher, the coefficient is close to 0 and not statistically significant (linear combination of estimates), and this difference is statistically significant, as indicated by the interaction effect. Taken together, results from Fig. 1 and Table 6 support the main takeaways, in terms of statistical significance and signs of key regressors, from the linear regression results presented in Table 3. Moreover, they reveal important non-linear effects in intergenerational transmission. They show that the positive father-son correlations and the negative father-daughter correlations are driven by those in gender-atypical occupations and university majors. Sons with fathers in gender-atypical occupations choose less typically male university majors, thus breaking gender stereotypes. Daughters with fathers in gender-atypical occupations choose more typically female majors, though this effect seems to disappear for daughters choosing majors with a very high femininity rank. These non-linearities are important to bear in mind when interpreting the results and considering resulting policy implications.

Notes to Table 6 (Interaction effects to test linearity of intergenerational transmission): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. Parental characteristics include age, a dummy indicating the parent was employed when offspring aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st- or 2nd-generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses. Levels of significance: ***p<0.01, **p<0.05, *p<0.1.

Direct versus indirect channel of intergenerational transmission

Section 2 described direct resource transfers and transmission of gender roles as two potential channels that can account for the results presented in Section 4. In this section, I study the presence of these two channels through a number of different heterogeneity analyses.

Direct transfer of resources

Results presented in the previous section showed that fathers' rank, but generally not mothers', is significantly correlated with the degree to which young women's and men's major choices are typically female and male, respectively. Results also revealed that mothers in more typically female occupations have daughters in more typically female majors, if these mothers were employed at some stage while raising children. Taken together, these findings suggest that the more important role of fathers in the study sample may be related to the fact that German families of the parental generation often follow a traditional division of work in which the father is the main breadwinner. That is, the father typically works full-time and the mother does not work or works part-time (Holst and Wieber, 2014). Indeed, according to the theory of direct transfer (the 'direct channel'), young adults are more likely to identify with and use the resources of the higher-status parent (Vleuten et al., 2018). To test the plausibility of a direct transfer of resources, I analyse whether results vary across parental status. To do so, I perform three different heterogeneity analyses, presented in Table 7.22 In the first, I interact mothers' and fathers' rank with a dummy indicating whether they have tertiary education.
The rationale is that tertiary education is an indicator of social status, and results from Table 3 showed that mothers' educational level is associated with daughters' femininity rank in major. In the second heterogeneity analysis, I interact the parental rank variables with a dummy for whether the individual went to school in East Germany when aged 15. The rationale behind this variable is that couples in East Germany on average have a more equal division of work, which is a result of the differences in family policy between East and West Germany during the divided years (Bauernschuster and Rainer, 2012; Holst and Wieber, 2014). Specifically, while West German policy encouraged a traditional male breadwinner model in which fathers worked and mothers stayed at home, East German policy encouraged a reconciliation of motherhood and work (Bauernschuster and Rainer, 2012). Finally, I interact the parental rank variables with a dummy for whether the individual grew up living with the mother only. While this is a measure of the intensity of parental contact, mothers in the sample who raise children living without a partner are also more likely to have higher status. Specifically, they are more likely to possess a tertiary degree and to be employed at the time their daughter or son was 15.

Columns 1 to 3 of Table 7 present results for the sample of sons and columns 4 to 6 for daughters. Column 1 shows that mothers' rank is not significantly associated with sons' major, independently of her educational level. The positive effect of fathers' masculinity rank on sons' masculinity rank holds independently of fathers' education, but it is significantly stronger if fathers have tertiary education. Column 2 shows that the effect of parental rank does not depend on whether the son grew up living in East Germany. Finally, column 3 indicates that there is a positive father-son correlation in masculinity rank, independently of whether the son grew up living with both parents. However, the coefficient on the interaction between mothers' rank and the dummy variable for living with the mother only is negative and statistically significant: mothers in a more typically female occupation have sons in less typically male majors, among those who grew up living with the mother only.

Moving on to daughters, column 4 shows that the coefficients on mothers' rank and fathers' rank, which show the effect for those mothers and fathers without a tertiary degree, are not statistically significant. Fathers in more typically male occupations who have tertiary education, however, have daughters in less typically female majors, and the interaction term is statistically significant. Moreover, mothers in more typically female occupations who are tertiary-educated have daughters in more typically female majors (the coefficient is statistically significant at the 10% level). However, the interaction term is not statistically significant. Column 5 presents the results distinguishing between East and West Germany. For the sample of daughters going to school in West Germany, only the father-daughter correlation is statistically significant.
On the other hand, for those growing up in East Germany, the coefficient on mothers' rank increases to 0.062 and becomes significant at the 10% level (though the coefficient on the interaction term is not statistically significant). Finally, column 6 shows that the negative father-daughter correlation is only statistically significant for daughters who grew up living with both parents. On the other hand, the coefficient on mothers' rank is larger (but imprecisely estimated) for daughters who grew up living with mothers only, even though the interaction term is not statistically significant.

In sum, the coefficient on fathers' rank in the sample of sons is positive and statistically significant independently of fathers' status, but the effect size is significantly larger when fathers have tertiary education. Sons' choice is significantly associated with mothers' rank only if they grew up living with the mother only. This could be explained by the higher intensity of contact with the mother (providing support for the relevance of direct transfers of resources other than genetic inheritance) and by the fact that single mothers on average have higher status (providing support for a direct transfer of resources).23 For daughters, the coefficient on mothers' rank is larger if the latter possess a tertiary degree, and if daughters grew up living in the East or grew up living with the mother only, but the interaction terms are not statistically significant. In contrast, the significant effect of fathers on daughters disappears for fathers without tertiary education and for daughters who grew up living with a mother only. Taken together, these results indicate that parental status does indeed matter for the correlation between rank in parental occupation and offspring's major choice. This suggests that the direct transfer of resources from parents to their children constitutes a relevant channel for the correlation between gender-typicality rank in parental occupation and gender-typicality rank in offspring's major.

Notes to Table 7 (Channel: direct transfer of resources): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. The key regressors are femininity percentile rank of mother's occupation and masculinity percentile rank of father's occupation. Parental characteristics include age, a dummy indicating the parent was employed when offspring aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st- or 2nd-generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses. Levels of significance: ***p<0.01, **p<0.05, *p<0.1.

Transmission of gender roles

Section 2 stated that, in addition to a direct resource transfer, a second 'indirect channel' is likely present if children choose majors that are unrelated to their parents' occupations and we still observe a significant association between gender typicality in parental occupation and gender typicality in children's majors.
In such a case, the possibility of direct resource transfers is much more limited, and therefore a significant association can suggest that the transmission of gender roles plays a role. Empirically, this can be tested by studying heterogeneous effects across those children who choose majors that are related to the same field as their parents' occupations and those whose majors are unrelated to parents' occupations. To do so, it is necessary to map each major to an occupational field. The appropriate mapping of parental occupational fields to groups of majors is in many cases not obvious. Therefore, I use a classification developed for the German Student Survey, which maps university majors to occupational fields (see Georg and Bargel, 2017).24 The mapping is shown in Table 8. Each of nine broad groups of university majors is mapped to one of nine broad fields of occupations. The broader the groups, the more likely it is that fields are sufficiently distinct from each other so that the direct transfer of resources is blocked as a channel as far as possible. For example, all university majors within natural sciences, mathematics, and computer science constitute one group and are mapped to all occupations within the natural sciences sector, such as laboratory assistants.

Footnote 23: The data confirm that single mothers have on average higher status as proxied by tertiary educational level. As mentioned in Section 3.1, information on parental occupation is reported by students, implying that students included in the sample know their father's occupation even if they grew up living with the mother only.
Footnote 24: The German Student Survey (Studierendensurvey) is a survey of students at German universities conducted by the research group on higher education at the University of Konstanz. It aims to provide information on student orientations and the study situation, and has been conducted regularly since the 1980s.

In addition to similarity of field, as demonstrated, direct resource transfer is more likely if parents have a higher status. Therefore, I define a dummy variable called 'direct transfer mother', which takes a value of 1 if the following two conditions are met: the mother has tertiary education and the student chooses a major that is in the same broad field as the mother's occupation, according to Table 8. The variable takes a value of 0 otherwise. I define a dummy variable called 'direct transfer father' in the same way.

Table 9 presents the results in which I interact mothers' and fathers' rank with the 'direct transfer' variables. The table reports results for sons without any controls (column 1) and with federal state fixed effects and individual-level controls (column 2), and for daughters without and with controls (columns 3 and 4, respectively). The coefficients on the interaction effects between parental rank and the 'direct transfer' indicators are statistically significant and large in absolute terms in all cases. The linear combination of estimates shown at the bottom of the table indicates that in the cases in which 'direct transfer' occurs, there is a positive and statistically significant same-sex association between the rank of fathers and sons, as well as between mothers and daughters. Moreover, there is a negative and significant opposite-sex association between mothers and sons, as well as between fathers and daughters. The effects are quite large compared to the main results reported in Section 4.
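A minimal sketch of how the 'direct transfer' indicators and the Table 9 interactions could be constructed is given below. The two-entry major-to-field mapping is a stand-in for Table 8, and all column names are hypothetical; the resulting specification string can then be passed to smf.wls as in the earlier sketch.

```python
# Stand-in for the Table 8 mapping of university majors to occupational fields.
major_to_field = {"biology": "natural_sciences", "mech_eng": "engineering"}
sample["major_field"] = sample["major"].map(major_to_field)

# 'Direct transfer' requires a tertiary-educated parent AND a major in the
# same broad field as that parent's occupation.
for parent in ["mother", "father"]:
    sample[f"direct_transfer_{parent}"] = (
        (sample[f"{parent}_educ"] == "tertiary")
        & (sample["major_field"] == sample[f"{parent}_occ_field"])
    ).astype(int)

# Table 9-style specification: parental ranks interacted with the indicators.
spec = ("major_rank ~ mother_occ_rank * direct_transfer_mother"
        " + father_occ_rank * direct_transfer_father + " + controls)
```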
For example, for daughters who choose a major in which direct transfer from the father occurs, a one percentile increase in fathers' masculinity rank is associated with a 0.83 decrease in daughters' femininity rank in major. In contrast, the coefficients on mothers' rank and fathers' rank are not statistically significant in most cases. This means that in those cases where the 'direct transfer' is blocked, mothers' and fathers' ranks are not significantly associated with offsprings' choices. The only exception is the coefficient on fathers' rank in the sample of sons. In the full specification in column 2 it takes a value of approximately 0.06, suggesting that in those cases where sons choose a major where the direct transfer of resources is unlikely to occur, a one percentile increase in fathers' masculinity rank is associated with a 0.06 increase in masculinity rank in sons' major. Overall, the 'direct transfer of resources' channel seems to account for a large part of the results. There is no direct evidence for the existence of an 'indirect channel' for the associations between fathers and daughters, mothers and daughters, as well as mothers and sons. However, a transmission of gender roles may still occur within the group of students who choose a major closely related to parental occupation. Given the broad categories of the mapping of majors to occupations, the potential importance of this possibility should not be discarded. Moreover, findings suggest that the transmission of gender roles is a relevant channel for the associations in masculinity rank between fathers' occupations and sons' majors.

Notes to Table 9 (Channel: transmission of gender roles): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. The key regressors are the femininity percentile rank of mother's occupation and the masculinity percentile rank of father's occupation. 'Direct transfer' indicates that the major matches the parent's occupational group according to Table 8 and the parent has tertiary education. Parental characteristics include age, a dummy indicating the parent was employed when the offspring was aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st or 2nd generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses. Levels of significance ***p<0.01, **p<0.05, *p<0.1.

Conclusion

Using data on a nationally representative cohort of first-year undergraduate students in Germany, I examined whether the femininity of mothers' occupation and the masculinity of fathers' occupation are related to whether their adult children choose typically male or female majors at university, and if so, why. The findings indicate that the gender-typicality rank in parental occupation matters for students' gendered university major choices. While the effect sizes are modest, I identified a consistent and robust association despite only considering one specific aspect of parental behaviour. Results from quantile regressions and heterogeneity analyses suggest that fathers in gender-atypical occupations can help break gender stereotypes and that the findings of the paper are at least partially driven by sons and daughters who defy gender-stereotypical major choices. It is important to note that intergenerational transmission is not the same for mothers and fathers: heterogeneity analyses by parental education showed that fathers' rank is significantly associated with sons' rank independently of their status but with daughters' rank only if they have a tertiary degree. Mothers often have less successful careers than fathers and lower levels of education; their rank in occupation is only significantly associated with daughters' rank under certain conditions, and is not correlated with sons' rank. These asymmetries highlight the need to study both same-sex and opposite-sex intergenerational correlations between mothers and fathers on the one hand, and daughters and sons on the other hand. Much of previous research on intergenerational transmission of income and education has focused solely on fathers (e.g. Lefgren et al. 2012). I identified two distinct channels through which these intergenerational correlations can operate: a direct transfer of resources, and a transmission of gender roles. Findings suggest that the transfer of occupation-specific resources from parents to their children plays an important role and that a transmission of gender roles explains at least some of the father-son associations. The finding that a transmission of gender roles occurs predominantly between fathers and sons is in line with the observation that despite the increasing number of women entering male-dominated occupations, men continue to be reluctant to enter female-dominated occupations (England, 2010). Previous research also suggests that male gender norms are more restrictive (Koenig, 2018). This points to a shortcoming of the existing literature on intergenerational transmission of gender roles, where the predominant focus has been on women (e.g. van Putten et al., 2008; Morrill and Morrill, 2013; Fernández and Fogli, 2009; Olivetti et al., 2020). In light of this, the finding that the positive association between rank in fathers' occupation and rank in sons' major is primarily driven by fathers and sons in less typically masculine occupations/majors is especially encouraging. The results from this study cannot be interpreted in a causal way. Nevertheless, some suggestive policy implications arise from the findings of this paper. First, the relevance of parental occupation shows the importance of policies that address roots of segregation that happen early in life through socialisation. One example is to invest in educational programmes designed to encourage 'atypical' choices among teenagers and to promote new role models, as showcased by initiatives such as 'Girls' day' and 'New pathways for boys' in Germany (Bettio and Verashchagina, 2009). These initiatives intend to widen the occupational aspirations of girls and boys. Results from this paper suggest that especially men in 'gender-atypical' occupations may encourage boys to aspire to less typically male occupations. Second, the interactive effect of parental status with masculinity/femininity in parental occupation suggests that high-status parents may serve as role models independent of whether they are of the same sex as their child. It also suggests that successful role model identification is contingent on status and perceived desirability.
Third, while it is important to encourage women to enter highly paid STEM fields, policy should also aim at changing men's attitudes and encouraging them to enter traditionally female-dominated fields. Results from this paper suggest that one avenue could be to stimulate men's interest in typically female fields by challenging traditional stereotypes. This paper has focused in detail on gender typicality in parental occupation, a previously unexplored determinant of the persistence of gendered university major choices in Germany. Future research could extend this in various ways. Specifically, the paper has focused on the choice of major when entering university. It would be interesting to study how gender typicality of parental occupation and entry to a gender-typical major affect the probability of dropping out, switching major, and successfully obtaining a university degree. With regard to external validity, it is possible that results are different among individuals with lower levels of education. Therefore, future research could explore whether the intergenerational transmission and its underlying channels are different when studying, for example, vocational education choices. Moreover, findings from this paper suggest important non-linearities in intergenerational associations. Future research could build on the rank measure used in this paper by further modeling non-linear relationships in ways that do not impose arbitrary cutoffs in what constitutes a gender-typical occupation or university major. Finally, most papers, including this one, focus on one specific determinant of major choice. Future research that considers the relative importance of different socialisation agents, including peers and teachers, would therefore be valuable.

Notes to Table 12 (Interactive effect of mother's and father's gender-typicality rank in occupation): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. Parental characteristics include age, a dummy indicating the parent was employed when the offspring was aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st or 2nd generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses. Levels of significance ***p<0.01, **p<0.05, *p<0.1.

Notes to Table 13 (Additional variations on the analysis sample): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. The key regressors are the femininity percentile rank of mother's occupation and the masculinity percentile rank of father's occupation. Parental characteristics include age, a dummy indicating the parent was employed when the offspring was aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st or 2nd generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Survey weights used. Standard errors in parentheses.
Levels of significance ***p<0.01, **p<0.05, *p<0.1. Sources: NEPS-SC5, Federal Labour Office, Federal Statistical Office. Dependent variable: gender-typicality percentile rank in major. Sample: all school leaving certificates.

Notes to Table 14 (Additional control variables): Table shows estimates from OLS regressions. The dependent variable is the gender-typicality percentile rank of sons'/daughters' university major. The key regressors are the femininity percentile rank of mother's occupation and the masculinity percentile rank of father's occupation. Parental characteristics include age, a dummy indicating the parent was employed when the offspring was aged 15, and three dummies for parental educational level (each separately for mothers and fathers, respectively). Individual characteristics include two dummies for birth order, three dummies for family structure when growing up, and a binary variable indicating (1st or 2nd generation) immigrant background. Parental income is the natural logarithm of the median income in mother's and father's occupational group, respectively. Columns 1 and 2 include fixed effects at the level of the 401 administrative districts (Landkreise) instead of the 16 federal states (Bundeslaender). High school GPA (Abiturnote) can take values from 1 to 6, with lower values indicating a better GPA. Relative maths grade is the maths grade minus the German grade, each taking values from 1 to 15, with higher values indicating better results. Expected income is obtained using earnings information by university major from the DZHW Graduate Panel Survey. 'Mother inactive' is defined as the mother not having been employed since the individual was born. Survey weights used. Standard errors in parentheses. Levels of significance ***p<0.01, **p<0.05, *p<0.1.
Effect of different microwave power levels on inactivation of PPO and PME and also on quality changes of peach puree

The effect of microwave (MW) treatment with different power densities (4.4, 7.7, and 11.0 W/g) on polyphenol oxidase (PPO) and pectin methyl esterase (PME) inactivation in peach puree was studied, and the changes in color, rheological properties, total polyphenol and flavonoid content and antioxidant capacity were evaluated. By using time/temperature data collected during MW heating, three cook value levels (0.36, 10, 24 min) for each power density were calculated. PPO activity significantly decreased from ca. 50% to ca. 5% when increasing the cook value level, regardless of the power density applied, while PME activity significantly decreased from 40.6% to 10.2% when the power density increased from 4.4 to 11.0 W/g at the 24 min cook value. MW treatment did not alter the flow behaviour of peach puree. The apparent viscosity values of peach puree significantly increased after MW treatment with increasing cook value, regardless of the power density applied. The L* values of peach puree significantly increased from 36.98 to 38.10 or more after MW treatment at cook values of 10 min and 24 min. MW treatment could maintain the amount of total polyphenol, total flavonoid and antioxidant capacity, preserving the nutritional and functional values of the product.

Introduction

Peaches are rich in polyphenols like chlorogenic acid, neochlorogenic acid, flavan-3-ols and flavanols, which have a proven protective effect against cardiovascular disease and reduce digestive tract cancer risk (Canalis et al., 2020). Peach puree is a convenient food product that can be used for baby food, mixed drinks, as a spread on bread and so on. However, the shelf-life of unprocessed peach puree is limited due to the activities of polyphenol oxidase (PPO), which is responsible for enzymatic browning and can reduce nutritional quality, and pectin methyl esterase (PME), which affects rheological properties such as viscosity and texture (Arjmandi et al., 2017; Zhou et al., 2016). Traditional fruit puree processing involves high temperature (>80 °C) processing and adversely affects the color and nutritional quality of the material (Garza et al., 1999). Peach puree became darker with the increase of heating temperature from 110 to 135 °C, and the L* and b* values decreased with increasing heating time (Ávila and Silva, 1999). In addition, the content of polyphenols in processed peach puree, including neochlorogenic acid, chlorogenic acid, catechin and β-carotene, significantly decreased after heating at 100 °C for 5 min (Lavelli et al., 2008). Microwave (MW) treatment causes a rapid increase in the temperature of the product. Microwaves penetrate into the food and generate heat throughout the whole volume of the food, so that thermal conductivity and heat transfer coefficients are no longer the limit of heat transfer (Qu et al., 2021). MW treatment for pasteurization or sterilization purposes, mainly aimed at the inactivation of target microorganisms and enzymes, has been studied in liquid foods such as kiwifruit puree (Benlloch-Tinoco et al., 2015a; 2015b), mamey fruit (Palma-Orozco et al., 2012) and apple puree (Picouet et al., 2009). Volumetric heating during MW processing results in better preservation of organoleptic, nutritional and functional properties (Huang et al., 2007; Benlloch-Tinoco et al., 2015b). Benlloch-Tinoco et al.
(2015a) found that MW treatment (2 W/g, 340 s) allowed significantly greater preservation of kiwifruit puree chlorophylls than conventional heating (97 °C, 30 s), which led to 92-100% degradation of these pigments. Based on the sensory attributes evaluated, including appearance, color, odour, taste, sweetness, acidity, consistency and overall acceptance, the MW treated kiwifruit puree (2 W/g, 340 s) was preferred. The authors also found that losses of vitamin C, vitamin E, total phenols, total flavonoids and antioxidant activity in kiwifruit puree under MW treatment were all significantly lower than the corresponding values for conventional heating (84 °C/300 s), meaning that MW treatment resulted in better maintenance of the bioactive compounds. In addition, the MW treated sample exhibited smaller color changes, and inactivation of total mesophilic bacteria, yeasts and moulds was equal to or greater than in the conventionally heated sample (Benlloch-Tinoco et al., 2014a). Arjmandi et al. (2017) also found that MW treatment (1900-3150 W/600 mL, 150-180 s) preserved tomato puree's a* value better than conventional heating (96 ± 2 °C, 35 s), leading to greater retention of antioxidant capacity, vitamin C, and lycopene content. Furthermore, the inactivation of the enzymes responsible for puree quality loss has also been reported in previous studies, and it was found that MW treatment decreased the residual activity of enzymes better than conventional pasteurization. For example, a low residual activity of peroxidase (POD) and PME of 10-20% in tomato puree was detected after MW treatment (1900-3150 W/600 mL, 150-180 s), while polygalacturonase (PG) was more thermo-resistant, with a residual activity of 30-56% (Arjmandi et al., 2017). In our previous study, we observed that 80% of PPO in defatted avocado puree could be inactivated in 80 s at 11.0 W/g of MW treatment, and the residual activity after treatment remained constant under 20% after 31 days of storage. PPO in mamey fruit pulp was completely inactivated after long microwave treatment using a high MW power (3 W/g, 165 s/300 s) (Palma-Orozco et al., 2012). In the study of Benlloch-Tinoco et al. (2014b), the pasteurization unit was proposed as a measure of the lethal effect of processes with the aim of comparing conventional heating and MW treatment, and the results showed that a higher thermal load of 19.27 min was necessary to stabilize the kiwifruit puree under conventional heating than under MW treatment (0.003-8 min) at any of the conditions studied. The higher effectiveness of MW treatment could be attributed to non-thermal effects associated with the MW treatment. The purpose of this study was to investigate the inactivation of PPO and PME in peach puree by MW treatment at three levels of power density. Moreover, the effects of MW treatment on the color, rheological properties, total polyphenol, total flavonoid and antioxidant capacity of peach puree were evaluated.

Materials

Peaches (Prunus persica) were purchased from a local supermarket in Auckland, New Zealand. They were washed, peeled, deseeded and pureed using a home juicer. The peach puree was stored in bags at −40 °C until used for experiments. Frozen sample was thawed at 4 °C for 12 h prior to experiments.

Microwave treatment

The microwave treatment of the puree was carried out in a modified domestic microwave oven (MicrowaveWork Station-240, FISO Technologies Inc., Canada).
100 g of sample was tempered to an initial temperature of 10 ± 0.5 °C and then treated in the microwave oven in a 150 mL glass beaker at different power densities (4.4, 7.7, and 11.0 W/g) at a frequency of 2450 MHz. The glass beaker was placed in the middle of a turntable plate in the oven. The temperature during treatment was measured using fiber optic probes (FOD-NS-967A; FISO Technologies Inc., Canada) with an accuracy of ±0.5 °C. The tip of the fiber optic probe was placed at the radial centre of the beaker and fixed with a plastic holder, and the temperature data were collected approximately every 0.5 s by a computer. The treated samples were immediately submerged in ice-water.

Dielectric properties measurement

Dielectric properties (dielectric constant, ε′, and dielectric loss factor, ε″) of peach puree were measured using an open-ended coaxial probe (85070E, Agilent Technologies, Malaysia) connected to a network analyser (E5062A, Agilent Technologies, Malaysia). The dielectric properties of the sample at 2450 MHz were measured after samples reached a temperature in the range of 20-90 °C (at about 5 °C intervals) in a water bath.

Cook value

Cook value (CV) is the cumulative heat impact of a complex time/temperature history on a food's quality attributes, first used by Mansfield (1962). Using the time/temperature data collected during microwave heating, the CV (min) could be calculated by the following Eq. (1):

CV = F_100 = ∫_0^t 10^((T − 100)/z) dt      (1)

where F_100 is the equivalent thermal treatment time (min) at 100 °C, and T is the temperature (°C) at time t (min). The z value, ranging from 25 °C to 47 °C, corresponds to sensory attributes, texture softening, and color changes. A z value of 33 °C is often used to compute a cook value describing the overall quality loss (Wang et al., 2003; Bornhorst et al., 2017). In this study, the treatment times required to heat the puree to 100 °C at the 4.4, 7.7 and 11.0 W/g power densities were chosen as three cook value levels for every power density.

Polyphenol oxidase assay

The extraction of polyphenol oxidase (PPO) from peach puree and the analysis of PPO activity were performed according to the method described by Baltacıoglu and Coruk (2021) with small modifications. Briefly, 3 g of peach puree was homogenized with 10 mL phosphate buffer (0.1 M, pH 6.5, containing 5% poly(vinylpolypyrrolidone) (PVPP)), and the mixture was centrifuged at 5525 g at 4 °C for 15 min (4K15, Sigma, Germany). The supernatant was collected and the crude extract was kept at 4 °C before analysis. The assay mixture consisted of 3 mL of 0.1 M pyrocatechol dissolved in 0.1 M phosphate buffer and 0.5 mL of crude extract. The absorbance change at 420 nm at 25 °C was recorded for 1 min using a UV-VIS spectrometer (Lambda 35 UV/VIS, PerkinElmer, USA). An enzyme activity unit was defined as an increase of 0.1 in absorbance at 420 nm per min. The residual activity (RA) was defined as:

Residual activity (%) = (specific activity after treatment / specific activity before treatment) × 100      (2)

Pectin methylesterase assay

Pectin methylesterase (PME) activity was determined with an auto titrator (800 Dosino, Metrohm, Switzerland) using the method previously described by Benlloch-Tinoco et al. (2013) with some modifications. Firstly, peach puree was diluted twice with ultrapure water. Then, the reaction mixture, consisting of 5 mL of peach puree solution and 20 mL of 1% apple pectin (70-75% esterification, Fluka) containing 0.1 M NaCl, previously adjusted to pH 7 with 0.05 M NaOH, was prepared.
PME activity was measured at 25 °C by recording the amount of 0.05 M NaOH required for static titration at pH 7 for 5 min. One unit of PME activity (U/mL) was calculated by:

PME activity (U/mL) = (V × N) / (V_s × t_r)      (3)

where V and N are the volume (mL) and normality of the NaOH, respectively, V_s is the volume of sample (mL) and t_r is the reaction time (min).

Color measurement

The color of untreated and MW treated samples was measured with a Minolta Chroma meter (Model CR, Minolta, Japan). The equipment was calibrated using a standard white reflector plate. Readings were obtained using the standard CIE L* a* b* color system. All measurements were made in triplicate and the results were averaged. The total color difference (ΔE) was calculated using Eq. (4):

ΔE = [(L* − L0*)^2 + (a* − a0*)^2 + (b* − b0*)^2]^(1/2)      (4)

where L0*, a0*, and b0* were the values for untreated samples.

Rheological measurements

Rheological measurements were carried out using a rheometer (AR G2, TA Instruments, UK) as described by Dankar et al. (2018) and Liu et al. (2013) with small modifications, using parallel plates (40 mm diameter) with a gap size of 1 mm. The temperature was kept constant at 25 °C using a Peltier system. The steady-state shear experiments were carried out in the shear rate range of 0.01-100 s⁻¹. After a rest period, the samples were subjected to shearing at 300 s⁻¹ for 5 min to avoid any thixotropy (data not shown). Then, a logarithmic decreasing stepped protocol (100-0.01 s⁻¹) was used in order to guarantee the steady-state condition. For dynamic rheological studies, to ensure that all measurements were carried out within the linear viscoelastic region, strain sweep tests were first performed (data not shown) in oscillation mode at 25 °C. Then, frequency sweep measurements were carried out at 1.0 Pa, a shear stress value within the linear viscoelastic range. The storage modulus (G′) and loss modulus (G″) were obtained over the angular frequency range of 1-100 rad/s.

Total phenolic assay

The extraction of total phenols was carried out according to the method reported by Cantín et al. (2009) with minor modifications. Five grams of peach puree were mixed with the extraction solution, consisting of 0.5 N HCl in methanol/Milli-Q water (80% v/v). The homogenate was sonicated at 1200 W for 15 min (Sonorex digital 10 P, Bandelin, Germany) and then centrifuged at 5525 g at 4 °C for 15 min. The supernatant was collected and methanol (80% v/v) was added to a final volume of 15 mL. The hydroalcoholic extract was used for the total phenolic, total flavonoid and antioxidant capacity assays. Total phenolic content was determined by the Folin-Ciocalteu assay method using a plate reader (EnSpire 2300, PerkinElmer, USA). Twenty microliters of extract were mixed with 100 μL of Folin-Ciocalteu reagent. After exactly 5 min, 90 μL of 7.5% (w/v) Na₂CO₃ was added. After 1 h of reaction in the dark, detection of the mixture was performed at 765 nm. Results for total phenolic content were expressed as μg gallic acid equivalents (GAE) per gram of sample.

Total flavonoid assay

Total flavonoid content was determined based on the method of Cantín et al. (2009). One hundred microliters of the methanolic extract were mixed with 15 μL of 5% NaNO₂. After 5 min, 15 μL of 10% AlCl₃ was added. After 1 min, 100 μL of 1 N NaOH was added. Absorbance at 510 nm was measured against a blank with a plate reader (EnSpire 2300, PerkinElmer, USA). Results for total flavonoid content were expressed as μg of catechin equivalents (CE) per gram of sample.

Antioxidant capacity analysis
DPPH radical scavenging ability assay

The method was performed as described by Wang et al. (2012) and Guan et al. (2016) with small modifications. Forty microliters of methanolic extract were mixed with 260 μL of DPPH solution, and the mixed solution was then left to react in the dark for 45 min. The absorbance was measured at 517 nm with a plate reader (EnSpire 2300, PerkinElmer, USA). The results were expressed as μmol of Trolox equivalent (TE) per gram of peach puree (μmol TE/g).

FRAP assay

Forty microliters of methanolic extract were mixed with 260 μL of TPTZ solution, and the absorbance was measured at 593 nm after 30 min of reaction at 37 °C. The results were expressed as μmol of Trolox equivalent (TE) per gram of peach puree (μmol TE/g).

Statistical analysis

One-way analysis of variance (ANOVA) at a significance level of α = 0.05 was carried out with Microcal Origin 8.0 (Microcal Software, Inc., USA). Rheological properties were measured in duplicate, all other experiments were performed in triplicate, and the results were reported as mean ± standard deviation (SD).

Results and discussion

Fig. 1 shows the temperature profile of peach puree treated by MW at 4.4, 7.7 and 11.0 W/g power densities. It can be seen from Fig. 1a that it took 101 s, 50 s, and 34 s at the 4.4, 7.7 and 11.0 W/g power densities, respectively, to heat the puree to 100 °C, which shows that increasing the power density considerably increased the heating rate. Fig. 1b shows the changes in the dielectric constant (ε′) and dielectric loss factor (ε″) of peach puree over a temperature range of 10-100 °C. The value of ε′ showed no significant changes over the temperature range investigated, while the ε″ value decreased with an increase in temperature from 10 °C to 50 °C and then did not change above 50 °C. Based on the time-temperature profiles presented in Fig. 1a, cook values of 0.36, 10 and 24 min were calculated for each power density. The three cook values were denoted as cook value levels 1, 2, and 3. Benlloch-Tinoco et al. (2014b) proposed the use of pasteurization units as a measure of the lethal effect to compare the inactivation of L. monocytogenes and peroxidase (POD) activity in kiwifruit puree, and the results showed that the conventional heating mode required a significantly higher thermal load of 19.27 min to achieve the pre-set level of POD inactivation than MW treatment (0.046 min, 2 W/g for 200 s). In the study of Bornhorst et al. (2017), a microwave-assisted pasteurization system at 95 °C was the best process for mashed potato and green pea, because it had the smallest hot spot cook values (6.5 min for mashed potato, 13.6 min for green pea) and the least color change, while 90 °C hot water was the worst (11.3 min for mashed potato, 18.2 min for green pea). Marszałek et al. (2015) reported that the vitamin C of strawberry decreased by 62% and 4-22% after conventional heating (90 °C for 15 min) and continuous MW treatment (2.0-3.5 L/min, 7-10 s), respectively, possibly due to the much lower pasteurization unit delivered to the product by MW treatment (0.15-1.04 min for MW treatment versus 162.73 min for the conventional heating method).

The inactivation of PPO activity

As shown in Fig. 2, MW treatment significantly reduced the activity of PPO in peach puree. The residual activity of PPO significantly decreased from ca. 50% to ca. 5% when increasing the cook value from level 1 to level 2, regardless of the power density applied.
Meanwhile, the residual activity in peach puree treated by MW at cook value levels 2 and 3 showed no significant difference. PPO, one of the major oxidative enzymes involved in the browning reaction, affects the color and flavour of peach puree (Chakraborty et al., 2014). The resistance of PPO to MW treatment depends on the enzyme source, medium composition, and so on (Matsui et al., 2008). In their study, the residual PPO activity decreased to 49.2% after MW treatment at 7.7 W/g for 38 s. On the other hand, the activity of PPO in defatted avocado puree increased significantly, by a maximum of 52.6%, when the sample was treated by MW for less than 40 s at 7.7 W/g. The residual activity then significantly decreased to ca. 20% when the heating time was increased to 100 s. Benlloch-Tinoco et al. (2013) reported that MW treatment could significantly inactivate PPO in kiwifruit puree, and the residual activity of PPO significantly decreased from ca. 80% to ca. 10% as the kiwifruit was treated by MW from 0.6 W/g to 1.8 W/g for 100 s. Thus, PPO from different sources showed significantly different resistance to MW treatment, possibly due to iso-enzymes, existing state, and environmental and physical-chemical conditions such as pH and soluble solids content (Mayer and Harel, 1979; Zhang et al., 2010). Furthermore, the results showed that there was no significant difference among the three power densities at a given cook value level, indicating that the heating rate of MW treatment had no significant effect on the residual PPO activity. In previous studies, compared with conventional heating, MW treatment showed greater effectiveness for the inactivation of PPO and POD in green coconut water (Matsui et al., 2008). Benlloch-Tinoco et al. (2013) and Soysal and Söylemez (2005) both found that the inactivation of kiwifruit PPO and carrot POD increased as the MW power increased. In MW treatment, the time required for an approximately 90% reduction in carrot POD activity was shorter than in conventional heating at the same temperature (Soysal and Söylemez, 2005). However, the effect of thermal load was not considered in the above studies. Benlloch-Tinoco et al. (2014b) reported that conventional heating required a significantly higher thermal load to achieve the pre-set level of POD inactivation in kiwifruit puree than any of the MW treatments, irrespective of whether the comparison was carried out at the coldest or hottest spot of the sample. However, the effect of heating rate on enzyme inactivation was not discussed. In this study, the same thermal load caused by MW treatment induced the same inactivation level of PPO in peach puree, regardless of the power density applied. Thermal load played a key role in the inactivation of PPO in peach puree by MW treatment.

The inactivation of PME activity

As shown in Fig. 3, the inactivation of PME by MW treatment differed from the inactivation of PPO. After MW treatment at 4.4 W/g power density at cook value level 1, the residual activity of PME was 72.1%, higher than the value of 49.7% detected for PPO. PME turned out to be more resistant to MW treatment than PPO in peach puree. Benlloch-Tinoco et al. (2013) also found that the inactivation percentages of PPO and PME were 97.5% and 77.2%, respectively, after an optimum MW treatment at 2 W/g for 340 s. PME was a highly heat stable enzyme, as intense heat treatment was necessary to inactivate it.
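Throughout these comparisons, the thermal load is quantified by the cook value of Eq. (1), which can be evaluated numerically from a logged time/temperature profile. The sketch below is illustrative only: it assumes a linear come-up from 10 °C to 100 °C in 34 s (the come-up time at 11.0 W/g), which is not the measured profile.

```python
import numpy as np

def cook_value(t_min, temp_c, z=33.0, t_ref=100.0):
    """Evaluate Eq. (1), CV = integral of 10**((T - T_ref)/z) dt, with the
    trapezoidal rule over a logged time/temperature profile.
    t_min: time stamps in minutes; temp_c: temperatures in deg C."""
    lethal_rate = 10.0 ** ((np.asarray(temp_c) - t_ref) / z)
    return float(np.trapz(lethal_rate, np.asarray(t_min)))

# Assumed linear heating profile, sampled roughly every 0.5 s as in the methods
t = np.linspace(0.0, 34.0 / 60.0, 69)   # minutes
T = np.linspace(10.0, 100.0, t.size)    # deg C
print(f"CV = {cook_value(t, T):.3f} min")
```

Because the lethal rate grows exponentially with temperature, almost all of the cook value accumulates near the end of the come-up and during any holding time at high temperature, which is why the higher cook value levels (10 and 24 min) dominate the come-up contribution.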
According to previous studies and the present study, MW treatment could be an alternative technique to inactivate PME. In the study of Arjmandi et al. (2017), continuous MW treatment, in particular high power/short time (3.2 W/mL for 180 s, 4.5 W/mL for 160 s and 5.2 W/mL for 150 s), significantly decreased the PME residual activity to 14.65-15.02%. The PME activity in kava juice after continuous MW treatment at 41.4 °C, 52.3 °C and 65.2 °C decreased to 83%, 73% and 34%, respectively (Abdullah et al., 2013). Tajchakavit and Ramaswamy (1997) reported that PME inactivation in orange juice was significantly faster in the MW treatment mode than in the conventional heating mode; for example, the D-values were 154 s and 7.37 s for conventional heating and MW treatment, respectively, at a common reference temperature of 60 °C. The average residual activity of PME in peach puree treated by MW decreased from 70.1% to 41.7% when increasing the cook value level from 1 to 2. Meanwhile, the residual activity showed no significant difference among the different power densities applied at cook value levels 1 and 2. The residual activity significantly decreased from 40.6% to 10.2% when the power density increased from 4.4 to 11.0 W/g at cook value level 3. It was deduced that thermal load was not the only factor that affected PME inactivation by MW treatment, and the heating rate was a possible reason for this phenomenon. Tajchakavit and Ramaswamy (1997) suggested that the remarkable difference in inactivation effect between MW treatment and conventional heating showed the possibility of some contributory non-thermal effects under the MW heating condition. Benlloch-Tinoco et al. (2014b) also proposed some contributory non-thermal effects associated with MW treatment, as MW heating required a lower thermal load than conventional heating to inactivate microorganisms and enzymes at the power levels studied. It has been reported that the non-thermal impact of MW treatment on microorganisms would be more effective than on the surrounding medium, thus destroying microbial cells (Heddleson and Doores, 1994; Kozempel et al., 1998).

The change of rheological properties

Fig. 4 shows that the apparent viscosities of all peach puree samples decreased with increasing shear rate from 0.01 to 100 s⁻¹, exhibiting the non-Newtonian characteristic of a pseudoplastic nature. The pectin and fibre content of peach puree provided the puree with a kind of internal structure, which resulted in shear-thinning (Massa et al., 2010). From a rheological point of view, peach puree can be considered a weak gel; its viscosity is not stable and is influenced by changes in the shear rate (Arjmandi et al., 2017). Massa et al. (2010) and Maceiras et al. (2007) both found that peach puree showed shear-thinning behaviour across the shear rate range. Compared with the untreated sample, the apparent viscosities of peach puree were not significantly altered by MW treatment at cook value level 1, while the values significantly increased after MW treatment at cook value levels 2 and 3. MW treatment with different power densities at a common cook value level showed no significant effect on apparent viscosity. This means that increasing the thermal load significantly boosted the apparent viscosity, regardless of the power density applied. A similar result was found by Arjmandi et al. (2017): MW treatment, in particular high power combined with short time (2700 W/160 s, 3150 W/150 s), provided tomato juice with a higher viscosity compared to untreated samples.
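One common way to summarise such shear-thinning (pseudoplastic) curves is to fit the Ostwald-de Waele power-law model, in which apparent viscosity varies as K·(shear rate)^(n−1) and n < 1 indicates shear-thinning. The paper itself does not report power-law parameters, so the data and fitted values in this sketch are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law_viscosity(shear_rate, K, n):
    # Ostwald-de Waele model: eta = K * gamma_dot**(n - 1); n < 1 -> shear-thinning
    return K * shear_rate ** (n - 1.0)

# Synthetic shear-thinning data over the 0.01-100 1/s range used in the methods
gamma = np.logspace(-2, 2, 20)
rng = np.random.default_rng(1)
eta = power_law_viscosity(gamma, K=12.0, n=0.35) * rng.normal(1.0, 0.03, gamma.size)

(K_fit, n_fit), _ = curve_fit(power_law_viscosity, gamma, eta, p0=(10.0, 0.5))
print(f"K = {K_fit:.2f} Pa.s^n, n = {n_fit:.2f}")   # n well below 1, as expected
```

With such a fit, an increase in apparent viscosity after treatment would show up mainly as a larger consistency index K, while the flow behaviour index n staying unchanged is consistent with the observation that MW treatment did not alter the flow behaviour itself.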
The viscosity of kiwifruit puree also increased when a higher intensity MW treatment was applied, and samples treated by MW at 1.2 W/g for 340 s, 1.8 W/g for 300 s and 2 W/g for 200 s showed significantly higher viscosity values than the rest of the samples (Benlloch-Tinoco et al., 2012). Our previous study also showed that the apparent viscosity of MW treated (11.0 W/g, 60-80 s) defatted avocado puree was slightly higher than that of control samples. Viscosity is influenced by the presence of pectin and by the inactivation of PME and PG after thermal treatment. Owing to disruption of the cell walls in samples during thermal treatment, the soluble pectin content could be increased (Arjmandi et al., 2017). Furthermore, Maran et al. (2013) reported that the extraction efficiency of pectin could be improved by raising the MW power from 160 W to 480 W at the same solid-liquid ratio, owing to the direct effects of MW energy on the plant materials. This phenomenon could be explained by the fact that, with increasing MW power, more electromagnetic energy is transferred to biomolecules by ionic conduction and dipole rotation, which results in more power being dissipated inside the solvent and plant material; this quickly generates molecular movement and heating in the extraction system and improves the pectin extraction efficiency (Maran et al., 2013). Thus, a higher intensity of MW treatment could result in a product with a higher viscosity. On the other hand, further inactivation of PME by MW treatment at a higher cook value also contributed to the enhancement of apparent viscosity. The lower PME residual activity at cook value levels 2 and 3 corresponded to the higher values of apparent viscosity. Fig. 5 shows the change in the dynamic rheological characteristics of peach puree treated by MW. For all samples, the storage modulus G′ was greater than the loss modulus G″ throughout the frequency range, indicating that peach puree could be characterized as a weak gel network. Compared with untreated peach puree, the G′ and G″ of samples after the different MW treatments showed no significant difference. Some factors affecting the elastic behaviour of a puree are particle concentration, morphology, flexibility, and the way particles agglomerate (Lopez-Sanchez et al., 2011). The change in G′ and G″ after MW treatment was consistent with the results for particle size distribution (PSD) in peach puree, which showed that the PSD of peach puree was not altered by MW treatment (data not shown).

[Fig. 3. The effect of microwave treatment under the same cook value on the PME residual activity of peach puree.]

The change of color

Table 1 shows the change of L*, a*, b* and ΔE values of peach puree after MW treatment. The L*, a* and b* values of peach puree were not significantly changed by MW treatment at cook value level 1, except the L* value at 7.7 W/g. The L* values of peach puree significantly increased from 36.98 to 38.10 or more after MW treatment at cook value levels 2 and 3, while no significant difference was observed among the samples with different power densities applied at a common cook value. Meanwhile, the a* and b* values of peach puree, which were in the ranges of 0.74-1.66 and 14.52-15.64, respectively, were not changed after any MW treatment as compared with the untreated sample, except the b* value at 7.7 W/g. This indicated that MW treatment at a higher cook value resulted in peach puree with a brighter color than the untreated sample, mainly due to the further inactivation of PPO by intense MW treatment, as shown above (de Ancos et al., 1999).
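For a concrete reading of Eq. (4), the total colour difference can be computed directly from the CIE L*a*b* coordinates. The L* values below are taken from the text, but the a* and b* values are merely assumed within the reported ranges, so the resulting ΔE is illustrative only:

```python
import numpy as np

def delta_e(lab, lab_ref):
    # Eq. (4): total colour difference, sqrt(dL*^2 + da*^2 + db*^2)
    return float(np.linalg.norm(np.asarray(lab, float) - np.asarray(lab_ref, float)))

untreated = (36.98, 1.20, 15.00)   # L* from the text; a*, b* assumed in-range
treated = (38.10, 1.10, 15.20)     # L* after MW at cook value level 2
print(f"dE = {delta_e(treated, untreated):.2f}")
```

Because a* and b* barely changed, ΔE here is driven almost entirely by the increase in L*, matching the interpretation of a brighter puree after the more intense treatments.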
In the study of Palma-Orozco et al. (2012), as compared to the untreated sample, the L* value of mamey fruit after MW treatment (0.3-3.1 W/g, 30-300 s) was not altered, but both the a* and b* values decreased significantly; therefore, ΔE values increased when the exposure time was increased. In addition, MW treatment (652 W, 35 s) decreased the L* value of Granny Smith apple puree from 66.8 to 51.5, and increased the a* value from −9.5 to −5.8 (Picouet et al., 2009). From the above studies, it can be deduced that the color change after MW treatment is closely related to the source of the puree, since different purees contain a wide range of enzymes, anthocyanins and chlorophylls, as well as differing original colors. Thus, the effect of MW treatment on a puree results from the combined effects of enzyme inactivation and pigment degradation.

The change of total phenolic, total flavonoid and antioxidant capacity

Table 2 shows the change of total phenolic, total flavonoid and antioxidant capacity in peach puree after MW treatment. The initial contents of total phenolic and total flavonoid in peach puree were 156.41 ± 11.03 μg GAE/g and 37.28 ± 5.53 μg CE/g, respectively, which is in the range reported by Cantín et al. (2009), who found that total phenolics and total flavonoids in 15 peaches and nectarines ranged from 127 to 713 μg GAE/g and 18 to 309 μg CE/g, respectively. The total phenolic content in the peach puree after MW treatment showed a decreasing trend; however, no significant change was observed, except for the values in peach puree after MW treatment with power densities of 4.4 and 7.7 W/g at cook value level 2 (values with different letters within the same column denote a significant difference according to the Tukey test, P < 0.05). MW treatment did not alter the total flavonoid content or the antioxidant capacity of peach puree, regardless of the power density and cook value applied. Total phenolics and total flavonoids are widely known to be common substrates of enzymes such as PPO and POD. The effective inactivation of PPO in a short treatment time partly contributed to the better preservation of total phenolics, total flavonoids and antioxidant capacity. Since total phenolics and total flavonoids are important bioactive compounds contributing to the antioxidant capacity of peaches (Gil et al., 2002), the change in antioxidant capacity was in accordance with that of total phenolics and total flavonoids in peach puree after MW treatment. Picouet et al. (2009) also found that the total phenolics in apple puree were not altered by MW treatment (652 W, 35 s), the contents being similar: 1124 and 1166 mg GAE/kg, respectively. Benlloch-Tinoco et al. (2015b) found that MW treatment (2 W/g, 340 s) did not cause significant losses in the total phenolics of kiwifruit puree, while it reduced the total flavonoid content by 28.80%. In the study of Igual et al. (2010), MW treatment (45 W/mL, 30 s) and conventional heating (80 °C, 11 s) caused a similar decrease in total phenolic content (34.64%) and %DPPH (40%), but MW treated grapefruit juices better preserved total phenolics and antioxidant capacity during storage when compared with fresh or conventionally pasteurised ones. Arjmandi et al. (2016) reported that the maximum total phenolic content was obtained in semi-industrial MW treated orange-colored smoothies (1.0-18 W/mL, 93-646 s), without significant differences among MW treatments.
Meanwhile, the loss of antioxidant capacity was only 5% under the combination of high power/short time, and this value was 28% of that observed in the conventionally pasteurized (90 °C, 35 s) sample (Arjmandi et al., 2016). Thus, MW treatment was superior in preserving the nutritional and functional values of the product, mainly due to the enzyme inactivation characteristic of its high heating rate.

Conclusion

MW treatment can be considered a suitable means of processing peach puree and preserving the quality of the product. PPO and PME in peach puree could be effectively inactivated by MW treatment, while PME turned out to be more resistant to MW treatment than PPO. The same level of thermal load, calculated as the cook value, resulted in a similar inactivation level of PPO in peach puree, regardless of the power density applied, possibly indicating that the heating rate had no effect on the inactivation of PPO. However, a different situation was observed for PME inactivation by MW treatment. The residual activity of PME significantly decreased when the power density increased from 4.4 to 11.0 W/g at cook value level 3, suggesting that thermal load was not the only factor affecting PME inactivation by MW treatment. As compared to the untreated sample, MW treatment at a higher cook value resulted in peach puree with a brighter color, while the amounts of total polyphenol, total flavonoid and antioxidant capacity were maintained after treatment. The apparent viscosity values significantly increased after MW treatment at a higher cook value, regardless of the power density applied. Accordingly, the use of MW treatment offers a good alternative to conventional pasteurization for enzyme inactivation.

Declaration of competing interest

The authors do not have any conflict of interest to declare.

[Table 2. The effect of microwave treatment under the same cook value on total phenolic, total flavonoid and antioxidant capacity of peach puree. Values with different letters within the same column denote significant difference according to Tukey test (P < 0.05).]
The closedness of shift invariant subspaces in L^{p,q}(ℝ^{d+1})

*Correspondence: jczhangqingyue@mail.nankai.edu.cn. College of Science, Tianjin University of Technology, Tianjin, China

Abstract: In this paper, we consider the closedness of shift invariant subspaces in L^{p,q}(ℝ^{d+1}). We first define the shift invariant subspaces generated by the shifts of finitely many functions in L^{p,q}(ℝ^{d+1}). Then we give some necessary and sufficient conditions for the shift invariant subspaces in L^{p,q}(ℝ^{d+1}) to be closed. Our results improve some known results in (Aldroubi et al. in J. Fourier Anal. Appl. 7:1-21, 2001).

MSC: 42C15; 42C40; 41A58

Closedness is an expected property for shift invariant subspaces and is widely considered in the study of shift invariant subspaces. de Boor, DeVore, Ron, Bownik and Shen studied the closedness of shift invariant subspaces in L²(ℝ^d) [9-11], and Jia, Micchelli, Aldroubi, Sun and Tang discussed the closedness of shift invariant subspaces in L^p(ℝ^d) [1, 12, 13]. In this paper, we consider the closedness of shift invariant subspaces in L^{p,q}(ℝ^{d+1}). In order to present our main result, which extends the result in [1], we introduce some definitions and notation. The definition of L^{p,q}(ℝ^{d+1}) is as follows. Given a function f on ℝ × ℝ^d, define

‖f‖_{L^{p,q}} = ( ∫_ℝ ( ∫_{ℝ^d} |f(x, y)|^q dy )^{p/q} dx )^{1/p},

and let L^{p,q}(ℝ^{d+1}) consist of all measurable f with ‖f‖_{L^{p,q}} < ∞. We define the mixed sequence spaces ℓ^{p,q}(ℤ^{d+1}) analogously: a sequence c = (c(k, j))_{k∈ℤ, j∈ℤ^d} belongs to ℓ^{p,q}(ℤ^{d+1}) if

‖c‖_{ℓ^{p,q}} = ( ∑_{k∈ℤ} ( ∑_{j∈ℤ^d} |c(k, j)|^q )^{p/q} )^{1/p} < ∞.

The norms are defined as above, with the usual modification in the case p = ∞ or q = ∞. L^{p,q} is a generalization of L^p (for the definition of L^p, see [14, Sect. 1]); clearly, L^{p,p}(ℝ^{d+1}) = L^p(ℝ^{d+1}). For a given sequence c and a function φ, the semi-discrete convolution is defined by

(c *_{sd} φ)(x) = ∑_{k∈ℤ^{d+1}} c(k) φ(x - k).

The space WC^{p,q} consists of all distributions whose Fourier coefficients belong to ℓ^{p,q}; when p = q = 1, WC^{1,1} becomes the Wiener class WC. The following proposition shows that the shift invariant subspaces in L^{p,q} (1 < p, q < ∞) are well defined.
(i) V^{p,q}(Θ) is closed in L^{p,q}.
(ii) There exist some positive constants C_1 and C_2 satisfying C_1 ‖c‖_{(ℓ^{p,q})^{(r)}} ≤ ‖∑_{j=1}^r c_j *_{sd} θ_j‖_{L^{p,q}} ≤ C_2 ‖c‖_{(ℓ^{p,q})^{(r)}}.
(iii) There exist constants B_1, B_2 > 0 satisfying B_1 ‖f‖_{L^{p,q}} ≤ inf{ ∑_{j=1}^r ‖c_j‖_{ℓ^{p,q}} : f = ∑_{j=1}^r c_j *_{sd} θ_j } ≤ B_2 ‖f‖_{L^{p,q}} for all f ∈ V^{p,q}(Θ).

The paper is organized as follows. In the next section, we give three useful lemmas and two propositions. In Sect. 3, we give the proof of Theorem 1.5. Finally, concluding remarks are presented in Sect. 4.

Some useful lemmas and propositions

In this section, we give three useful lemmas and two propositions which are needed in the proof of Theorem 1.5.

Proposition 2.1 ([1, Lemma 1]) Let Θ ∈ (L²)^{(r)}. Then the following are equivalent: … (ii) there exist some positive constants C_1 and C_2 such that C_1 ‖c‖_{(ℓ²)^{(r)}} ≤ ‖∑_{j=1}^r c_j *_{sd} θ_j‖_{L²} ≤ C_2 ‖c‖_{(ℓ²)^{(r)}}.

Then there exist a finite index set Λ, η_λ ∈ [−π, π]^{d+1}, 0 < δ_λ < 1/4, nonsingular 2π-periodic r × r matrices P_λ(ξ) with all entries in the Wiener class, and K_λ ⊂ ℤ^{d+1} with cardinality(K_λ) = k_0 for all λ ∈ Λ, having the following properties: … where B(x_0, δ) denotes the open ball in ℝ^{d+1} with center x_0 and radius δ, and where Θ_{1,λ} and Θ_{2,λ} are functions from ℝ^{d+1} to ℂ^{k_0} and ℂ^{r−k_0}, respectively, satisfying ….

The following lemma can be proved similarly to [7, Theorem 3.4], and we leave the details to the interested reader.

Lemma 2.4 Let c ∈ ℓ¹. Then one has: ….

Proof (i) By Young's inequality and the triangle inequality, one has ….

Proof of Theorem 1.5

In this section, we give the proof of Theorem 1.5. The main steps of the proof are as follows:

(iv) ⇒ (iii): Conversely, if f = ∑_{j=1}^r c_j *_{sd} θ_j, then, by Proposition 1.3 and the triangle inequality, one obtains inequality (3.1). Taking the infimum in (3.1), one gets the corresponding estimate. Let B_1 = 1/max_{1≤j≤r} ‖θ_j‖_{L^{p,q}} and B_2 = ∑_{j=1}^r ‖ψ_j‖_{L^{∞,∞}}.
Then one has (iii). For convenience, let T : (ℓ^{p,q})^{(r)} → V^{p,q}(Θ) be the mapping defined by TC = ∑_{j=1}^r c_j *_{sd} θ_j for C = (c_1, …, c_r), and let

‖f‖_inf = inf{ ∑_{j=1}^r ‖c_j‖_{ℓ^{p,q}} : f = ∑_{j=1}^r c_j *_{sd} θ_j }.

Then, obviously, ‖·‖_inf is a norm. Assume {f_n} ⊂ Ran(T) (n ≥ 1) is a Cauchy sequence, where Ran(T) denotes the range of T. Without loss of generality, let ‖f_n − f_{n−1}‖_inf < 2^{−n}. Using the definition of ‖·‖_inf, there is C_n ∈ (ℓ^{p,q})^{(r)} (n ≥ 2) such that TC_n = f_n − f_{n−1} and ‖C_n‖_{(ℓ^{p,q})^{(r)}} < 2^{−n} for any n ≥ 2. By the completeness of (ℓ^{p,q})^{(r)} and ∑_{n=2}^∞ ‖C_n‖_{(ℓ^{p,q})^{(r)}} < ∞, one has Z = ∑_{n=2}^∞ C_n ∈ (ℓ^{p,q})^{(r)} and f_1 + TZ ∈ Ran(T). Note that ‖TC‖_inf ≤ ‖C‖_{(ℓ^{p,q})^{(r)}} for any C ∈ (ℓ^{p,q})^{(r)}. One has ‖f_n − (f_1 + TZ)‖_inf → 0 when n → ∞. Therefore, Ran(T) is closed. Since V^{p,q}(Θ) = Ran(T), one sees that V^{p,q}(Θ) is closed.

Concluding remarks

In this paper, we study the closedness of shift invariant subspaces in L^{p,q}(ℝ^{d+1}). We first define the shift invariant subspaces generated by the shifts of finitely many functions in L^{p,q}(ℝ^{d+1}). Then we give some necessary and sufficient conditions for the shift invariant subspaces in L^{p,q}(ℝ^{d+1}) to be closed. In this paper, however, we only consider the closedness of shift invariant subspaces of L^{p,q}(ℝ^{d+1}). Studying the L^{p,q}-frames in a shift invariant subspace of the mixed Lebesgue space L^{p,q}(ℝ^d) is the goal of future work.

Funding

This work was supported partially by the National Natural Science Foundation of China under Grants Nos. 11371200, 11326094 and 11401435. This work was also partially supported by the Program for Visiting Scholars at the Chern Institute of Mathematics.
The Processes of Change Scale: A Confirmatory Study of the Malay Language Version

Background: Processes of change (POC) comprise one of the psychological constructs in the Transtheoretical Model. The objective of this study is to test the validity and reliability of the Malay version of the POC scale among university students by using a confirmatory approach.

Method: A cross-sectional study design with a convenience sampling method using a self-administered questionnaire was carried out. University undergraduate students were approached to fill in the questionnaire, which consisted of demographic information and a POC scale. The POC scale consisted of 30 items and two main factors (i.e., cognitive and behavioural). The POC scale was translated into the Malay language using a standard procedure of forward and backward translation. Confirmatory factor analysis (CFA) was performed, and composite reliability was computed using Mplus version 8.

Results: A total of 620 respondents with a mean age of 20 years (standard deviation = 1.15) completed the questionnaire. Most of the participants were female (74.7%) and Malay (78.2%). The initial CFA model of the POC scale did not exhibit fit based on several fit indices (comparative fit index (CFI) = 0.880, Tucker-Lewis index (TLI) = 0.867, standardised root mean square residual (SRMR) = 0.075 and root mean square error of approximation (RMSEA) = 0.058). Several re-specifications of the model were conducted, and the modifications included adding correlations between the items' residuals. The final model for the Malay version of the POC scale showed acceptable values of model fit indices (CFI = 0.922, TLI = 0.911, SRMR = 0.064 and RMSEA = 0.048). The composite reliability of both the cognitive and behavioural processes was acceptable, at 0.856 and 0.752, respectively.

Conclusion: The final model presented acceptable values of the goodness of fit indices, indicating that the scale is fit and acceptable to be adopted for future study.

Introduction

The transtheoretical model (TTM), also known as the stage of change (SOC) model, finally matured in the 1990s (1). According to Kim (2), "TTM accounts for the dynamic nature of health behaviour change including exercise, and recognises that individuals often must make several attempts at behaviour change before they are successful." TTM consists of four core constructs:

A. The six stages of exercise behaviour change (3): i) pre-contemplation; ii) contemplation; iii) preparation; iv) action; v) maintenance; and vi) relapse;
B. The psychological construct of processes of change (POC), consisting of overt and covert activities that individuals utilise to modify their behaviour;
C. Decisional balance, involving the perceived 'pros' (advantages) and 'cons' (disadvantages) of continuing a current behaviour or adopting a new behaviour (4); and
D. Self-efficacy, i.e. how well one can execute the courses of action required to deal with prospective situations (5).

Processes of change comprise one of the psychological constructs in TTM, which was developed by Prochaska and DiClemente (6). It is one of the most well-known motivational models used to promote physical activity (PA) programmes (7). Kim (2) has defined TTM as "a contemporary psychological framework that attempts to explain intentional health behaviour adoption and maintenance as a process that occurs over time as a function of behavioural history and motivation."
POC has been used along with other TTM constructs to examine the factors influencing the amount of PA performed among university and school students (8, 9). Individuals adopt POC as schemes or methods to adjust their own behaviour (3). POC can be classified into either experiential (cognitive) processes or behavioural processes (2, 10). Experiential processes comprise consciousness raising, dramatic relief, self-reevaluation, environmental reevaluation and self-liberation, which derive from an individual's own actions, while information for the behavioural processes is obtained through environmental events, including social liberation, counter conditioning, stimulus control, reinforcement management and helping relationships. For decades, PA and exercise have been empirically accepted by clinicians and researchers as improving the health status of the general population. The World Health Organization (WHO) has stated that physical inactivity is one of the major causes of various types of non-communicable diseases (11). Since then, countries around the world have been promoting exercise to reduce the prevalence of morbidity and mortality due to physical inactivity. Despite this, it is not only the adult and elderly populations whose PA levels are limited: a high number of studies have reported that most of the world's youth population is also physically inactive (12, 13). In Malaysia, reports from the National Health and Morbidity Survey 2017 showed that only 45% of Malaysian students are physically active (14). Thus, aside from the existing methods, new approaches and instruments should be brought in to improve the prevalence of PA among students in Malaysia. With this in mind, studies have shown that POC could predict an individual's PA behaviour (15-17). Previous studies have also emphasised that both experiential and behavioural processes are required for someone who is trying to begin, improve or sustain PA (18-21). Therefore, by applying POC in the context of assessing Malaysian students, we could potentially predict their PA behaviour. Researchers and healthcare policy makers could then improve Malaysian students' PA participation with more suitable and effective methods of intervention that target individuals' cognitive and behavioural processes. Although one study has tested the validity of the short version of the POC scale among Malaysian primary school children (9), the full POC scale in the Malay language has not been tested among Malaysian adults. Considering that the POC is relatively new in Malaysia, we decided to translate the English version of the POC scale into the Malay language. Hence, the scale can be used not only for university or college students, but also to assess the POC of primary and secondary students. Therefore, the present study aimed to test the validity and reliability of the Malay version of the POC scale among university undergraduate students at Universiti Sains Malaysia using a confirmatory approach.

Study Design, Procedures and Participants

Data collection was carried out at the Health Campus of Universiti Sains Malaysia, Kubang Kerian, Kelantan. A cross-sectional study design with a convenience (non-probability) sampling method was used among university undergraduate students from March to April 2019. Convenience sampling was used due to its simplicity in recruiting participants and because data collection takes minimal time.
A large sample size is preferable for confirmatory factor analysis (CFA), and this sampling method allowed the researchers to obtain a sufficient number of participants to complete the questionnaire. The university students who volunteered were undergraduates from different courses, including medicine, nursing, dentistry, sports science, biochemistry and others. The students were approached to participate in this study after their lectures at the university. The participants were briefed on the study, and those who wished to participate were asked to answer the self-administered questionnaire. The participants took between 15 min and 20 min to complete the questionnaire. Instrument The questionnaire consists of two sections: the demographic section and the POC scale. In the demographic section, information such as age (years), gender, ethnicity, BMI and PA duration per week was collected. POC The scale was developed by Nigg et al. (22). The 30-item questionnaire is rated on a 5-point Likert scale ranging from 1 (never) to 5 (repeatedly). It measures ten factors: consciousness raising, dramatic relief, self-reevaluation, environmental reevaluation, social liberation, counter-conditioning, helping relationships, reinforcement management, stimulus control and self-liberation. The internal consistency reliability was reported to be 0.6 and 0.9 for the two higher-order factors, cognitive and behavioural processes, respectively (22). Translation of POC Scale into Malay Language The original English version of the POC scale was translated into the Malay language using the forward and backward standardised procedures outlined by Brislin (23). The steps were as follows: i) one of the bilingual authors familiar with the content carried out the forward translation into the Malay language, based on the principle of retaining meaning rather than producing literal word-for-word translations; ii) another bilingual expert back-translated the Malay version into English; iii) a panel of three experts from the areas of sport psychology, sport sciences and psychometric evaluation, each with over 10 years of experience in their area of expertise, reviewed and examined the forward- and backward-translated versions. All panel members were competent in both Malay and English. They reviewed the English translation of the Malay version and the Malay translation of the English version, comparing each item to the corresponding item in the original English version. They noted any deviations in meaning and finalised the Malay version of the POC scale (POC-M). The expert panel was then asked to assess whether the contents were culturally appropriate for the Malaysian population. The final version of the POC-M was pretested among 30 university students for clarity, comprehension and understanding. The result of the pre-test was found to be acceptable, and no modifications were necessary. The POC-M was used in the present study. Data Analysis CFA was performed using Mplus version 8. Demographic data were presented as descriptive statistics: categorical variables as frequencies and percentages, and numerical variables as means and standard deviations (SD). The data were checked for multivariate normality, and the results indicated that the data did not meet the assumption, based on Mardia's multivariate skewness (P < 0.001) and kurtosis (P < 0.001) tests.
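A normality check of this kind can be reproduced outside Mplus. The following is a minimal sketch of Mardia's multivariate skewness and kurtosis tests in Python (NumPy/SciPy); the simulated data stand in for the POC-M responses, and the small-sample corrections some packages apply are omitted.

```python
import numpy as np
from scipy import stats

def mardia_tests(X):
    """Mardia's multivariate skewness and kurtosis tests.

    X: (n, p) array of item responses. Returns the skewness
    statistic and p-value and the kurtosis z-score and p-value.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)                     # centre the data
    S_inv = np.linalg.pinv(np.cov(X, rowvar=False, bias=True))
    D = Xc @ S_inv @ Xc.T                       # Mahalanobis cross-products

    b1p = (D ** 3).sum() / n**2                 # multivariate skewness
    skew_stat = n * b1p / 6.0
    skew_df = p * (p + 1) * (p + 2) / 6
    skew_p = stats.chi2.sf(skew_stat, skew_df)

    b2p = (np.diag(D) ** 2).mean()              # multivariate kurtosis
    z = (b2p - p * (p + 2)) / np.sqrt(8.0 * p * (p + 2) / n)
    kurt_p = 2 * stats.norm.sf(abs(z))

    return skew_stat, skew_p, z, kurt_p

# Example with simulated 30-item Likert-type data (placeholder for POC-M data).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(620, 30)).astype(float)
print(mardia_tests(X))
```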
Therefore, for the subsequent CFA, the robust maximum likelihood estimator (MLR) was utilised, as it is robust to non-normality (24)(25)(26). Researchers have suggested presenting more than one fit index in order to demonstrate the validity of a questionnaire (27). For the 30-item POC-M measurement model in the present study, the fit indices and their acceptable threshold values were as follows: the comparative fit index (CFI) and Tucker-Lewis index (TLI), with desired values of more than 0.92; the root mean square error of approximation (RMSEA), with a desired value of less than 0.08; probability RMSEA, with a desired value of more than 0.07; and the standardised root mean square residual (SRMR), with a desired value of less than 0.08 (28). Modification of the model may be required to achieve the recommended cut-off values for the model fit indices. Modifications include removing items with low factor loadings and adding correlations between item residuals. An item with a factor loading of less than 0.4 was treated as problematic (29,30) and was considered for removal once adequate theoretical justification had been established. Correlations between the residuals of items belonging to the same factor could be added to the model to improve the fit indices. To assess the convergent validity of the scale, the average variance extracted (AVE) and composite reliability (CR) of the final measurement model were computed. Sample Size In CFA, larger samples generally produce more stable solutions and are more likely to be replicable. According to Hair et al. (28), with more than six factors, sample size requirements may exceed 500. In the present study, the POC-M consists of more than six factors, so we considered the sample size of 620 to be sufficiently large for a confirmatory study using CFA. Demographic Characteristics A total of 620 students volunteered and participated in the present study. There were 157 (25.3%) male students and 463 (74.7%) female students, with a mean age of 20 years (SD = 1.15). Among the 620 students, 485 (78.2%) were Malay, 97 (15.6%) were Chinese, 29 (4.7%) were Indian and 9 (1.5%) were of other ethnicities. Participants had a mean exercise duration of 53 min per week (SD = 33.4). Table 1 shows the distribution of the item scores given by the students. On the whole, the majority of students answered 'occasionally' for all items. Item PC11 (Saya percaya bahawa senaman yang tetap akan membuat saya menjadi lebih sihat dan lebih gembira.) received the lowest number of 'never' responses, indicating that most of the students believe that PA could lead them to a healthier and happier lifestyle. Item PC29 (Saya menggunakan kalendar untuk menetapkan waktu senaman saya.) received the highest number of 'never' responses, probably because most of the students prefer to do PA during their leisure time. The highest and lowest numbers of 'repeatedly' responses were for items PC11 and PC29, respectively, suggesting that students are convinced of the benefits of PA for health status but practise it only during their free time. Descriptive Statistics We noticed a floor effect on item PC29, for which most of the students answered 'never'. However, the scores of the remaining items were normally distributed, and no other ceiling or floor effects were found in the Malay version of the POC scale.
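Floor and ceiling effects of this kind are usually screened by computing the proportion of respondents at each scale extreme, often flagging items above a 15% threshold. A minimal sketch is shown below; the DataFrame and file name are placeholders for the POC-M item data.

```python
import pandas as pd

def floor_ceiling_report(df, low=1, high=5, threshold=0.15):
    """Flag items whose proportion of responses at the scale
    minimum (floor) or maximum (ceiling) exceeds `threshold`."""
    report = pd.DataFrame({
        "floor": (df == low).mean(),      # share of responses at the minimum
        "ceiling": (df == high).mean(),   # share of responses at the maximum
    })
    report["flag"] = (report[["floor", "ceiling"]] > threshold).any(axis=1)
    return report

# Example usage with placeholder data for the 30 POC-M items:
# items = pd.read_csv("poc_m_responses.csv")   # hypothetical file, columns PC1..PC30
# print(floor_ceiling_report(items).query("flag"))
```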
Confirmatory Factor Analysis The POC-M consists of 30 items with ten first-order factors (consciousness-raising, dramatic relief, self-reevaluation, environmental reevaluation, self-liberation, social liberation, counter-conditioning, stimulus control, reinforcement management and helping relationships) that load onto two second-order factors (experiential/cognitive and behavioural processes). In the initial hypothesised measurement model, the factor loadings of all items were higher than 0.4 (Table 2). However, two of the model fit indices (the CFI and TLI) did not reach acceptable values (CFI = 0.880, TLI = 0.867, SRMR = 0.075, RMSEA = 0.058) (Table 3). As the items' factor loadings were all higher than 0.4, no item needed to be removed. Thus, the output of the initial model was investigated further, and a number of residual correlations between items and between first-order factors with high modification index values (greater than 10) were found. As mentioned above, the addition of these residual correlations to the model could improve the model fit indices; thus, we decided to include them in the model. The residual correlations were added iteratively, starting with the highest value and proceeding to lower values, until the model fit the data. The final model, with acceptable goodness-of-fit indices, was achieved after 11 residual correlations between items and between first-order factors had been added (Table 3). Composite reliability (CR) of the second-order factors in the final model was calculated based on Raykov's method (31). There are two opinions on the cut-off point for composite reliability: > 0.6 according to Tseng et al. (32) and > 0.7 according to Hair et al. (28). The composite reliabilities of the second-order factors in the final model were 0.85 for cognitive processes and 0.86 for behavioural processes (Table 4). The AVE values are also provided in Table 4; both factors had AVE values of more than 0.50.
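Given a vector of standardized loadings, composite reliability and AVE of the kind reported in Table 4 can be computed in a few lines. The sketch below uses the common congeneric-model formula (a simplification of Raykov's approach, which we assume here for illustration); the loading values are invented, not the published POC-M estimates.

```python
import numpy as np

def composite_reliability(loadings):
    """CR for a congeneric factor: (sum of loadings)^2 divided by
    (sum of loadings)^2 plus the summed residual variances."""
    lam = np.asarray(loadings, dtype=float)
    theta = 1.0 - lam**2            # residual variances under standardization
    return lam.sum()**2 / (lam.sum()**2 + theta.sum())

def average_variance_extracted(loadings):
    """AVE: the mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam**2)

# Illustrative standardized loadings of first-order factors on a
# second-order factor (not the published POC-M estimates):
loadings = [0.72, 0.68, 0.75, 0.70, 0.66]
print(f"CR  = {composite_reliability(loadings):.3f}")
print(f"AVE = {average_variance_extracted(loadings):.3f}")
```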
Discussion Through the adoption of CFA, the current study aimed to establish the validity and reliability of the POC-M scale among undergraduate students at the Health Campus of Universiti Sains Malaysia. After the addition of 11 residual correlations between items and between first-order factors to the model, we obtained acceptable model fit indices. No problematic items with loadings lower than 0.4 were found; hence, no item had to be removed to achieve the required model fit. From the results, we can see that there were more female participants (74.7%) than male participants (25.3%). This can be explained by the typical composition of the student population in local universities, where females commonly outnumber males. Moreover, as the questionnaire was completed by students who volunteered to participate, female students were more cooperative than males. The obtained model fit indices for the present study show that all indices met the values suggested by Hair et al. (28) except for the TLI. Nonetheless, the acceptable values for model fit indices vary with the viewpoints of different researchers around the globe. As an example, the cut-off point for the CFI (33) was initially 0.9. After some revision, Hu and Bentler (34) raised the cut-off point for the CFI and TLI to 0.95 or higher. In another opinion, McDonald and Ho (35) stated that a value higher than 0.9 for the CFI is acceptable. The guideline on model fit indices by Hooper et al. (36) indicates 0.8 as a recommended cut-off point for the TLI, also known as the NNFI (non-normed fit index). Hence, the model fit indices obtained in the present study can be considered acceptable and indicate that the model fits the data well. Initially, POC was commonly adopted in assessing smoking cessation among adults (37,38). Back then, it was difficult to find a study applying the TTM, and specifically POC, to PA in adult or adolescent populations, other than the study by Nigg and Courneya (39). By now, a number of studies have adopted the POC scale for PA behaviour among college students (40), adolescents (41) and African Americans (42). The POC scale for exercise behaviour was initially developed with 40 items and validated in the English language. It was later revised and simplified to 30 items (22). As it became widely adopted around the world, the POC scale for PA was translated into different languages, such as Persian (43), Korean (2), Finnish (44) and French (45). For the French version (45), two different studies were carried out. In the first study, item/scale descriptive statistics were computed and all 30 items were retained. In the second study, five models with different structures were assessed: Model A (ten first-order factors subordinate to two correlated second-order factors), Model B (ten fully correlated factors), Model C (two second-order factors and eight first-order factors), Model D (nine first-order factors subordinate to two correlated second-order factors) and Model E (five fully correlated factors), among 671 participants recruited through online and paper-and-pencil approaches. As the main objective was to compare the existing models in order to validate the French POC scale, multiple CFAs were implemented. The final results showed that only Model A and Model E presented moderate fit to the data. Model E was the best model, with χ2 = 653.46, CFI = 0.920, SRMR = 0.056 and RMSEA = 0.060. Like other studies, the present study faced some limitations. As the participants were only undergraduate students of the Health Campus of Universiti Sains Malaysia, the generalisability of the present study is limited. Although all Malaysians can read and understand the Malay language well, differences in age, socio-economic and educational backgrounds, and other factors may lead to different understandings and interpretations of the questionnaire items. Insincerity and dishonesty could introduce bias when a self-administered approach is applied, which may threaten the reliability of the instrument; however, beforehand, students were encouraged to answer the questionnaire sincerely and honestly. They were also asked to answer the questionnaire without discussing it with other participants. All of these precautions were applied to reduce bias as far as possible. In terms of gender and ethnicity, the participants were predominantly female and Malay. Therefore, invariance tests between genders and ethnicities could not be carried out. Future research should employ a sampling method that achieves a better balance between male and female participants and across ethnicities. Conclusion The objective of the current study, validating the Malay version of the POC scale, was achieved.
The final model presented acceptable goodness-of-fit indices, indicating that the scale is valid and suitable for adoption among Malaysian university students. Thus, it can be beneficial in assessing POC towards PA in the target population. Nonetheless, it is strongly suggested that future studies identify better sampling methods in order to collect a more broadly representative sample of the population. More universities, colleges and schools should be involved if the target population is students. A variety of socio-economic and educational backgrounds, rural and urban areas, and different age groups also need to be considered in order to produce a better study sample. Hence, the results of future studies will be more comprehensive and could be generalised to the whole Malaysian population. Ethics of Study The study obtained approval from the Universiti Sains Malaysia Human Research Ethics Committee (USM/JEPeM/18070305) and was conducted in accordance with the guidelines of the International Declaration of Helsinki. Participants were informed that their participation was voluntary and that they could withdraw at any time without any loss or penalty. Participants who volunteered to take part in the study completed the questionnaire, which included the demographic sheet and the POC-M. Implied consent was obtained when participants completed and returned the questionnaire to the researchers. Conflict of Interest None. Funds None.
Prognostic Impact of Pattern of Mandibular Involvement in Gingivo-Buccal Complex Squamous Cell Carcinomas: Marrow and Mandibular Canal Staging System Purpose To study the pattern of mandibular involvement and its impact on oncologic outcomes in patients with gingivo-buccal complex squamous cell carcinoma (GBC-SCC), to propose a staging system based on the pattern of bone involvement (MMC: Marrow and mandibular canal staging system), and to compare its performance with the 8th edition of the American Joint Committee on Cancer staging system (AJCC8). Methods This retrospective observational study included treatment-naïve GBC-SCC patients who underwent preoperative computed tomography (CT) imaging between January 1, 2012, and March 31, 2016, at a tertiary care cancer center. Patients with T4b disease with high infratemporal fossa involvement, maxillary erosion, or follow-up of less than a year were excluded. The chi-square or Fisher's exact test was used for descriptive analysis. Kaplan-Meier estimates and the log-rank test were used for survival analysis. Multivariate analysis was done using Cox regression after adjusting for other prognostic factors. A p-value <0.05 was considered significant. Based upon the survival analysis of the different patterns of bone invasion, a new staging system was proposed, the "MMC: Marrow and mandibular canal staging system". The Akaike information criterion (AIC) was used to compare the relative fit of the staging systems (MMC versus TNM staging per the AJCC8) with respect to survival parameters. Results A total of 1,200 patients were screened; 303 patients were included in the study. On radiology review, mandibular bone was involved in 62% of patients. The pattern of bone involvement was as follows: deep cortical bone erosion (DCBE) in 23%, marrow in 34%, and marrow with the mandibular canal in 43% of patients. Patients with DCBE and no bone involvement (including superficial cortical) had similar survival [disease-free survival (DFS) and locoregional recurrence-free survival (LRRFS)], and this was significantly better than in those with marrow involvement with or without mandibular canal involvement (for both DFS and LRRFS). Patients with DCBE were staged using the MMC, and when compared with the AJCC8, the MMC system was better for the prediction of survival outcomes, as its AIC values were lower than those of the AJCC8. There was a significant association (p = 0.013) between the type of bone involvement and the pattern of recurrence. Conclusions For GBC-SCC, only marrow involvement, with or without mandibular canal involvement, is associated with poorer survival outcomes. As compared with the AJCC8, the proposed Mahajan et al. MMC staging system, which downstages DCBE, correlates better with survival outcomes.
INTRODUCTION Squamous cell carcinoma is the most common histology among oral cavity cancers. A multitude of factors impact the prognosis of patients with these tumors. Amongst these, mandibular bone erosion (through the cortical bone of the mandible: deep cortical and/or marrow) has been found to be an important factor (1)(2)(3)(4)(5). According to widely accepted staging systems, its presence is considered stage T4a (6). The probability of mandibular bone erosion is higher with buccal mucosa lesions in close proximity to the mandible and with gingival cancers, where it occurs through invasion of the mandible via the occlusal surface (7)(8)(9). Over recent years, it has often been argued that mandibular bone erosion needs to be characterized further. The Japan Society for Oral Tumors (JSOT) has defined T4 cancer as invasion of the mandibular canal (10)(11)(12). Ebrahimi et al. based the T stage on size and depth of invasion for tumor categories T1-T3 and on the presence of marrow invasion for T4 (13). In contrast, a few reports have suggested that tumor size correlates with adverse prognosis and that bone invasion is not an independent predictor of survival (14)(15)(16). On the contrary, some studies have reported that tumor size and marrow invasion are independent predictors of reduced survival (13,(17)(18)(19). In view of such varied evidence and the lack of clarity, this study aims to evaluate the association of various patterns of mandibular bone involvement with survival. Based upon the findings, we also endeavored to develop a staging system that better reflects the implications of the various types of bone invasion assessed on imaging: superficial cortical erosion (erosive bony involvement), deep cortical erosion (infiltrative bony involvement), marrow involvement (infiltrative bony involvement), and mandibular canal involvement (infiltrative bony involvement). MATERIAL AND METHODS This is a retrospective study of treatment-naïve gingivo-buccal complex squamous cell carcinoma (GBC-SCC) patients who underwent preoperative CT imaging between January 1, 2012, and March 31, 2016, at a tertiary care cancer center. Patients who underwent treatment with curative intent were included. Since surgery is the mainstay of treatment for these cancers, only those patients who underwent definitive surgical management at our center were included in the study. Patients Overall, 1,200 patients were screened. We excluded patients with stage T4b disease with high infratemporal fossa involvement, patients with maxillary erosion, those with follow-up of less than 1 year, and cases where digital imaging and communications in medicine (DICOM) images were not available for review. Analysis was performed on 303 patients (Figure 1). Institutional Ethics Committee approval was obtained. Since this is a retrospective study, a waiver of consent was granted. The demographic, treatment, histopathological, and follow-up details were obtained from the electronic medical records. Image Evaluation Two senior head and neck radiologists with over 10 and 6 years of experience and one junior radiologist with over 3 years of experience reviewed the CT images independently (AbM, NS, and ND, respectively). The imaging review was performed on reconstructed DICOM data.
The soft-tissue algorithm and the bone window or bone algorithm reformations and axial images were analyzed on a volume viewer integrated within the picture archiving and communication system (PACS) using triangulation. The patterns of bone involvement reported on imaging were as follows. Erosive infiltration referred to superficial cortical erosion, i.e., subtle outer cortical erosion without a complete breach. Infiltrative invasion included deep cortical erosion, with an outer cortical breach and disease reaching up to the inner cortical layer; marrow involvement, with disease eroding both cortices and reaching the mandibular marrow; and mandibular canal involvement, with disease eroding the inferior alveolar canal, obliteration of fat, or excessive enhancement within the mandibular foramen, with or without widening or erosion of the foramen, the latter findings being regarded as perineural spread. Figure 2 shows a line diagram of the described patterns of mandibular involvement. As the 8th edition of the American Joint Committee on Cancer staging system (AJCC8) does not consider superficial cortical erosion grounds for upstaging the disease, patients with superficial cortical erosion were grouped with patients having no bone erosion. Pathology Evaluation The pathology reports of all tumors exhibiting bone invasion on imaging were reviewed. Bone invasion was categorized as present or absent in the final report. In cases where there was inadequate information regarding the extent of bone invasion, a second review of the pathology slides was performed by a senior head and neck pathologist (SR, AP, and MB). Statistical Considerations The analysis was performed using SPSS version 21 (IBM Corp) and R software. The chi-square or Fisher's exact test was used for descriptive analysis. Overall survival (OS) was calculated from the date of surgery to death due to any cause. Disease-free survival (DFS) was defined from the date of surgery to any disease recurrence. Locoregional recurrence-free survival (LRRFS) was calculated from the date of surgery to locoregional recurrence. Patients were censored if they were lost to follow-up, or on the last follow-up date if the event had not occurred. Kaplan-Meier estimates and the log-rank test were used for survival analysis. Multivariate analysis was done using Cox regression after adjusting for other prognostic factors. A p-value <0.05 was considered significant. MMC: Marrow and Mandibular Canal Staging System Based upon the survival analysis of the different patterns of bone invasion, a new staging system was proposed, the "MMC: Marrow and mandibular canal staging system" (Table 1). Patients with no bone erosion/superficial cortical erosion and those with deep cortical bone erosion were staged based on the size and depth of invasion. Only marrow invasion, with or without mandibular canal involvement, was considered T4a. The patients were restaged according to this system, and this staging system was compared with the AJCC8 staging system. The Akaike information criterion (AIC) was used to compare the relative fit of the staging systems (MMC versus TNM staging per the AJCC8) with respect to OS, DFS, and LRRFS. The AIC estimates the best-fitting model relative to other models, thus providing a means for model selection. R software and the survival package were used to calculate the AIC values.
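The study performed these computations in SPSS and R's survival package; for readers working in Python, the sketch below shows the equivalent survival comparisons and AIC-based model comparison using the lifelines package. The DataFrame, file name, and column names are placeholders, not the study's actual data, and the T categories are assumed to be encoded as ordinal integers.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Placeholder cohort: follow-up time (months), event indicator, bone pattern,
# and the T category assigned under each staging system (AJCC8 vs. MMC).
df = pd.read_csv("gbc_scc_cohort.csv")  # hypothetical file

# Kaplan-Meier estimates by pattern of bone involvement.
for name, g in df.groupby("bone_pattern"):
    kmf = KaplanMeierFitter().fit(g["dfs_months"], g["dfs_event"], label=name)
    print(name, "median DFS:", kmf.median_survival_time_)

# Log-rank test: marrow/canal involvement vs. none/deep cortical erosion.
a = df[df["bone_pattern"] == "marrow_or_canal"]
b = df[df["bone_pattern"] == "none_or_dcbe"]
print(logrank_test(a["dfs_months"], b["dfs_months"],
                   a["dfs_event"], b["dfs_event"]).p_value)

# Cox models under each staging system; a lower AIC indicates better fit.
for stage_col in ["t_ajcc8", "t_mmc"]:  # ordinal integer T categories
    cph = CoxPHFitter()
    cph.fit(df[["dfs_months", "dfs_event", stage_col]],
            duration_col="dfs_months", event_col="dfs_event")
    print(stage_col, "AIC:", cph.AIC_partial_)
```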
Patterns of Bone Involvement According to the radiology review, mandibular bone was involved in 187 (62%) patients. Out of these, deep cortical erosion was seen in 43 (23%), marrow involvement in 64 (34%), and mandibular canal involvement in 80 (43%) patients. Survival Analysis In our study, the mean OS was 26 months, the mean DFS was 24.6 months, and the mean LRRFS was 24.7 months. The cohorts were stratified based on the type of bone erosion. [Table 1 (fragment), MMC T categories: T3: tumor >4 cm in greatest dimension with DOI >10 mm; T4a: tumor invades the mandibular marrow*, with or without the mandibular canal, maxillary sinus, retroantral fat, or skin of the face; T4b: tumor invades the masticator space, pterygoid plates, or skull base, or encases the internal carotid artery. DOI, depth of invasion. *Deep cortical erosion is not considered T4a in the marrow and mandibular canal (MMC) staging system.] When the patients were stratified based on extracapsular spread (ECS), DFS and LRRFS were statistically worse in patients with marrow/canal involvement than in those with no bone erosion/deep cortical erosion in the ECS-negative subgroup (p = 0.023 and p = 0.013, respectively). However, the difference between the two groups was not statistically significant (p = 0.389 for DFS; p = 0.641 for LRRFS) in the ECS-positive subgroup. The type of bone involvement was an independent prognostic factor for DFS on multivariate analysis after adjusting for known histopathological prognostic factors and retroantral fat involvement, p < 0.001 (Table 3). Other independent prognostic factors were retroantral fat involvement, skin involvement, and tumor grade. The type of bone involvement was the only independent prognostic factor for LRRFS on multivariate analysis, p < 0.001 (Table 4). Marrow and Mandibular Canal Classification: Stage Migration and Comparison With the 8th Edition of the American Joint Committee on Cancer As per the final histopathology report, patients were staged according to the AJCC8 and MMC classifications (Table 2). In the MMC classification, patients with deep cortex involvement were downstaged from T4 to the stage corresponding to the size of the tumor and depth of invasion. Out of 228 T4 patients (according to the AJCC8), 30 patients were downstaged to T1-T3: 6 were restaged as T1, 16 as T2, and 8 as T3. When the two staging systems were compared using the AIC, the MMC system turned out to be the better staging system for the prediction of survival, as the AIC values of the MMC staging system for LRRFS, DFS, and OS were lower than those of the AJCC8 (Table 5). Table 6 shows the pattern of recurrence with respect to the type of bone involvement. Further, we evaluated whether the type of bone erosion had any impact on the pattern of recurrence. Recurrence occurred in 23.3% of patients with deep cortical or no bone involvement versus 48.6% of patients with marrow and mandibular canal involvement, which was statistically significant (p = 0.023). There was also a statistically significant association between the type of bone erosion and the type of recurrence (p = 0.013).
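The restaging logic described above can be expressed compactly. The following sketch implements the MMC T-classification as we read it from Table 1, assuming AJCC8-style size/DOI cut-offs for T1-T3; it is an illustration of the rule, not a validated clinical tool.

```python
def mmc_t_category(size_cm, doi_mm, bone_pattern, t4b_extension=False):
    """MMC T category for GBC-SCC.

    bone_pattern: one of "none", "superficial_cortical",
    "deep_cortical", "marrow", "marrow_and_canal".
    Only marrow involvement (with or without canal) is T4a;
    superficial and deep cortical erosion are staged by size/DOI.
    """
    if t4b_extension:                      # masticator space, skull base, etc.
        return "T4b"
    if bone_pattern in ("marrow", "marrow_and_canal"):
        return "T4a"
    # AJCC8-style size/DOI rules (assumed here for T1-T3):
    if size_cm > 4 or doi_mm > 10:
        return "T3"
    if size_cm > 2 or doi_mm > 5:
        return "T2"
    return "T1"

# Example: a 3 cm tumor with 8 mm DOI and deep cortical erosion only
# is downstaged from AJCC8 T4a to MMC T2.
print(mmc_t_category(3.0, 8.0, "deep_cortical"))  # -> "T2"
```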
DISCUSSION The prognosis of oral squamous cell carcinoma depends upon a multitude of factors, several of which are included in the staging system. Bone invasion has long been considered an adverse prognostic factor, meriting adjuvant therapy (2). Over the last few decades, it has been shown that superficial cortical erosion in alveolar primaries does not portend a poorer prognosis; such tumors are, therefore, staged according to their size (13). Several studies have further tried to understand and characterize the type of bone erosion and its effect on prognosis (20,21). They differentiated bone erosion as erosive (superficial cortical erosion) or infiltrative (deep cortical, marrow, and mandibular canal involvement) and looked at the impact of each on prognosis and survival. It has been observed that cortical bone erosion may not impact survival and that only marrow invasion impacts prognosis. A recent meta-analysis found that only marrow invasion impacted overall and disease-free survival (22). On the contrary, a few other studies did not show such an association between any type of bone erosion and survival (14)(15)(16). This is probably why staging systems, rather than characterizing the type of bone erosion, continue to record mandibular bone erosion merely as present or absent. Studies on this aspect have looked at all subsites of the oral cavity combined. It is prudent to understand that a buccal mucosa or lower alveolus cancer is more likely to erode mandibular bone than a tongue cancer (8,9). They cannot be kept on the same pedestal when drawing any meaningful conclusions about upstaging the disease in the presence of bone erosion. Another important aspect that these studies have not considered is the pathological depth of invasion, which plays an important role in assessing prognosis and has recently been incorporated into the AJCC staging system (23,24). In our study, we utilized the AJCC8 to stage the patients; thus, the depth of invasion was included in the staging process. As mentioned earlier, we only included buccal mucosa and lower alveolus cancer patients in the study, which is the most relevant cohort. We also excluded patients with high infratemporal fossa involvement and maxillary erosion. This was done to exclusively analyze the prognostic impact of the type of mandibular bone erosion on survival. On multivariate analysis, the type of bone erosion had an independent prognostic impact on DFS and LRRFS (p < 0.001 and p < 0.001, respectively) after adjusting for other prognostic factors (Tables 3 and 4, respectively). Deep cortical erosion was associated with survival similar to that of cases with no bone erosion. In contrast, marrow and mandibular canal involvement had similar survival (DFS and LRRFS) to each other, which was statistically worse than that seen with deep cortical erosion and no bone erosion for both DFS and LRRFS (Figures 3, 4). Based on the results of the univariate analysis, patients with deep cortical or no bone involvement were grouped together, and patients with marrow and mandibular canal involvement were grouped together for further analysis. We found marrow and mandibular canal involvement to be associated with statistically significantly poorer DFS and LRRFS than no bone involvement or deep cortical erosion (p = 0.023 and p = 0.013, respectively). It has also been hypothesized by a few studies that mandibular canal involvement may be associated with higher chances of distant metastasis (25)(26)(27). In our study, we found a statistically significant association between the type of recurrence and the type of bone erosion (p = 0.013). A few studies have tried to restage the disease based upon the type of bone erosion. As per Ebrahimi et al., cortex involvement had an outcome similar to no bone involvement (13). They proposed a staging system in which the disease was upstaged by one T category in the presence of marrow invasion.
Another study proposed the JSOT classification, in which the tumor was classified as T4a only when there was involvement of the mandibular canal (10,11). Involvement of the mandibular marrow without canal involvement was classified according to size; however, these patients fared as badly as those with canal involvement. Bone erosion was completely ignored in another staging system, in which the classification was based upon soft tissue involvement alone (28); the authors did not consider bone involvement important for staging the tumor. In all these studies, cases without bone erosion were staged as per the size of the tumor, using the 7th edition of the AJCC, in which the impact of depth of invasion was not considered. In the present study, we staged the patients as per the AJCC8 and considered the depth of invasion for all patients. As per the AJCC8 classification system, a tumor is classified as T4a even on mandibular cortical involvement. However, the results of our study show that cortical involvement did not affect patient survival. Hence, we proposed the MMC classification system, in which tumors with superficial or deep cortical erosion are downstaged and staged solely upon their size and depth of invasion (Table 1). Only those with marrow involvement, with or without mandibular canal involvement, were staged as T4a. This staging was labeled MMC. The results of our study also show that T classification based upon the MMC staging was a better predictor of OS, DFS, and LRRFS than the AJCC8 (Table 5). A limitation of our study is that it is retrospective. We also did not study the impact of superficial bone erosion on prognosis. Moreover, information about the pattern of invasion on histopathology was not available for these patients. In spite of these limitations, this study provides a large sample size focusing on the relevant subsite of oral cancers, the buccal mucosa. CONCLUSION In this study, we found that for GBC-SCC, bone erosion with marrow as well as mandibular canal involvement, and not cortical erosion, is associated with poorer survival outcomes. Marrow involvement, with or without mandibular canal involvement, has a higher incidence of recurrence, and there was a statistically significant association between the type of bone involvement and the pattern of recurrence. T classification based upon the proposed Mahajan et al. MMC staging system, which downstages deep cortical bone involvement, is a better predictor of survival than the AJCC8. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by IEC TMC. The ethics committee waived the requirement of written informed consent for participation. AUTHOR CONTRIBUTIONS Study concept: ND, AbM, and AkM. Study design: ND, AbM, and AkM. Data acquisition: ND, AbM, and AS. Quality control of data algorithms: ND, AbM, AkM, and AB. Statistical analysis: ND, AbM, AkM, and AB. Manuscript preparation: all authors. Manuscript editing: ND, AbM, and AkM. Manuscript reviewing: all authors.
Bright Source of Cold Ions for Surface-Electrode Traps We produce large numbers of low-energy ions by photoionization of laser-cooled atoms inside a surface-electrode-based Paul trap. The isotope-selective trap loading rate of $4\times10^{5}$ Yb$^{+}$ ions/s exceeds that attained by photoionization (electron impact ionization) of an atomic beam by four (six) orders of magnitude. Traps as shallow as 0.13 eV are easily loaded with this technique. The ions are confined in the same spatial region as the laser-cooled atoms, which will allow the experimental investigation of interactions between cold ions and cold atoms or Bose-Einstein condensates. Among many candidate systems for large-scale quantum information processing, trapped ions currently offer unmatched coherence and control properties [1]. The basic building blocks of a processor, such as quantum gates [2], subspaces with reduced decoherence [3], quantum teleportation [4,5], and entanglement of up to eight ions [6,7] have already been demonstrated. Nevertheless, since a logical qubit will likely have to be encoded simultaneously in several ions for error correction [8,9], even a few-qubit system will require substantially more complex trap structures than currently in use. Versatile trapping geometries can be realized with surface-electrode Paul traps, where electrodes residing on a surface create three-dimensional confining potentials above it [10]. In contrast to three-dimensional traps [11,12,13], such surface traps can be patterned using standard lithographic techniques, and allow increased optical access to the ions and real-time control over their position in all directions. While the prospect of scalable quantum computing has been the main motivation for developing surface-electrode traps, it is likely that this emerging technology will have a number of other important, and perhaps more immediate, applications. Porras and Cirac have proposed using dense lattices of ion traps, where neighboring ions interact via the Coulomb force, for quantum simulation [14]. A lattice with a larger period, avoiding ion-ion interactions altogether, could allow the parallel operation of many single-ion optical clocks [15], thereby significantly boosting the signal-to-noise ratio. The increased optical access provided by planar traps could be used to couple a linear array of ion traps to an optical resonator and efficiently map the stored quantum information onto photons [16]. Since the surface-electrode arrangement allows one to move the trap minimum freely in all directions, ions can be easily embedded in an ensemble of cold neutral atoms for investigations of cold ion-atom collisions [17], charge transport [18], or even the interaction of a single ion with a Bose-Einstein condensate [19]. Compared to standard Paul traps [11,12,13], the open geometry of surface-electrode traps restricts the trap depth and increases the susceptibility to stray electric fields, making trap loading and compensation more difficult. Nevertheless, successful loading from a thermal atomic beam has recently been demonstrated using photoionization [20] or electron-impact ionization aided by buffer-gas cooling [21]. The former [22,23] is superior in that it provides faster, isotope-selective loading [20,24,25,26]. However, the loading rate and efficiency remain relatively low, and charge-exchange collisions make it difficult to load pure samples of rare isotopes [25].
In this Letter, we demonstrate that large numbers of low-energy ions can be produced by photoionization of a laser-cooled, isotopically pure atomic sample, providing a robust and virtually fail-safe technique to load shallow or initially poorly compensated surface ion traps. We achieve a loading rate of 4 × 10^5 Yb ions per second into a U0 = 0.4 eV deep printed-circuit ion trap, several orders of magnitude larger than with any other method demonstrated so far, and have directly loaded traps as shallow as U0 = 0.13 eV. The trapping efficiency for the generated low-energy ions is of order unity. We also realize the first system where ions are confined in the same spatial region as laser-cooled atoms, allowing for future experimental studies of cold ion-atom collisions. Efficient photoionization of Yb atoms is accomplished with a single photon from the excited 1P1 state that is populated during laser cooling, and that lies 3.11 eV, corresponding to a 394 nm photon, below the ionization continuum. Due to momentum and energy conservation, most of the ionization photon's excess energy is transferred to the electron. Therefore, when we ionize atoms at rest even with 3.36 eV (369 nm) photons (the ion cooling light), the calculated kinetic energy of the ions amounts to only 8 mK (0.7 µeV). Every ion generated inside the trap should therefore be captured, and we easily observe ion trapping even without subsequent laser cooling. The ion trap is a commercial printed circuit on a vacuum-compatible substrate (Rogers 4350) with low radiofrequency (RF) loss. The three 1 mm-wide, 17.5 µm-thick copper RF electrodes are spaced by 1 mm wide slits (Fig. 1), whose inner surfaces are metallized to avoid charge buildup on dielectric surfaces. The two outer RF electrodes are electrically connected. Twelve dc electrodes placed outside the RF electrodes provide trapping in the axial direction, and permit cancellation of stray electric fields. In addition, the RF electrodes can be dc-biased to apply a vertical electric field. All dielectric surfaces outside the dc electrodes have been removed with the exception of a 500 µm strip supporting the dc electrodes. The ratio between the RF voltages Vc (applied to the center electrode) and Vo (outer electrode) determines the trap height above the surface. With a typical value of Vc/Vo = −0.63, the RF node is located 3.6(1) mm above the surface. For Vo = 540 V applied to the outer electrodes, at an RF frequency of 850 kHz the secular trap potential has a predicted depth of U0 = 0.16 eV (Fig. 2) and a measured secular frequency of 60 kHz. The trap can be deepened by applying a static negative bias voltage Vdc to all RF electrodes [21] (Fig. 2), and unbiased once the ions are loaded. Using Vdc = 0.5 V we were able to load traps at RF voltages as low as Vo = 250 V, corresponding to U0 = 0.13 eV. All Yb and Yb+ cooling, detection and photoionization light is derived from near-UV external-cavity diode lasers. 172Yb or 174Yb atoms are laser-cooled in a magneto-optical trap (MOT) using the 1S0 → 1P1 transition at 399 nm [27]. A master-slave laser system consisting of an external-cavity grating laser and an injected slave laser using violet laser diodes (Nichia Corp. NDHV310ACAE1) delivers 10 mW in three pairs of 1.7 mm beams. The MOT, located 4 mm above the substrate, is loaded from an atomic beam produced by a resistively heated oven placed 8 cm from the trapping region. Typically 5 × 10^5 Yb atoms are loaded into the MOT with a lifetime of 300 ms at an estimated temperature of a few mK.
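The 8 mK recoil figure quoted above follows from momentum conservation: the light electron carries away almost all of the excess photon energy, leaving the heavy ion with a fraction of roughly m_e/M. A short sketch of that back-of-envelope estimate is given below, using the 3.36 eV photon energy and 3.11 eV threshold quoted in the text; the exact conversion convention (here simply E = k_B T) is our assumption.

```python
from scipy.constants import electron_mass, atomic_mass, e, k

# Photoionization of Yb just above threshold: the electron takes almost
# all of the photon's excess energy; the ion recoils with ~ (m_e / M_ion)
# of it (momentum conservation between the two outgoing particles).
E_photon_eV = 3.36       # 369 nm ion-cooling photon, as quoted in the text
E_threshold_eV = 3.11    # ionization threshold from the excited 1P1 state
M_ion = 174 * atomic_mass  # 174Yb

excess_eV = E_photon_eV - E_threshold_eV
E_ion_eV = (electron_mass / M_ion) * excess_eV
T_ion = E_ion_eV * e / k   # temperature equivalent via E = k_B * T

print(f"ion recoil energy ~ {E_ion_eV * 1e6:.2f} ueV")
print(f"temperature equivalent ~ {T_ion * 1e3:.1f} mK")
# Prints roughly 0.8 ueV / 9 mK, the same order as the quoted 0.7 ueV (8 mK).
```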
Photoionization from the excited 1P1 state populated during laser cooling is accomplished using either the ion cooling laser at 370 nm, with a power of 750 µW and an intensity of 850 mW/cm^2, or a focused UV light-emitting diode (UV LED, Nichia Corp. NCCU001, emission at (385±10) nm) with a power of 8.7 mW and an intensity of 125 mW/cm^2 at the MOT position. The efficiency of ionization is manifest as a 30% decrease in MOT atom number due to an increase in the MOT decay rate constant by Γ = 0.3 s^-1. The generated ions are also detected directly with a Burle Magnum 5901 Channeltron avalanche detector located 4 cm above the MOT. The UV-light-induced MOT loss depends linearly on UV laser intensity (Fig. 3), indicating that the ionization process involves a single 370 nm photon. We have also confirmed that the dominant ionization proceeds from the 1P1 state: when we apply on/off modulation to both the 399 nm MOT light and the 370 nm UV light out of phase, such that the UV light interacts only with ground-state atoms, we observe a more than 13-fold decrease in ionization compared to in-phase modulation. From the observed loss rate, in combination with an estimated saturation s = 0.6-2 of the 1S0 → 1P1 MOT transition, we determine a cross section σ = 4 × 10^-18 cm^2 for ionization of 174Yb with 370 nm light from the excited 1P1 state. We estimate that this value is accurate to a factor of two, due to uncertainties in the 1P1 population. The photoionization typically produces 4 × 10^5 cold ions per second near the minimum of the pseudopotential. Trapped-ion detection with the Channeltron provides rapid readout, making it particularly useful for observing fast trap loading or searching for an initial signal with a poorly compensated trap. Since the detector electric field overwhelms the pseudopotential, we turn on the Channeltron in 1 µs using a Pockels cell driver, which is fast compared to the 3.3 µs flight time of the ions. The Channeltron signal is calibrated against fluorescence from a known number of ions, as described below. We measure ion loading rates by varying the time between turning on the trap and switching on the detector, which empties the trap. Given the brightness of our cold-ion source, ion trapping is easily accomplished even without laser cooling of the ions. Fig. 4 shows the loading rate for photoionization of atoms from the MOT and from the atomic beam for the trap potential of depth U0 = 0.4 eV shown in Fig. 2. The loading rate from the MOT, 4 × 10^5 ions/s, is three orders of magnitude higher than that from the beam alone, compared to a ratio of only four in the ion production rates measured without the trap. Thus ions that were produced from laser-cooled atoms in a magneto-optical trap are 200 times more likely to be trapped than ions produced from the atomic beam. We ascribe this difference to the much higher MOT atomic density near the ion-trap minimum and to the lower energy of the produced ions. As the MOT is isotopically pure, our observation of a 10^3 loading-rate ratio between MOT and atomic beam implies an additional achievable factor of 10^3 in isotope selectivity beyond the 370:1 isotope selectivity resulting from spectrally selective photoionization in an atomic beam [25].
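The cross-section quoted above can be backed out from the observed MOT loss rate: the per-atom ionization rate equals σ times the photon flux times the 1P1 population. The sketch below redoes that arithmetic; the on-resonance two-level excited-state formula and the saturation values are our own simplifying assumptions, matching only the ranges stated in the text.

```python
from scipy.constants import h, c

# Observed increase in the MOT decay rate attributed to photoionization.
Gamma_ion = 0.3           # s^-1, from the text
I_uv = 850e-3 * 1e4       # 850 mW/cm^2 -> W/m^2, 370 nm ionizing light
wavelength = 370e-9       # m

photon_flux = I_uv / (h * c / wavelength)   # photons m^-2 s^-1

# Excited 1P1 fraction, modelled here as s/(2*(1+s)) for an assumed
# saturation parameter in the 0.6-2 range quoted in the text.
for s in (0.6, 2.0):
    p_ee = s / (2 * (1 + s))
    sigma = Gamma_ion / (photon_flux * p_ee)     # m^2
    print(f"s = {s}: sigma ~ {sigma * 1e4:.1e} cm^2")
# Gives values of order 1e-18 cm^2; the quoted 4e-18 cm^2 reflects a more
# careful 1P1 population estimate (MOT detuning lowers p_ee below the
# on-resonance value used here, raising the inferred cross section).
```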
From comparisons of electron-impact ionization loading and atomic-beam photoionization loading performed by other groups [24,25], we conclude that our loading rate is six to seven orders of magnitude higher than that achieved with traditional electron-impact ionization and four orders of magnitude higher than all previous results. In addition, by comparing the typical observed loss rate from the MOT (1.1 × 10^5 atoms/s) to the typical observed loading rate (2.4 × 10^5 ions/s) under similar conditions, we conclude that our trapping efficiency is comparable to unity. We attribute the discrepancy in rates to the calibration of the Channeltron ion detector and to uncertainty in the MOT population. This efficiency may prove an important advantage for suppressing anomalous ion heating that has been linked to electrode exposure to the atomic beam during the loading process [13]. The large loading rate will also be beneficial for applications that require a large, isotopically pure sample, such as quantum simulation in an ion lattice [14]. The loaded ions are cooled and observed via fluorescence using an external-cavity grating laser [28]. To reach the target wavelength of 369.525 nm with a 372 nm laser diode (Nichia) in a Littrow setup with a first-order grating reflectivity of 28%, we cool the diode to temperatures between -10 and -20 °C in a moisture-tight container. The laser provides an output power of 1.5 mW and is continuously tunable over more than 10 GHz. For laser cooling of Yb+, the laser is typically tuned 200 MHz below the 2S1/2 → 2P1/2 transition. An external-cavity repumper laser operating at 935 nm is also necessary to empty the long-lived 2D3/2 state on the 2D3/2 → 3D[3/2]1/2 transition [29]. The UV light scattered by the ions is collected with an aspheric lens of numerical aperture 0.40 placed inside the vacuum chamber at a distance of 19 mm from the trap. The collected light passes through an interference filter, and is evenly split between a charge-coupled device camera and a photomultiplier. The maximum photomultiplier count rate is 5000 s^-1 per ion. To calibrate the Channeltron detector, we first cool a small cloud of ions (approximately 100) to below the crystallization temperature, as identified by a sudden change in the fluorescence [30]. By comparing the resonance fluorescence of the crystal to that of a single trapped ion, we determine the absolute number of ions loaded, and subsequently measure the Channeltron signal for the same cloud. The optical observation of the trapped ions allows us to determine the trap position and optimize the overlap with the MOT. We move the MOT laterally using magnetic bias coils and the ion trap vertically by changing the amplitude ratio of the voltages applied to the two RF electrodes. Fig. 5 shows that for the optimal loading position of the trap, the pseudopotential minimum is located inside the neutral-atom cloud. We have thus demonstrated the first trapping of cold ions and neutral atoms in the same spatial region, which will allow the experimental investigation of cold ion-atom collisions. In conclusion, we have realized a novel, simple and robust system to load large numbers of low-energy ions into a versatile planar ion trap. Further trap miniaturization while maintaining a large loading rate can be achieved with a nested electrode design, where large outer electrodes provide initial trapping, and a series of smaller inner electrodes provides stronger confinement as the ions are transported towards the surface.
This system provides the means to realize ion lattices for quantum simulation [14], ionic quantum memory with optical readout [16], many-ion optical clocks, or mixed ion-atom systems for the investigation of cold collisions [17] and charge transport [18].
Motivational intensity and visual word search: Layout matters Motivational intensity has been previously linked to information processing. In particular, it has been argued that affects which are high in motivational intensity tend to narrow cognitive scope. A similar narrowing effect has been attributed to negative affect. In this paper, we investigated how these phenomena manifest themselves during visual word search. We conducted three studies in which participants were instructed to perform word category identification. We manipulated motivational intensity by controlling reward expectations and affect via reward outcomes. Importantly, we altered visual search paradigms, assessing the effects of affective manipulations as modulated by information arrangement. We recorded multiple physiological signals (EEG, EDA, ECG and eye tracking) to assess whether motivational states can be predicted by physiology. Across the three studies, we found that high motivational intensity narrowed visual attentional scope by altering visual search strategies, especially when information was displayed sparsely. Instead, when information was vertically listed, approach-directed motivational intensity appeared to improve memory encoding. We also observed that physiology, in particular eye tracking, may be used to detect biases induced by motivational intensity, especially when information is sparsely organised. Introduction Cognitive scope, defined as the attentional or mnemonic preference for central (dominant) or peripheral (secondary) details, has been linked to a number of affect-related factors such as mood [1,2], anxiety (emotion) [3], hemispheric activity [4,5], arousal (or activation) [6], valence [7][8][9] and motivational intensity [10]. Modulation of cognitive scope has also been attributed to individual traits, such as anxiety [11] or hemispheric dominance [4,12]. Cognitive scope is sometimes described as the attentional or mnemonic preference toward the "trees" (narrow scope) or the "forest" (broad scope). Note that there may be some overlap between the affective phenomena which modulate cognitive scope. Emotions can be decomposed into three dimensions: arousal (also known as activation), valence (whether the emotion is positive, such as happiness, or negative, such as sadness) and dominance (whether it is characterised by domination or submission) [13]. Manipulating any of these components could cause cognitive scope to be altered. However, an additional component has been argued to be central to the modulation of cognitive scope. This is motivational intensity, defined as the driving force towards a stimulus (approach motivation) or away from it (withdrawal motivation) [14]. For example, anger is normally associated with slightly negative valence, high arousal and high motivational intensity. Hence, without controlling for confounding variables, the narrowing effect of anger could be attributed to high motivational intensity, high arousal or negative valence simultaneously. Continuing from this example, recent work which did account for confounding variables indicated that the high motivational intensity component of anger is indeed a major factor in cognitive scope reduction [15]. Anger is considered high in motivational intensity since it drives attention towards a specific goal (e.g. an enemy that needs to be attacked).
On the other hand, affective states such as joy or relaxation are considered to be low in motivational intensity: these emotions do not drive towards or against a specific goal when being experienced (since the goal may have already been achieved). Such emotions have been linked to broad cognitive scope in experiments that employed Navon letters or Kimchi and Palmer stimuli [10,14]. In more detail, high motivational intensity indirectly induced participants into paying more attention to details at the expense of general information. Note that we use the word indirectly because motivational intensity was induced using task-irrelevant manipulations, via 'flanker' tasks. This is important since it would not be particularly interesting to observe increased attention towards a goal that a person is already highly motivated to achieve. Instead, we are interested in the effect of motivational intensity on incidental or competing tasks. In this paper we focused on visual search, as we hypothesised that cognitive scope alterations would in turn affect visual search. Apart from cognitive scope, motivational intensity has also been linked to increased cognitive control [16,17]. That is, reward expectation may increase resource allocation in the brain, allowing the optimisation of behaviour and working memory capacity at the expense of other systems, resulting in increased physiological activation and strain. This effect is especially noticeable when utilising task-relevant manipulations [18]; during visual search tasks, participants quickly adapt to the task at hand, optimising their search strategy [19]. Importantly, increased cognitive control has also been observed during task-irrelevant manipulations [16,20]. As previously mentioned, broadening and narrowing of cognitive scope can also be modulated by valence. That is, positive emotions (happiness) are said to broaden scope for attention and memory, while negative emotions (sadness) tend to narrow it [7-9, 21, 22]. It has been argued that this is because positive moods enhance the currently most accessible thought. Positive emotions are believed to broaden cognitive scope since the "default" (most accessible under neutral circumstances) scope is broad [23]. That is, the normal approach to a situation would be to consider it as a whole, rather than focusing on the details. Positive emotions, then, encourage this behaviour. On the other hand, negative emotions counter this "default" broad scope, narrowing it instead [24]. It is particularly relevant to our context that approach-motivated positive affect has been shown to improve recall rates for centrally presented words in a task-irrelevant manipulation [20]. In this paper, we assessed whether motivational intensity and valence influence performance during a common computer-based task: visual search. We carried out our investigations by manipulating both motivational intensity and valence directly, without employing specific emotions (i.e. we did not attempt to elicit anger or sadness). As in the previously cited work, we employed a 'flanker' task; this enabled us to investigate the effects of manipulations on visual search performance without directly influencing it. In other words, it would not be surprising to observe an increase in visual search performance if participants were motivated to search faster or more accurately (assuming no additional controlling variables).
Instead, we attempted to alter cognitive scope, expecting that this would in turn influence performance. Factors modulating cognitive scope are relevant to the implementation of human-computer interfaces. This is because some systems present large amounts of scattered information, such as dynamic control applications (e.g. air or naval control systems). Given the complexity of the information presented, some of it may be overlooked if the current user of the system pays attention only to a limited interface area. It would therefore be valuable to detect such biases in order to reduce the risk of failures due to human error (e.g. raise a low-attention exception, which could, for example, alert the current user or other users of the system). This observation applies to systems beyond dynamic control applications: emerging search engines are starting to utilise result layouts which forego the currently ubiquitous list format. Instead, search results are sometimes displayed across larger areas, plotted in two dimensions, in a fashion that resembles scatter plots. This is done in order to provide personalised layouts [25], visually build queries [26] and/or refine queries over multiple iterations [27-29]. Apart from search engines, information obtained from web pages can be summarised into tag clouds, arranging keywords across a two-dimensional space [30,31]. It would therefore be of interest to investigate whether users who rely on information scattered across a two-dimensional space are susceptible to attentional and mnemonic biases induced by high motivational intensity or negative affect. We are also interested in real-time detection of such biases. Psychophysiological signals can be utilised to estimate the user's emotional state [32,33]. Systems that utilise psychophysiological signals have been able to automatically estimate stress, anxiety or mental workload [34-36]. We are interested in investigating whether multi-modal psychophysiological systems could also be used to estimate motivational intensity (and, in turn, cognitive scope). Motivational state estimated via psychophysiology could help in preventing or detecting the aforementioned biases during the use of systems which depend upon the users' visual attention. In particular, eye tracking could be very informative for the detection of changes in the users' cognitive scope. It has previously been indicated that less attention is paid to the periphery of a screen when motivational intensity is heightened [20]. Moreover, it has been shown that participants adapt eye movement strategies to optimise performance in various visual search tasks [37]. Given that motivation may guide what the visual system presents to conscious awareness [38], it is reasonable to expect that the effects of motivational manipulations would be reflected in the users' search strategies and would be measurable via gaze pattern analysis. Additional signals may also be helpful in the detection of cognitive biases. Motivational intensity is normally associated with increased sympathetic nervous system activation (arousal) [39], which in turn is associated with increases in heart rate [40] and skin conductance [41]. Motivational intensity has also been linked to EEG asymmetry: activity in the left hemisphere rises as motivational intensity increases [12,42-44]. Moreover, left-hemispheric activity has been associated with a localised cognitive scope, while greater right (or lower left) activity is correlated with a broader scope [5,45,46].
This suggests that changes in motivational intensity could be detected by monitoring a number of relatively simple signals: Heart Rate (HR), Electrodermal Activity (EDA) and EEG (Frontal Alpha Asymmetry, which can be measured using only two electrodes). Using signals that require low computational power renders them appropriate for real-time usage (also known as on-line analysis). In summary, the picture that can be drawn from the literature is that motivational intensity modulates attentional scope. Changes in scope may affect users who are required to monitor relatively large amounts of information at a fast pace. In this context, it is necessary to take into account the effect of task-irrelevant stimuli, which reduce the attentional resources allocated to the main task. It is also possible that the same information displayed in different ways would not be affected by motivational biases in the same fashion. For example, attentional biases that narrow cognitive scope may decrease performance more noticeably when the user of the system is parsing information displayed sparsely. Eye tracking and psychophysiological signals could be used to detect such motivational biases in real time. Finally, valence can also affect cognitive scope: it is often narrowed by negative affect.

Studies

We present three studies which we conducted to assess the effect of motivational intensity on visual word search, while simultaneously testing whether any biases induced by it can be detected via multiple psychophysiological signals. This is something that, to our knowledge, has not been previously investigated. The first study assessed whether motivational intensity affects visual search differently depending on the way in which information is presented. Studies 2 and 3 were follow-up studies that we conducted after interpreting the effects we observed in the first study. Pre-empting our findings, in the first study we observed that lists and scattered information were affected differently by our manipulations. We then conducted two additional studies to investigate this finding in more detail, one solely focused on lists and the other on scatters. This approach allowed us to investigate to what extent the effects of the cognitive biases we induced were dependent on the visual layout employed. The trial design was inspired by a previous experiment that demonstrated that visual attention allocated to peripheral areas of a computer screen is reduced when motivational intensity is high [20]. We built our hypotheses on this observation, adding a number of important considerations to our experimental design. In our studies, we considered three different motivational state conditions: neutral, approach and withdrawal. Across the studies, we explicitly asked participants to perform visual searches, while altering the visual presentation layout at every trial. Our trial design included a questionnaire phase at the end of each trial, providing more granular data. This allowed us to determine whether cognitive scope manipulations affected active visual searches, rather than incidental information. Moreover, we included psychophysiological signal collection and analysis, in order to assess whether these signals could be used for attentional bias detection, particularly in real-time systems. Our approach sits in between basic and applied research. From a basic research perspective, we measured the effect of motivational intensity on visual search.
From an applied research perspective, we investigated whether attentional biases can be detected via psychophysiology, and whether these biases are equally present when the visual presentation layout is altered. We believe there is value in bridging basic and applied research: often, basic research does not provide enough information to investigate practical problems, while purely practical, on-site studies run with little experimental control do not provide enough detail for the investigation of the underlying cognitive phenomena [47].

Hypotheses

The three studies we ran assessed three common hypotheses. The first hypothesis assessed whether motivational intensity affects visual search, and whether different visual (or semantic) information organisation approaches are affected differentially when motivational intensity is high. The second hypothesis explored whether motivational intensity can be estimated via psychophysiological signals. Note that the second hypothesis follows from the first: estimating motivational state via psychophysiology is a valuable effort if it can be used to detect attentional biases. The third hypothesis is secondary; it assessed whether affective state affects memory (it was tested after the visual search tasks were completed). The three studies had a common structure: they were composed of a relatively large (> 100) number of trials. Each trial started with a word search task, which consisted of category identification. This was followed by a reward expectation phase, inducing three different levels of motivation: neutral, approach-directed and withdrawal-directed. Several words were then displayed on a computer monitor using different layouts, depending on the study (e.g. listed, scattered, coloured or clustered). During this phase, the aforementioned physiological signals were recorded: electroencephalography (EEG), electrodermal activity (EDA), electrocardiogram (ECG) and electro-oculogram (the latter only used to assist in EEG artefact rejection). We used EDA and ECG to measure activation and EEG to assess whether increased motivation would result in frontal asymmetry. Eye tracking was employed to monitor changes in visual search strategy induced by motivational manipulations. Word presentation was followed by the 'flanker' task, after which affective manipulations were implemented via reward feedback, which could be positive, negative or neutral. Participants' performance in the visual search task was assessed after our motivational and affective manipulations. Our hypotheses are briefly outlined below. They will be reiterated in more detail after the introduction of the first study (which acted as a template for the remaining two studies), in the "Hypotheses" section.

Hypothesis 1: Motivational intensity modulates cognitive scope and interacts with search paradigm. The first hypothesis is focused on the effect of motivational intensity on attentional and mnemonic scope and how it interacts with visual search tasks. We elicited high motivational intensity in two directions: approach and withdrawal (also called avoidance). It is important to assess both directions when exploring motivation-related phenomena in human behaviour [48]. These two conditions were compared against a 'neutral' baseline, representing low motivational intensity. We expected to find a main effect of high motivational intensity on performance, such that as motivational intensity increases, performance decreases.
This is because a narrow scope would result in lowered attentional allocation to words presented in peripheral areas [20]. A main effect of motivational intensity on eye tracking metrics correlated with attentional scope would also support this hypothesis. For example, the amount of time spent looking at individual words should increase when motivational intensity is high, while the average distance of gaze from the centre of the screen should decrease (reflecting a narrow attentional scope). In other words, increased motivational intensity would lead participants to spend more time observing the "trees" rather than the "forest" [15]. This is based on the observation that high motivational intensity is correlated with increased attention towards a central area [20]. In our case, however, the locus of attention is relative, rather than absolute (e.g. attention is centred around a cluster of words, rather than the centre of a computer monitor). Central and novel to the studies presented in this paper is the potential interaction that might arise between how search is performed and the level of motivational intensity. For example, when information is presented in a sparse fashion (e.g. scattered), one could expect to find an increased adverse effect of high motivational intensity. On the other hand, when words are presented densely (e.g. listed), high motivation may be less detrimental (or even beneficial), as a narrow scope would focus attention on a specific area of the screen, populated with words. In Studies 2 and 3 we altered semantic density (rather than visual density), as words were always visualised in the same way.

Hypothesis 2: Physiology is influenced by motivational intensity and predicts errors. Hypothesis 2a. We expected our motivational manipulations to affect physiology measured during the word search task. We measured EEG Frontal Alpha Asymmetry (FAA); usage of this metric is advantageous as it requires little computational time and could be employed in real-time systems to detect (or prevent) attentional biases. We also measured the inter-beat interval (or IBI, via ECG) and electrodermal activity (EDA, or skin conductance). We expected to find a main effect of motivational intensity on these three variables. EEG asymmetry was expected to increase after reward expectancy manipulations. Specifically, we expected relative left activity to increase along with motivational intensity [46,49]. This is observable as an increase in right relative to left alpha band power. We also expected increases in heart rate and in skin conductance variables, indicating sympathetic nervous system activation [40,50,51]. Therefore, we expected IBI to decrease and EDA to increase along with motivational intensity. Hypothesis 2b. We also assessed the possibility of predicting performance via the aforementioned physiological signals, with the addition of eye tracking. Given the exploratory nature of this test, we tested all variables simultaneously, for all studies, correcting for multiple comparisons. This is because we could not precisely determine which signals were most likely to predict performance; instead, we speculated that signals could predict performance in any direction. For example, assuming that high motivation reduces scope, we expected high sympathetic nervous system activation to be correlated with lower performance. Similarly, we expected performance to decrease as the distance of the eyes from the centre of the screen decreased, as measured by eye tracking.
EEG (as measured by alpha band power) was also expected to correlate with performance as, in turn, we expected it to be correlated with motivational intensity. However, as previously indicated, we could not predict which of these signals would correlate with performance, and in which direction. Given that we corrected for multiple comparisons, eventual significant findings would indicate which signal could be suitable for the automatic detection of biases induced by motivational intensity during visual search.

Hypothesis 3: Affect modulates memory and interacts with search paradigm. In addition to reward expectancy, we manipulated reward feedback to induce positive or negative affect, in varying levels. Similarly to Hypothesis 1, we tested whether this had a main effect on performance and whether it interacted with search paradigm. Rewards were communicated to participants after the word presentation but before assessing performance. Therefore, we could only test for mnemonic biases, as our manipulation took place after participants performed the search task. We expected to find a main effect of reward feedback on performance. Negative affect has been demonstrated to increase localised attention, as does high motivational intensity [3,10]. Hence, negative rewards were expected to induce a narrower scope, decreasing the number of words remembered. Similarly to Hypothesis 1, we expected an interaction between reward feedback and search paradigm, especially considering that the centre of attention may not always correspond to the absolute centre of the screen. For example, sparse layouts may be more adversely affected by narrower mnemonic biases induced by negative affect.

Study design

Design and methodology common to all three studies are outlined in this section. The sections Study 1 to Study 3 will describe the trial structure and detail how each study differed from the previous one. Our studies were composed of a sequence of trials (180 trials in Study 1, 108 in Studies 2 and 3). Each trial comprised 6 phases. These were:

1. Category assignment: participants were given one (in Study 1) or more (Studies 2 and 3) target categories. They were instructed to memorise all words belonging to the given target categories.

2. Expectancy cue: one out of three possible cues was shown to participants. These were Neutral (the next trial will not be rewarded), Approach (you may gain a reward in the next trial) or Withdrawal (you may lose money in the next trial). This 3-level variable is called Cue. It was balanced within participants (one third of the trials were preceded by each cue, in randomised order).

3. Word presentation: participants were instructed to find words belonging to the target categories. Although participants were not rewarded for this task, they were instructed to perform it as best as they could. Words were presented to participants in various ways, specific to each study. Words were presented using differing visual arrangements in Study 1; this variable was called Layout. In Studies 2 and 3, the number of categories presented simultaneously was manipulated; this variable was called Diversity. These variables were also balanced within participants, and fully crossed with Cue.

4. Flanker: participants were instructed to press the left or right key on the keyboard as quickly as possible, depending on the direction of a symbol shown on screen. They were told that they would be rewarded according to their speed in responding correctly to this task.
The flanker task allowed us to manipulate motivational intensity on a trial-by-trial basis, independently of the Cue, Layout (Study 1) or Diversity (Studies 2 and 3) variables.

5. Reward feedback: participants were awarded a small amount of money (in half of the Approach trials), lost a small amount of money (in half of the Withdrawal trials) or neither. Rewards were balanced within participants and across conditions. They did not depend on reaction times (with some exceptions, see the "Reward feedback" section). This variable is called Feedback.

6. Word recognition: a list of words belonging to the target category(/ies) was shown. Half the words did appear during word presentation (phase 3), while the other half was used to measure false alarms. We calculated the number of Errors participants committed in this phase to assess the effect of the independent variables mentioned above.

Word creation. We manually created a dataset of 120 words for Study 1 and 168 words for Studies 2 and 3. In Study 1, the 120 words were grouped in 12 categories containing 10 words each. In Studies 2 and 3, the 168 words were assigned to 14 categories, each containing 12 words. The categories were created to be non-overlapping, while the individual words were selected to be non-salient; for this reason, we excluded particularly long, short or otherwise peculiar words. Emotional words were excluded by verifying their rating in the affective norms database in Finnish [52] or, if a Finnish rating was not available, using the corresponding English translation [53].

Stimulus presentation. All stimuli were presented on a 22" widescreen LCD monitor, with a refresh rate of 59 Hz and a resolution of 1680 × 1050 pixels. Participants were placed at a distance of approximately 60 cm from the monitor; the whole screen resulted in a rectangle covering a visual angle of approximately 42.78° × 28.07°. Stimuli were presented using PsychoPy version 1.81 [54]. Words were formed by Lucida Console characters with a nominal height of 20, corresponding to a visual angle of 0.76°. The screen background was white and words were black (except when displayed within a condition called "colours", present only in Study 1).

Recording apparatus. Data were recorded using a Brain Products QuickAmp recorder (BrainProducts, Munich, Germany). We band-pass filtered data at recording, with a high-pass of 0.01 Hz, and notch filtered at 50 Hz. We recorded electroencephalographic data from 32 electrodes, arranged using the 10-20 standard (Jasper, 1958). We recorded electro-oculograms from the left and right eyes using two bipolar electrodes (HEOG and VEOG). During recording, we used the average of all channels as reference (common reference). EEG impedances were at most 15 kΩ (acceptable for this type of research [44,55-58]). Note that the average impedance recorded for the channels used in the analysis was noticeably lower (M: 3.83 kΩ, SD: 3.41). ECG was recorded using a bipolar lead compatible with the QuickAmp recorder. EDA was recorded using an adaptor compatible with QuickAmp auxiliary channels. Gaze position was tracked by a RED500 eye tracker (SensoMotoric Instruments, Teltow, Germany). Timing markers were sent from the stimulus presentation computer to the eye tracking computer and the EEG amplifier simultaneously using a three-way ('Y') parallel cable, with a crossover (LapLink) adaptor attached to the eye tracking computer. Signal processing procedures are detailed in the "Measures" section.
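For readers who wish to verify the visual angles reported above, the angle subtended by an object of size s viewed from distance d is 2·atan(s / 2d). A minimal MATLAB sketch follows; the physical screen dimensions used here (47.4 cm × 29.6 cm, typical for a 22" 16:10 panel) and the 0.8 cm letter height are our assumptions, back-computed from the reported angles rather than taken from the apparatus.

% Visual angle (in degrees) subtended by an object of size_cm,
% viewed from dist_cm (both in centimetres).
visual_angle = @(size_cm, dist_cm) 2 * atand(size_cm / (2 * dist_cm));

visual_angle(47.4, 60)   % ~43.1 deg; compare the reported 42.78 deg width
visual_angle(29.6, 60)   % ~27.7 deg; compare the reported 28.07 deg height
visual_angle(0.8, 60)    % ~0.76 deg; consistent with the reported letter height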
Participants

All participants were students or staff at Aalto University or Helsinki University. The study was announced using the respective universities' advertising services. In the invitation, prospective participants were told that they would be paid a maximum of 70 € and a minimum of 20 €, depending on how their reaction times compared to those of other participants. All participants were paid 70 € at the end of the experiment. Since rewards were balanced across conditions and did not depend on reaction times, some exceptions were put in place to prevent this deception from being noticed (described in the "Reward feedback" section). Immediately after the experiment but before being paid, participants were asked whether they believed their rewards had been allocated fairly. Participants who replied that rewards were not fair were paid but excluded from further analysis. Across the three studies, two participants were excluded for this reason (one in Study 1 and one in Study 2). All participants were native Finnish speakers, right-handed, not colour-blind, and had otherwise normal or corrected-to-normal vision.

Ethics

Study design and recruitment procedures were approved by the Aalto University Ethical Committee (approval numbers 2014_18 for Study 1 and 2016_02 for Studies 2 and 3). Written consent allowing the analysis of raw data and the publication of averaged results was obtained from each participant.

Measures

As mentioned in the introduction, we employed measuring techniques that could theoretically be implemented in a real-time system and hence required a relatively low number of sensors and modest computational power. These were 2-electrode EEG asymmetry, 3-sensor ECG (for IBI), 2-sensor EDA and eye tracking. We employed 32-channel EEG at recording to perform ICA (in order to decrease the risk of Type I errors in the subsequent analyses). This section describes how data were analysed across all three studies; each subsection describes how we calculated the variables we utilised for our statistical tests. Each variable is written in italics.

Data analysis. For all data segments, we considered physiological activity recorded only during the word presentation phase. This was done to exclude noise caused by muscular activity from the analysis, since participants were instructed to stay as still as possible during this phase but not in the other phases (also considering that the following phase contained the flanker task, which required rapid body movements). In addition to the previously mentioned signals, we measured the electro-oculogram (EOG) to reject EEG trials potentially containing eye blink artefacts. Specific rejection criteria for each signal are defined in the following subsections, while the number of participants that contributed to each measurement (indicated by N) is reported separately in the "Results" subsection of each study, along with the respective statistical test results.

EEG analysis. During analysis (i.e. after recording), EEG data were filtered with a high-pass of 0.5 Hz and a low-pass of 70 Hz and re-referenced to the common average. Ocular artefacts were removed by performing an Independent Component Analysis (ICA) back-transform in Brain Products Analyzer 2: components which were deemed to represent vertical and horizontal eye movements (typically the first two) were removed after visual inspection. EEG data were rejected if EOG channels appeared to contain an excessive amount of noise during visual inspection.
After recording, the data segment of interest (the word presentation phase) was split into three segments of 1 s each. Segments were rejected if the scalp electrodes contained activity exceeding ± 200 μV (± 400 μV for EOG), in line with previous research utilising a QuickAmp amplifier [59-62]. The median number of EEG trials rejected per participant was 5 in Study 1 (M: 11.3, SD: 24), 2 in Study 2 (M: 6.2, SD: 9) and 5 in Study 3 (M: 16.9, SD: 30). As in previous research [63], we computed a frontal asymmetry index on mid-frontal electrodes (F3 and F4), subtracting the natural log of alpha power at the left electrode from the natural log of alpha power at the right electrode. The alpha band was defined as the average power between 8 Hz and 12 Hz. Power was computed using Welch's method [64] in Matlab (pwelch), with a window size of 75% and 50% overlap. This procedure was repeated for each valid segment of EEG data in every trial. The average of the segments was taken as an asymmetry index for the given trial, with higher scores indicating greater left activity. As in our previous research [44,58], we utilised resting states between trials for baselining, because a single baseline measurement would not be sufficient to account for overall variability across the experiment [63]. That is, the asymmetry index for a given trial was computed by subtracting the value measured 1 s before the start of the trial itself from the index measured during word presentation (this time window corresponded to the inter-trial blanks, which took place after the questionnaire phase of the preceding trial ended). We called this variable FAA.

ECG analysis. ECG data were analysed to measure activation induced by the motivational intensity manipulations (i.e. by Cues). ECG data were rejected on a participant-by-participant basis after visual inspection, before analysis. For every 3 s segment (starting from stimulus onset), we first de-trended data by subtracting a fifth-degree polynomial fitted via least squares. Time series data were then full-wave rectified, removing polarity. We searched for peaks in Matlab, using the 'findpeaks' function with a minimum peak distance parameter of 200 ms and a minimum peak height parameter of 50% of the global maximum found within the given segment. All peaks found via this method pinpointed a QRS complex in time (specifically, R peaks). Inter-beat intervals (in ms) were calculated as the average distance between the found peaks in the given segment. We call this variable IBI.

EDA analysis. As for ECG, EDA data were analysed during the word presentation phase and were rejected on a participant-by-participant basis after visual inspection. EDA data were decomposed into phasic and tonic signals using Ledalab [41]. Given that average phasic activity (as computed by Ledalab) is mostly contained within a 1 s-4 s window from stimulus onset, Integrated Skin Conductance Responses (ISCR, in microsiemens, or μS) were calculated by summing phasic activity from 1 s to 4 s after the beginning of the motivational manipulation. Dividing the obtained sum by the number of samples per second in each segment resulted in a μS/s (microsiemens per second) measurement for each trial. This variable is called ISCR.
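To make the EEG and ECG pipelines described above concrete, the following MATLAB sketch computes a FAA value for one 1 s EEG segment and an IBI value for one 3 s ECG segment. It is an illustration under stated assumptions (a 500 Hz sampling rate and pre-cleaned column vectors f3, f4 and ecg); the actual processing relied on Brain Products Analyzer 2 for the ICA step and on Ledalab for the EDA decomposition.

fs = 500;                                   % assumed sampling rate (Hz)

% --- Frontal Alpha Asymmetry for one 1 s segment (f3, f4: ICA-cleaned) ---
win = round(0.75 * numel(f3));              % 75% window; default 50% overlap
[pxxL, f] = pwelch(f3, win, [], [], fs);
[pxxR, ~] = pwelch(f4, win, [], [], fs);
alphaBand = f >= 8 & f <= 12;               % alpha band: 8-12 Hz
faa = log(mean(pxxR(alphaBand))) - log(mean(pxxL(alphaBand)));  % ln(R) - ln(L)

% --- Inter-beat interval for one 3 s ECG segment ---
t = (1:numel(ecg))';
trend = polyval(polyfit(t, ecg, 5), t);     % fifth-degree trend, least squares
ecgRect = abs(ecg - trend);                 % de-trend and full-wave rectify
[~, locs] = findpeaks(ecgRect, ...
    'MinPeakDistance', round(0.2 * fs), ... % minimum peak distance: 200 ms
    'MinPeakHeight', 0.5 * max(ecgRect));   % 50% of the segment's maximum
ibi = mean(diff(locs)) / fs * 1000;         % mean inter-beat interval (ms)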
Eye tracking analysis. We calculated three eye tracking variables per trial. These were 1) Gaze duration (total dwell time on a word), 2) Words skipped (words not seen per trial) and 3) Radius (average distance of the on-screen gaze position from the screen's centre). Two conditions had to be satisfied in order to mark a word as 'seen'. 1) Given that the human eye can identify words within approximately 5 (horizontally) and 3 (vertically) degrees of visual angle [65], the participant's estimated gaze had to fall within a rectangle surrounding the given word, measuring approximately 5.53° × 3.83° of visual angle (corresponding to 5.8 cm by 4 cm, or 200 by 150 pixels). 2) The estimated gaze had to stay within an inner square covering 2.76° of visual angle (corresponding to a side of 2.9 cm, or 100 pixels) for at least 50 ms; this indicated a fixation [66]. The total time spent within the inner square defined our Gaze duration metric. At the trial level, we calculated the mean distance of gaze from the centre of the screen by applying Pythagoras' theorem to the eye tracking coordinates; this variable was called Radius. Eye tracking data were rejected at the participant level when a calibration accuracy under 1° could not be achieved. Moreover, data from individual trials were rejected if no eye movements were detected for at least 80% of the trial duration. Data points indicating positions of (0, 0) (exactly the middle of the screen) were considered missing, as SMI eye trackers use this value to indicate data absence.

Performance. For each trial, we measured performance by calculating the overall number of Errors, based on the answers given in the recognition phase of the given trial. Errors were defined as the number of false alarms (words belonging to the target category that were not actually presented during the trial, yet reported as seen) plus misses (words shown in the given trial, but not reported as seen by the participant).

Statistical analysis. We organised our data in two tables: one containing trial-level data (ECG, EDA, EEG, number of words skipped, mean gaze distance from the centre of the screen, errors, etc.) and one containing word-level eye tracking data (e.g. gaze duration for a given word, and whether the word was seen or remembered). As previously mentioned in the "Study design" section, all our tests were within-participants: each participant performed the task under all conditions we tested. Our hypotheses (reiterated later in each study-specific "Hypotheses" section) were tested by fitting a generalised mixed model to the data. The hypotheses were formulated a priori by examining the literature previously presented in this paper. We reported all results, whether statistically significant or not. Bonferroni correction was applied to all p-values related to the same hypothesis, within the same study. We applied two-level testing. Firstly, we fitted a generalised linear mixed model in Matlab using the 'fitglme' function. Secondly, we performed an ANOVA on the resulting model (in Matlab: lmg = fitglme(...); anova(lmg)). Thirdly, only if the top-level ANOVA test was significant at a Bonferroni-corrected alpha of .05 did we report the individual factor effects resulting from the first step. The individual effects p-values were also Bonferroni corrected. We corrected for 5 comparisons when testing Hypothesis 1 in Study 1 (1 test on Errors + 3 tests on eye tracking variables + 1 test on Outlier hits), 4 comparisons when testing Hypothesis 1 in Studies 2 and 3 (which did not include Outlier hits), 3 comparisons for the correlation between motivational intensity and physiology of Hypothesis 2a (ISCR, IBI and FAA), 12 comparisons for the exploratory prediction of errors via physiology in Hypothesis 2b and 1 comparison in Hypothesis 3 (no correction, since we only performed one test, on Feedback).
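A minimal MATLAB sketch of this two-level procedure is given below; the table and variable names are illustrative placeholders rather than our actual analysis scripts.

% trials: table with one row per trial, containing e.g. Errors (count),
% Cue and Layout (condition labels) and Participant (grouping variable).
trials.Cue         = categorical(trials.Cue);
trials.Layout      = categorical(trials.Layout);
trials.Participant = categorical(trials.Participant);

% Level 1: generalised linear mixed model with a Poisson distribution for
% the Errors count, fixed effects for the predictors and a random
% intercept per participant.
lmg = fitglme(trials, 'Errors ~ Cue + Layout + (1|Participant)', ...
              'Distribution', 'Poisson');

% Level 2: omnibus ANOVA on the fitted model; individual factor effects
% are inspected only if this test survives Bonferroni correction.
disp(anova(lmg))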
The response variable distribution was specified as the type of distribution that most closely represented the probability density of the given response variable. We selected the Poisson distribution for Errors and Words skipped. The binomial distribution was selected for Outlier hits (specific to Study 1, see the "Word presentation" section). Non-normally distributed variables (IBI and ISCR) were log-normalised. Predictors were specified as fixed effects and participant numbers as random effects. Using this method, we calculated t-statistics and p-values. Means and standard deviations were calculated separately for each test. Effect sizes were calculated using r equivalent, given that no standardised method has yet been accepted for use on mixed models [67]. Similarly to means and standard deviations, r equivalent scores were calculated separately for each statistical test we conducted [68], using the Measures of Effect Size (MES) toolbox for Matlab [69].

Additional analyses

The analyses discussed in this section were not devised as part of our original design. Instead, their inclusion was suggested by the reviewers of the present article in order to support its findings. The results of these analyses will be reported in the "Additional analyses" section, along with the summarised results of the three studies.

Parietal alpha. Parietal EEG alpha power is considered to be inversely correlated with cognitive and memory load [70-72]. We measured alpha power for two reasons: firstly, we were interested in whether load was higher during the word search phase when compared to baseline; secondly, we wanted to assess the effect of Diversity (a variable present only in Studies 2 and 3). We measured alpha power using the same parameters we used for asymmetry (Welch's method, 1-second time windows, power between 8 and 12 Hz), except that we only considered the Pz electrode. To assess the overall effect of the word search task, we compared trial-level baseline alpha power indices against the indices from the middle second of the word search task in a paired t-test. The test was performed within participants (e.g. in Study 1 a single test involved 180 pairs, assuming no rejected trials). We repeated the test for each participant, counting the number of tests that were significant at alpha .05. The tests were one-tailed, with the alternative hypothesis being that baseline alpha power would be greater than word search task alpha power (against the null hypothesis that their means were equal). To assess the effect of Diversity on parietal alpha, we utilised a generalised model test (the same method we used to assess the effect of Cue on asymmetry) on Pz alpha power compared to baseline.
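For one participant, the paired one-tailed comparison described above can be sketched in MATLAB as follows (the vector names are illustrative; each vector holds one Pz alpha-power value per trial):

% baselineAlpha, taskAlpha: per-trial Pz alpha power for one participant
% (e.g. up to 180 pairs in Study 1). One-tailed paired t-test with the
% alternative hypothesis mean(baselineAlpha - taskAlpha) > 0.
[h, p] = ttest(baselineAlpha, taskAlpha, 'Tail', 'right');

% Repeating this per participant and counting h == 1 gives the number of
% participants showing the expected alpha suppression at alpha = .05.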
Spill-over effect. We ran this test to measure the effect of reward feedback on the following trial, rather than on the trial during which it was presented. The test parameters were otherwise the same as those used for our main reward feedback hypothesis (Hypothesis 3). That is, we tested the effect of Feedback on the Errors committed in the following trial using a generalised linear model (drawing on 179 trial pairs per participant in Study 1, for example).

Eye-tracking calibration. We also performed a test to assess whether the fact that we performed only one calibration session at the start of each experimental session had a detrimental effect on eye tracking accuracy. We measured the average distance between the gaze position and the closest word shown on screen, for every trial. Similarly to our other tests, we utilised a generalised linear model to assess the effect of the trial number on the distance of gaze from the closest word.

Study 1

We assessed the effect of high motivational intensity in opposing directions (i.e. approach and withdrawal) against low motivational intensity, using three reward expectancy cues (called Approach, Withdrawal and Neutral, respectively). Moreover, given that reward outcomes manipulated at three levels have not previously been shown to greatly affect memory [20], in our studies we considered five reward levels. These were Neutral, Gain, Loss, Nongain and Nonloss. We employed five different visual search paradigms; these were called Scatter, Clusters, Clusters + outlier, List and Colours, and are described in more detail in the "Word presentation" section. This first study can be described by a 3 (Cue) × 5 (Layout) × 3 (Feedback) fractional factorial design, in which the Cue and Layout factors were fully crossed (reward outcomes were not fully crossed, as money could only be gained after reward cues and could only be lost after punishment expectancy cues). As previously indicated in the "Studies" section, this design may resemble 'Experiment 1' described in [20], but presents a number of important differences. Firstly, our design considers motivational direction (approach versus withdrawal). Secondly, we included five (instead of two) visual presentation layouts. Thirdly, we considered five (instead of two) possible reward outcomes. Fourthly, the implementation of our flanker task (used to manipulate motivational intensity) was more complex, as described in the "Flanker task" section. Fifthly, we included EEG, EDA, ECG and eye tracking in our experimental design. Sixthly, participants were explicitly instructed to perform a search. These important differences allowed us to investigate interactions between motivational intensity, affect, visual search and physiology.

Method

This experiment was composed of 180 trials (plus an additional 10 practice trials), split into four blocks by three breaks, one every 45 trials. Each break lasted as long as the participant desired. As previously stated in the "Study design" section, each trial comprised six phases: 1) Category assignment 2) Expectancy cue 3) Word presentation 4) Flanker 5) Reward feedback 6) Word recognition. This structure, along with the duration of each item and the interstimulus / intertrial intervals, is depicted in Fig 1. Each phase is described in more detail in the following subsections.

Category assignment. Each trial began with a category assignment screen, asking the participant to memorise words belonging to a specific category (e.g. "look for words belonging to the buildings category"). This category is called target in this paper, while any words that do not belong to this category are called irrelevant. These rules applied only to the current trial (participants were instructed about this during practice). Given that each cue / category combination was repeated 12 times, categories were paired so that every category could be selected exactly once as a target (and once as irrelevant) for every repetition. This procedure generated the 180 category assignments that were used in the 180 trials that composed the experiment (3 Cue × 5 Layout × 12 repetitions). The final trial ordering was shuffled, with the condition that every subsequent trial had to employ both a different target and a different irrelevant category (so that words belonging to the same category could not be displayed twice in adjacent trials), as sketched below.
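A minimal sketch of how such a balanced, fully crossed trial list can be generated and shuffled under this adjacency constraint is shown below. This is our illustration (including the rejection-sampling loop), not the authors' generation code; the matrix categoryPair is assumed to hold the [target, irrelevant] category indices assigned to each trial.

% Fully cross 3 cues x 5 layouts x 12 repetitions = 180 trials.
[cue, layout, rep] = ndgrid(1:3, 1:5, 1:12);
design  = [cue(:), layout(:), rep(:)];
nTrials = size(design, 1);

% categoryPair(t, :) = [target, irrelevant] category indices for trial t.
ok = false;
while ~ok                                    % rejection sampling
    order = randperm(nTrials);               % candidate trial ordering
    pairs = categoryPair(order, :);
    a = pairs(1:end-1, :);                   % categories of trial t
    b = pairs(2:end, :);                     % categories of trial t + 1
    % Accept only if adjacent trials share no category, in any role.
    ok = all(all(a ~= b & a ~= fliplr(b), 2));
end
design = design(order, :);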
In total, there were 12 possible categories (translated from Finnish, the original language of the experiment): buildings, vehicles, clothes, furniture, landscapes, chemical elements, fruits, body parts, animals, musical instruments, spices, stationery. Each category contained 10 words, all singular nouns of similar length.

Expectancy cues. Expectancy cues were presented as light grey (75% white) filled shapes contained within a square covering a visual angle of 6.68° (corresponding to a side of 70 mm or 100 pixels). The three cues were a triangle, a square and a circle. Each symbol indicated to the participant that, in the current trial, a reward might be given if the response to the flanker was faster than the average response of all participants (Approach), that money might be lost after a slower response (Withdrawal), or that no reward would be given (Neutral). As previously indicated, rewards were eventually balanced and did not depend on participants' reaction times (given some exceptions listed in the "Reward feedback" section). The meaning of each symbol (triangle, square and circle) was counterbalanced across participants. Ten practice trials allowed participants to familiarise themselves with the meaning of each symbol. The three expectancy cues were evenly split across the 180 trials, in randomised order.

Fig 1. Visual depiction of the trial structure described in the "Method" section. In this example, the circle was associated with a possible gain (reward) and a reward was eventually allocated. All trials followed this structure. Above each phase, the variable name of interest is shown in italics. Asterisks mark phases during which Study 1 differed from Studies 2 and 3. https://doi.org/10.1371/journal.pone.0218926.g001

Word presentation. We organised information using either concentrated or scattered arrangements, with or without visual category identification. We call this variable Layout. The five possible layouts (displayed in Fig 2) were:

• Scatter: words were pseudorandomly scattered across the screen (sparse layout without category identification).

• Clusters: words were visually clustered according to their category (concentrated layout with category identification). This was done by drawing an invisible ellipse that encircled approximately half the screen. Words from each category were placed around two antipodal points.

• Clusters + outlier: similar to Clusters, but with an outlier word (concentrated layout with category identification). One word, called the outlier, was swapped with another so that a target word appeared in the irrelevant cluster. This condition was introduced to examine the effect of motivational intensity on attention for peripheral details (in this case, the outlier word).

• Colours: words were pseudorandomly scattered across the screen, colour-coded according to their category (sparse layout with category identification). The two possible colours (assigned at random) were dark blue and dark green (their RGB 0-255 codes were, respectively, 31,120,180 and 51,160,44). These colours were the two darkest colours generated by ColorBrewer 2.0 when requesting a colour set for four paired, colourblind-safe, qualitatively separated data classes [73]. This condition was used as an alternative method to encode category affiliation, as opposed to the previously described visual clusters. Colour has been shown to greatly influence the efficacy of visual search when compared to other ways of altering the graphical appearance, e.g. luminance or chroma [74].
• List: words were aligned so that they were horizontally centred and equally distributed vertically (concentrated layout without category identification). Positions were arranged in ascending alphabetical order. This condition was included as ordered lists are commonly used in visual search tasks.

Care was taken to avoid particularly salient words. In all conditions (except List, in which the words' vertical positions were equally distributed across the screen) words were separated by approximately 8.6° of visual angle (corresponding to 90 mm or 128 pixels).

Flanker task. After word presentation, the flanker stimulus was shown to participants. It was composed of five '<' and '>' characters, with the direction of each selected at random. Participants were told to press the left or right keyboard arrow corresponding to the direction of the middle character as quickly as possible, so that they could be rewarded (after seeing the Approach cue) or not punished (after seeing the Withdrawal cue). The flanker was displayed at the centre of the screen. To verify that our motivational manipulations were effective, we performed two paired t-tests on a participant-by-participant basis. We tested reaction times to the flanker after Neutral cues against those after Approach and Withdrawal cues. The tests were one-tailed (the alternative hypothesis was that reaction times after Approach and Withdrawal were shorter, i.e. faster, than after Neutral). The null hypothesis was rejected in all cases (80/80), indicating that our manipulations were indeed effective.

Reward feedback. After the flanker, participants received reward feedback. They could either gain 0.40 € (if fast enough, after Approach cues), lose 0.40 € (if slow, after Withdrawal cues) or neither. Rewards were pseudorandomly balanced so that half of the Approach trials would result in money being received and half of the Withdrawal trials would result in money being lost, with the remaining halves receiving no reward. All Neutral trials were followed by Neutral feedback (no money lost nor gained). Some exceptions were put in place: during initial pilot testing, most participants noticed that rewards were being manipulated. For this reason, we made sure to reward very fast responses and punish very slow responses. Similarly, pressing the wrong arrow key always resulted in negative rewards. We defined the reaction time threshold as 1.5 standard deviations from the mean of the current participant's reaction time distribution. We implemented these exceptions so that rewards remained balanced (e.g. allocating a Gain because of a very fast reaction would result in a subsequent Gain trial being replaced with a Nongain), as sketched below. This ensured that very few participants noticed the reward manipulation (two across all studies). At the end of the experiment, participants were asked if they had noticed anything "strange" about reward allocation. Participants who responded yes were discarded. We also intended to reject any participant who failed to reach 90% accuracy in flanker responses, but in practice no participant fell below this threshold.
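The exception logic can be sketched roughly as follows. This is a speculative reconstruction from the description above (the schedule representation and the rebalancing rule are our assumptions), not the experiment code.

% schedule: cell array of preplanned outcomes for the remaining Approach
% trials, e.g. {'Gain','Nongain',...}; rts: the participant's flanker
% reaction times so far; rt: the current reaction time; wrongKey: logical.
mu = mean(rts); sd = std(rts);
outcome = schedule{1};                        % preplanned, balanced outcome
if wrongKey || rt > mu + 1.5 * sd
    outcome = 'Nongain';                      % error / very slow: no reward
elseif rt < mu - 1.5 * sd
    outcome = 'Gain';                         % very fast: always reward
end
if ~strcmp(outcome, schedule{1})
    % Rebalance: flip the first future trial planned with the outcome we
    % just forced, so that overall Gain/Nongain counts stay equal.
    k = find(strcmp(schedule(2:end), outcome), 1);
    if ~isempty(k)
        schedule{k + 1} = schedule{1};
    end
end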
In total, the Feedback variable comprised five levels: Neutral (no reward when no reward was possible), Gain (positive reward in Approach trials), Nongain (negative reward in Approach trials), Nonloss (positive reward in Withdrawal trials) and Loss (negative reward in Withdrawal trials). This ensured that rewards were coded according to expectations, rather than outcome alone, as suggested by [75].

Word recognition. At the end of each trial, participants indicated which words they had seen during the preceding word presentation phase. Ten words belonging to the target category (5 shown and 5 not shown) were displayed in a randomised list at the centre of the screen. Participants responded "yes" or "no" for each word by using the left and right arrow keys. Participants were allowed to make corrections to their responses by pressing the up and down arrow keys and re-entering a response until the end of the questionnaire phase (which ended 1 second after their last answer). This was the last phase of each trial, and was used to measure performance (i.e. Errors) for the given trial.

Participants. Data were collected from 40 participants (20 female). The number of participants is in line with our previous research employing within-participants designs [44,58,76]. Their age ranged between 19 and 47 years (M: 25.70, SD: 6.27). An additional participant (whose data were discarded) noticed that rewards were not being awarded fairly.
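Scoring the recognition phase described above reduces to counting misses and false alarms; a minimal sketch (the logical vector names are ours) could look as follows:

% shown, respondedYes: logical vectors over the 10 probe words
% (5 actually shown, 5 foils) for one trial.
misses      = sum(shown & ~respondedYes);   % shown, not reported as seen
falseAlarms = sum(~shown & respondedYes);   % not shown, reported as seen
errors      = misses + falseAlarms;         % the per-trial Errors measure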
Hypotheses

Hypothesis 1: Motivational intensity modulates cognitive scope and interacts with search paradigm. We expected to find a main effect of Cue on Errors. This factor (Cue) had three levels, specified as nominal in our linear mixed model: Neutral, Approach and Withdrawal. We expected Approach and Withdrawal cues to decrease performance. Layout was coded as a covariate in this analysis, since category identification was easier under the Colours, Clusters and Clusters + outlier layouts. We also expected to find fewer Outlier hits in the Clusters + outlier condition after Approach and Withdrawal motivational cues, when compared to Neutral. That is, a relatively broad cognitive scope (associated with the Neutral cue) was expected to facilitate attentional and mnemonic allocation to the outlier word. All three eye tracking variables were also expected to correlate with motivational cues. Radius was expected to be narrower after Approach and Withdrawal cues, when compared to Neutral. Gaze duration was expected to increase when motivational intensity was high (as more attention is paid to the "trees" rather than the "forest"). We also expected more words to be skipped after high-motivation cues. We also expected to find an interaction between Cue and Layout on Errors: a subset of visual arrangements may facilitate performance when they match the participants' motivational state. Concentrated information might be less attended to when motivational intensity is low. The Layout factor was specified as nominal, and comprised five levels (one for each visual presentation layout). We applied Bonferroni correction for 5 comparisons, corresponding to the total number of tests we performed.

Hypothesis 2: Physiology is influenced by motivational intensity and predicts errors. 2a. We predicted a main effect of Cue on FAA (asymmetry), IBI and ISCR. More specifically, we expected Approach and Withdrawal to increase activation (decrease IBI and increase ISCR). Affects high in motivational intensity were expected to raise left-hemispheric activity, increasing the FAA variable. 2b. We also investigated how physiology predicts performance. We tested the effect of FAA, IBI, ISCR and Radius on Errors. We assumed that since motivational intensity affects both performance and physiology, physiology should then predict performance. Because of the exploratory nature of this test, we Bonferroni-corrected all obtained p-values for 12 comparisons (4 variables × 3 studies).

Hypothesis 3: Affect modulates memory and interacts with search paradigm. We expected to observe a main effect of Feedback on Errors. The Feedback factor comprised five levels: Neutral, Gain, Nongain, Loss, Nonloss. Negative rewards (Nongain, Loss) were expected to increase the number of Errors, while positive rewards (Gain, Nonloss) were expected to decrease it. We also expected the magnitude of the effect to be greater for the most positive and most negative outcomes (e.g. Loss should increase errors more considerably than Nongain, see [75]). A Feedback × Layout interaction on Errors was also predicted: negative rewards may reduce performance more evidently when information is displayed in sparse arrangements (e.g. random scatters), whereas positive rewards may facilitate memory when utilising sparse visualisations.

Results

All results obtained from Study 1 related to each hypothesis are listed in this section. Results from all studies will be summarised in the final section of the paper (Tables 1 to 3). Hypothesis 1: Motivational intensity modulates cognitive scope and interacts with search paradigm. Eye tracking data showed that the mean distance of the eye from the centre of the screen, in pixels (Radius), was larger after the Neutral cue than after the high-motivational-intensity cues. We found no significant effect of Cue on whether the outlier word within the Clusters + outlier condition was remembered (p = 0.110). It is also worth noting that we found no significant difference in overall performance between Colours and Clusters, and no significant interaction between the Cue and Layout factors on Errors. All tests were Bonferroni corrected for 5 comparisons, which corresponds to the total number of tests performed to assess this hypothesis.

Hypothesis 2: Physiology is influenced by motivational intensity and predicts errors. 2a. ISCR and IBI were strongly correlated with participants' motivational states as elicited by the cues we employed. ISCR was higher after both the Approach and Withdrawal cues, suggesting increased sympathetic nervous system activation. 2b. The exploratory prediction of errors using all 4 physiological variables (Bonferroni corrected for 12 comparisons, as described in the "Hypotheses" section in the introduction) was only significant for Radius. The highest half of Radius (representing broader attention) was associated with fewer errors (M: 3.27, SD: 1.46) than the lowest half (M: 3.72, SD: 1.76), t(5821) = -11.46, p < 0.001, r equivalent = .29, N = 33.

Discussion. The first observation of this study is that motivational intensity, in both directions, narrowed visual attentional scope during the word search task. The effect of Withdrawal was stronger than that of Approach. This observation is supported by the eye tracking data, which indicate that visual attentional scope narrowed as motivation increased: more attention was allocated to the central areas of the screen, and gaze duration increased along with the number of words not read per trial. Hence, we attribute the increased number of errors after the induction of high motivational intensity to a visual narrowing of attention. No Cue × Layout interaction was found on Errors, contrary to our prediction.
However, a two-way interaction involving the List and Scatter layouts was found on Radius, suggesting that the effect of motivational intensity on visual search varies depending on whether a list or a scatter is being displayed. We interpret low IBI and high ISCR (measured to assess Hypothesis 2) as increases in activation (arousal). Given that arousal is expected to be particularly heightened by monetary rewards [80], this effect was not surprising. Contrary to our hypothesis, physiology (apart from eye tracking) did not directly predict errors. Nevertheless, the lack of a direct correlation between the activation-related physiological variables and Errors suggests that reduced visual cognitive scope was mostly responsible for the decreased performance. In other words, while arousal was elicited in expectation of the rewards, it did not directly influence performance or visual search strategy. The strong correlation between physiological variables and motivational intensity found in our experiment indicates that it would be possible to infer arousal via ECG and EDA while visual searches are being performed. However, care must be taken in performing such reverse inferences; while motivational intensity can induce arousal, the reverse is not necessarily true. Given that Radius did predict errors, combining arousal with eye tracking data into a single predictor could be more informative, as all these metrics showed robust correlations with motivational intensity. We attempted to further investigate these findings in the following two studies. The EEG asymmetry analysis did not yield significant results, preventing the prediction of motivational intensity from the FAA signal. This might be due to the difficulty of the task, as alpha activity is generally inversely correlated with brain activity, especially under high memory load [81]. This suggests that more computationally intensive analyses would be required to reveal correlations between EEG and motivational intensity during visual word searches. We also observed an interaction between Layout and Feedback. Relatively more errors were committed after losses under the List and Colours conditions. This could be interpreted as a suppression of associative memory related to the contextual information presented within the Colours and List layouts, induced by negative affect (Loss). Similarly, relatively fewer errors were committed in the Colours and Clusters + outlier conditions after positive rewards (Gain and Nonloss). This suggests that both positive and negative rewards affected memory, although the effect of negative rewards was more pronounced. These findings support dual-processing models in which negative affect (or stress) impairs hippocampal-dependent associative memory [78,79]. That is, it has previously been indicated that negative affect at retrieval can impair associative memory without reducing memory for the task-relevant items themselves [77,82]. This is particularly relevant for our context: in our case, contextual information is the colour of a word or its position within a list, which is not strictly task-relevant information. Consistent with this interpretation, our effect was also observed at retrieval, since we presented reward feedback after the word search phase (which corresponded to the encoding phase). In conclusion, Study 1 indicated that motivational intensity modulates attentional scope by altering visual search strategies.
Among the five layouts we employed, we observed interactions between two of them: the Scatter and List conditions (on Radius and on Feedback-induced errors). This suggests that these two styles of visualisation were differently affected by our manipulations. These two styles of visualisation are heavily used in computer systems, as mentioned in the introduction. Lists are ubiquitous, while scatters (such as tag clouds) have seen a rise in popularity in recent years [83]. Scattered information is also used in specialised visualisations for which lists would be ineffective (such as flight and naval control systems or star charts). Moreover, some systems display information using both a list and a scatter [29]. This led us to the development of the two further studies presented in the following sections, which focused on assessing whether motivational intensity and affect modulate visual search and physiology independently of visual presentation layout, and whether their effects are more pronounced when employing lists or scatters alone.

Study 2

The previous study indicated that visual search and physiology are affected by motivational intensity, while affect (especially negative) influences memory at retrieval. It also indicated that visual search strategy, as measured by eye tracking variables, may not be equally altered by motivational intensity depending on whether information is presented within lists or scatters. In this study, we investigated how motivational intensity affects visual search performed on lists only, while the following study focused on scatters. Given that the visual presentation layout was now fixed, we instead manipulated the search paradigm by altering the number of target categories. Participants were asked to identify words belonging to a number of target categories that varied from 1 to 3. We call this variable Diversity (information diversity). The manipulation of Diversity is of interest to us since motivational intensity has previously been linked to conceptual scope [84]. Note, however, that this research has recently been involved in a controversy [85], underscoring the importance of its replication. Moreover, working memory requirements have been shown to adversely affect visual search performance [86], suggesting that altering Diversity may differentially affect visual search strategy. In addition, working memory requirements may differentially affect lists or scatters. Manipulating Diversity has the added benefit of allowing the assessment of linear effects in our hypothesis tests; that is, Diversity was treated as a numerical variable in our models in Studies 2 and 3, whereas the Layout variable in Study 1 was nominal.

Method

The general trial structure was unchanged from Study 1. Three trial phases differed from Study 1: category assignment, word presentation and flanker. The three phases discussed below are marked with asterisks in Fig 3, which depicts this study's trial structure.

Category assignment. During the category assignment phase, participants were instructed to identify words belonging to one, two or three target categories. The number of target categories was pseudorandomly selected and balanced across trials, within participants. The resulting design was a 3 × 3 × 3 (Cue × Diversity × Feedback) fractional factorial design, in which the Cue and Diversity factors were fully crossed. As in Study 1, we repeated each combination of the two fully crossed factors (Cue and Diversity) 12 times, resulting in 108 trials (3 × 3 × 12).
The lower number of trials was compensated for by a longer trial length: words were presented for 5 s instead of 3 s. We raised the number of categories from 12 to 14 for this study, by splitting the "animals" category into "birds" and "mammals" and by adding the "building tools" category ("työkalut" in Finnish). Since multiple categories could be displayed in a single trial, the only constraint in category selection was that the same target category could not be used twice in adjacent trials (in Study 1, the same category could not be used as either target or irrelevant in two adjacent trials).

Word presentation. Words were only presented in a list, uniformly distributed over the vertical axis of the monitor and centred horizontally. Word presentation duration was increased from 3 s to 5 s. This duration was determined after pilot testing, during which we aimed at an average recognition accuracy of 75% per participant. We presented 12 words instead of 10, to allow at most 6 categories in total to be displayed, with 2 words for each. An additional difference from Study 1 is that words were presented in random order, rather than alphabetical, to ease comparisons with Study 3.

Flanker task. In Study 1 we always presented the flanker at the centre of the screen, as done in previous experiments (e.g. [20]). A possible confound in this approach is that high motivation might provoke an anticipatory narrowing of attention towards the centre of the screen due to the flanker's location. To reduce this possibility, in Studies 2 and 3 we presented the flanker in a randomly selected quarter of the screen. As in Study 1, we verified that participants were motivated by our manipulations by performing a one-tailed t-test on reaction times to flanker stimuli after the Neutral cue against the Withdrawal and Approach cues. Once again, the null hypothesis was rejected in all cases for both Studies 2 and 3.
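The manipulation check just described could be run along the following lines. This is a hedged sketch with synthetic reaction times, not the authors' analysis script, and it assumes per-participant mean RTs are compared pairwise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-participant mean flanker RTs in seconds; real values
# would come from the experiment logs.
rt_neutral = rng.normal(0.45, 0.05, size=19)
rt_approach = rt_neutral - rng.normal(0.02, 0.01, size=19)

# One-tailed paired test: if the reward cues are motivating, RTs after
# Approach (or Withdrawal) cues should differ from Neutral in the
# predicted direction (here: faster).
t, p = stats.ttest_rel(rt_neutral, rt_approach, alternative="greater")
print(f"t = {t:.2f}, one-tailed p = {p:.4f}")
```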
Hypotheses. The high-level interpretation of our hypotheses was kept unchanged from Study 1. In Studies 2 and 3, visual search paradigms were manipulated using an independent variable called Diversity (replacing Layout). Diversity was coded as a numeric variable.

[Fig 3. Trial structure of Studies 2 and 3. The structure was similar to Study 1; asterisks mark the three phases in which the design differed from the first study: 1) multiple categories were assigned, 2) 12 words were presented, for a longer time, always using the same layout, and 3) the flanker was displayed in a randomly selected quarter of the screen. This design was nearly identical to that employed in Study 3, which differed only in the arrangement of keywords. Above each phase, variable names of interest are indicated in italics. https://doi.org/10.1371/journal.pone.0218926.g003]

• Hypothesis 1: motivational intensity modulates cognitive scope and interacts with search paradigm. We expected a main effect of Cue on Errors and on the eye tracking variables (Radius, Gaze duration and Words skipped). Diversity was a covariate in these tests, since a low number of categories can be more easily parsed and remembered. We also expected that a larger (vs. smaller) number of target categories would be more easily parsed and encoded into memory, especially when the current attentional scope is broad (vs. narrow). Hence, we expected narrow-scope-inducing cues (Approach and Withdrawal) to further increase the number of errors committed as Diversity increases, resulting in an interaction.

• Hypothesis 2: physiology is influenced by motivational intensity and predicts errors. This hypothesis was tested identically to Study 1. We assessed the effect of Cue on FAA, IBI and ISCR (Hypothesis 2a). We also explored the possibility of predicting Errors from FAA, Radius, ISCR and IBI (Hypothesis 2b).

• Hypothesis 3: affect modulates memory and interacts with search paradigm. Similarly to Study 1, we expected a main effect of Feedback on Errors and an interaction between Feedback and Diversity on Errors.

Participants. Twenty people (8 female) participated in the experiment. Their age ranged between 19 and 34 years (M: 26, SD: 4.69). The number of participants was decided after verifying that the results obtained in Study 1 did not change when the number of participants was halved. Data from one participant (a 25-year-old male) were discarded, as the participant noticed that rewards were not being awarded fairly.

Results

Study 2 results are displayed below, ordered by hypothesis. Results from all studies are summarised in the final section of the paper and in Tables 1 to 3. Cues significantly interacted with Diversity. Participants committed a higher number of errors as Diversity increased after Neutral cues. This increase was due to the higher difficulty of remembering more categories. However, the increase in errors was less pronounced after Withdrawal cues. In detail, the lowest number of errors was found in the pair of

Discussion

We found a significant interaction between search paradigm and motivation (Cue and Diversity). However, the interaction we found was the reverse of what we expected. While we predicted that the narrowing of attention caused by high motivational intensity would further increase the number of errors, the number of errors instead increased more steeply after Neutral cues, especially when compared to Withdrawal cues (and marginally compared to Approach cues). The effect of motivational intensity on gaze variables was more limited than in Study 1: only Radius was significantly affected by motivational intensity. Moreover, the main effect test of Cue on ISCR was not significant; it appeared to be suppressed by Diversity instead, suggesting that participants' arousal increased in order to provide more resources when dealing with a more complicated task. We believe these effects are due to the visual search strategy commonly employed when scanning lists. When words are presented in this format, most (if not all) participants would perform the search task by reading the list from top to bottom, word by word, until the allocated time expires. This would not be surprising: this approach is commonly employed in left-to-right languages such as Finnish. The presence of this strategy is supported by visual inspection of the gaze vectors. We computed gaze vectors for each participant by averaging the order in which words were scanned (Fig 4). We argue that this relatively stable strategy reduces the impact of motivational intensity manipulations on visual attention. Following this premise, it is possible that the increased cognitive control induced by high motivational intensity [17], unable to subconsciously alter visual search strategy, instead facilitated memorisation. In fact, high motivation has previously been shown to increase working memory performance for information presented sequentially [87], a phenomenon which supports this explanation.
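The gaze-vector computation mentioned above (averaging the order in which words were scanned) could look like the following sketch; the data layout and names are our assumptions, not the authors' code.

```python
import numpy as np

def average_gaze_vector(trial_ranks):
    """trial_ranks: list of 1-D arrays, one per trial, holding the
    fixation rank (0 = fixated first) of each of the 12 word slots.
    Returns the mean fixation rank per slot, i.e. one gaze vector."""
    return np.vstack(trial_ranks).mean(axis=0)

rng = np.random.default_rng(0)
# Synthetic example: `permutation` lists word slots in the order they
# were fixated; argsort turns that order into a rank per slot.
ranks = [np.argsort(rng.permutation(12)) for _ in range(2)]
print(average_gaze_vector(ranks))
```

A participant reading a list strictly top to bottom would produce a vector close to (0, 1, ..., 11) for every cue, which is the kind of stability visible for Study 2 in Fig 4.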
A static visual search strategy could also explain the Diversity × Feedback interaction. The most negative feedback (Loss) interacted with the number of target categories, having a larger impact when Diversity was low. This effect has two possible, non-mutually exclusive interpretations. Firstly, it could be interpreted as a decrease in associative memory caused by losses (as in Study 1). As more words belonging to a single category are memorised sequentially, more associations are built into memory; these associations, however, can be disrupted by negative affect [77]. The second possibility is that the encoding phase was strengthened by increased resource allocation [17], as induced by the preceding motivational and Diversity cues. This would then counteract the disruption of associative memory caused by negative affect.

Study 3

The only difference in design from Study 2 was the arrangement of keywords: in this study, we presented only pseudorandomly scattered words (corresponding to the Scatter layout in Study 1 and shown in Fig 5). In all other respects, this study was carried out using exactly the same parameters as Study 2.

Results

This section lists all results from Study 3, sorted by hypothesis. Results from all studies are summarised in the final section of the paper (Tables 1 to 3). Eye tracking data were consistently affected by motivational cues (especially in contrast to Study 2). Radius (the mean distance of gaze from the centre of the screen) was narrower after both Withdrawal and Approach cues.

[Fig 4. Average gaze vectors for various participants, for each cue. Different shades of grey represent different participants. Vectors were obtained by averaging the order in which words were gazed at in each trial. Direction and vertical spread for each participant appear to be relatively stable across cues in Study 2, when compared to Study 3; in the latter case, clear, seemingly random differences between different levels of Cue can be observed, suggesting that gaze patterns were less predictable in Study 3.]

Unlike Study 2, no Diversity × Feedback interaction was found (p = 0.227).

Discussion

In this study, we found a clear effect of Cue on visual attention, unlike in Study 2. Among the eye tracking variables, Radius and Words skipped were significantly affected by Cue. The effect on Errors was larger (in terms of effect size) for Withdrawal and also significant for Approach. This suggests that the overall effect of motivational intensity on visual attentional scope is larger when words are presented in a scatter. Cue affected both IBI and ISCR, unlike in Study 2. Withdrawal significantly decreased IBI, while Approach (and marginally Withdrawal) was correlated with increased ISCR. The effect was less pronounced than that found in Study 1 (perhaps due to the lower number of participants). Nevertheless, this supports our hypothesis that high motivational intensity increases arousal during visual search, specifically by reducing IBI and increasing ISCR. The comparison of gaze patterns between Study 2 and Study 3 (Fig 4), along with the effects previously discussed, leads us to the interpretation that scatters allow for greater flexibility in visual search strategy. This increased freedom gave way to a greater suppression (narrowing) of visual attention, induced by high motivational intensity. No interaction of Feedback and Diversity on Errors was found, unlike in the other two studies. The absence of contextual information (which arises from sequential list parsing) in the visual arrangement of words in this study could explain why this was the case. A main effect of negative rewards, however, was present: Loss and Nongain both increased Errors. As predicted, the effect of Loss was greater than that of Nongain.
Results summary

Results related to each hypothesis, as well as the additional analyses, from all three studies are aggregated in this section.

Hypothesis 1: Motivational intensity modulates cognitive scope and interacts with search paradigm

Effects on errors committed after reward expectation cues are summarised in Table 1. In general, high motivational intensity was correlated with an increased number of errors. The effect of Withdrawal was more pronounced than that of Approach (which was not significant in Study 2). A significant interaction was found in Study 2. Gaze patterns were significantly correlated with both Withdrawal and Approach according to all three metrics (Radius, Gaze duration, Words skipped) in Study 1. The outlier keyword in the Clusters + outlier condition was remembered and gazed upon less often after Withdrawal cues. In Study 2, a significant effect was found only for Radius. In Study 3, both Words skipped and Radius were significant. In summary, high motivational intensity seemed to induce a narrower visual attentional scope, as suggested by the number of errors committed and by the gaze patterns. This decrease was present but less pronounced in Study 2, for which we found only a main effect of Withdrawal and a Cue × Diversity interaction.

Hypothesis 2: Physiology is influenced by motivational intensity and predicts errors

2a. Correlations between all physiological variables and motivational cues are listed in Table 2. Activation (as measured by IBI and ISCR) was generally increased by high motivational intensity. Once again, Study 2 presented different results than the other two studies: no effect of Cue on ISCR was found; it appeared to be suppressed by the effect of Diversity instead.

2b. Error prediction results are listed in Table 3. Radius appeared to be a good predictor of the number of errors in all studies except Study 2. In Studies 1 and 3, a reduced Radius was correlated with more errors, indicating that reduced visual scope decreased overall attention. Unexpectedly, in Study 3 increased arousal (shorter IBI) was correlated with fewer errors.

Hypothesis 3: Affect modulates memory and interacts with search paradigm

Effects for all reward levels are displayed in Table 4. Negative rewards (Nongain and Loss) were associated with increased errors in all studies, Nongain being correlated with an increased number of errors in Study 1 and Loss in Studies 2 and 3. Nongain did not appear to significantly increase Errors in Study 2. Instead, we found a marginally significant but consistent interaction across the three levels of Diversity with Loss: the negative effect of Loss was gradually reduced as the number of target categories increased.

Additional analyses

Results from the analyses introduced in the "Additional analyses" section in the Introduction are reported in this section. These were not part of our original hypothesis set. Instead, they were run to support our conclusions, as suggested by the reviewers of the present article.

Alpha power. The one-tailed, paired t-test comparing baseline alpha power against the middle second of the word search task was significant for 35/40 participants in Study 1, 19/20 participants in Study 2 and 15/20 participants in Study 3. Alpha power was marginally correlated with Diversity in Study 2 (p = .06). This analysis did not yield significant results in Study 3 (p > .50).
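The alpha-power comparison can be sketched as follows. This is not the authors' pipeline: the sampling rate, Welch parameters, and synthetic data are our assumptions; only the overall recipe (alpha band power during baseline vs. the middle second of the search, one-tailed paired t-test) follows the text.

```python
import numpy as np
from scipy.signal import welch
from scipy import stats

FS = 500  # assumed EEG sampling rate in Hz

def alpha_power(segment, fs=FS, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band for one segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

rng = np.random.default_rng(2)
# Synthetic one-second segments standing in for real per-participant EEG.
baseline = [alpha_power(rng.normal(size=FS)) for _ in range(40)]
search = [alpha_power(0.8 * rng.normal(size=FS)) for _ in range(40)]

# One-tailed paired test: alpha power should be lower during the search
# (desynchronisation under load) than during baseline.
t, p = stats.ttest_rel(baseline, search, alternative="greater")
```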
These results indicate that alpha band desynchronisation took place during the word search task phase, when compared to baseline. This result is consistent with previous work indicating that reduced alpha band power is correlated with increased mnemonic and cognitive load [70-72]. The lack of significant results for the effect of Diversity on alpha band power in Studies 2 and 3 indicates that the difference in task difficulty caused by varying the number of categories was not great enough to be reflected in a detectable increase in alpha band power. That is, the number of words to be remembered did not vary (although the number of categories did), and this semantic difference was not sufficient to elicit a significant effect. It is also worth noting that the eyes were moving greatly during the word search task (especially in Study 3). Eye movement interference can explain the lack of significant results, particularly regarding Study 3, which displayed words scattered across the screen. Nevertheless, parietal alpha power was higher during baseline when compared to the word search task. This could explain the lack of significant findings for the EEG asymmetry analysis carried out to test Hypothesis 2.

Spill-over effect. We found a spill-over effect in Study 1: more errors were committed in the trials following Nongain (M: 3.53, SD: 1.77; t(7174) = 2.28, p = 0.045, r equivalent = .24, N = 40) and Loss (M: 3.51, SD: 1.67; t(7174) = 2.01, p = 0.004, r equivalent = .24, N = 40) than in trials preceded by Neutral (M: 3.30, SD: 1.55). No effects were found in Study 2 (p = .41) or Study 3 (p > .5). Given that our trials were randomised, it is unlikely that this spill-over effect confounded our main results. However, it is worth noting its presence, and it is particularly interesting that only negative reward feedback cues elicited an effect. This is consistent with the Hypothesis 3 results: negative rewards appeared to have a frequent detrimental effect on memory, while the effect of positive rewards (in the opposite direction) was rarer and weaker.
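To make the spill-over contrast above concrete, here is a minimal sketch using hypothetical column names and an unpaired trial-level test for simplicity; the paper's exact test statistic may have been computed differently.

```python
import numpy as np
import pandas as pd
from scipy import stats

def spillover_contrast(df):
    """Compare errors in trials preceded by Loss feedback with trials
    preceded by Neutral feedback; df needs 'feedback' and 'errors'
    columns in trial order (hypothetical layout)."""
    prev = df["feedback"].shift(1)          # feedback of preceding trial
    after_loss = df.loc[prev == "Loss", "errors"]
    after_neutral = df.loc[prev == "Neutral", "errors"]
    return stats.ttest_ind(after_loss, after_neutral, alternative="greater")

rng = np.random.default_rng(3)
demo = pd.DataFrame({
    "feedback": rng.choice(["Gain", "Nonloss", "Nongain", "Loss", "Neutral"], 500),
    "errors": rng.poisson(3.4, 500),
})
print(spillover_contrast(demo))
```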
Eye tracking calibration. We found no significant effect of trial number on the average distance between the closest word and gaze position in any study (Study 1 p = .53, Study 2 p = .17, Study 3 p = .46). This indicates that calibration accuracy was relatively stable, given our experimental duration (approximately 1 hour and 30 minutes for Study 1 and 1 hour and 50 minutes for Studies 2 and 3).

Overall discussion

Across the three studies, main effects of motivational intensity on gaze and performance were found in Study 1 and Study 3. In Study 1, an interaction on Radius (mean distance of gaze from centre) was found between List and Scatter. In Study 2, which employed a list, the main effect of motivational intensity was slightly reduced (being significant only for Withdrawal), while an interaction was found. Study 1 alone indicated that there was an effect of motivational intensity on visual search, which may differ depending on whether scatters or lists are displayed. Studies 2 and 3 indicated that the effect of motivational intensity on visual search patterns is more pronounced when scattered information is presented. We interpret these results as an alteration of search strategies induced by high motivational intensity, which is more likely to be observed when scanning relatively sparse search spaces. When information is presented in a consistent format, such as the list used in Study 2, this effect is reduced.

The interaction between motivational intensity and information density, found exclusively in Study 2, indicated that high motivational intensity slightly reduced the negative impact on performance normally caused by the memorisation of a large number of categories. A possible explanation for these findings is that extrinsic monetary motivational manipulations acted as stressors, constraining visual attention. However, when information was parsed sequentially, visual attention was less likely to be constrained. Cognitive resource allocation, in the absence of any evident attentional constraint, allowed performance to be less affected by high levels of stress. This indicates that the impact of motivational intensity varies greatly depending on the format in which the search task is organised. This is consistent with previous research indicating that motivational intensity narrows visual cognitive scope [20]. Our observations are also consistent with previous research in which modulation of visual attention did not play a role in performance: it was observed that motivation increases memory by facilitating the maintenance of information when rewards are presented before maintenance [87,88]. Neurologically, this could be explained by increased activity in the lateral prefrontal cortex (LPFC) induced by motivation; it has previously been suggested that stimulating the LPFC can increase working memory capacity [89,90]. Stimulating this area via motivational cues may then favour memory. These effects, in our case, applied only to Approach-directed motivation. It is notable that Withdrawal-directed motivation reduced performance in all cases, including lists. It has been indicated that motivational direction stimulates different areas of the brain, which would explain why Study 2 performance results were so differently affected by opposing motivational directions [91]. This further strengthens the idea that both directions of motivation should be considered when studying human behaviour.

It is also worth noting that in all studies the most positive reward outcome (Gain) was never found to increase performance under main effects tests. Negative rewards (Loss and Nongain) were instead associated with decreased performance in all studies and in the spill-over effect additional analysis. Loss, in particular, strongly reduced performance when participants were under heavier mental load (main effect in Studies 2 and 3). The effect size of losses was greater than that of nongains, as predicted. This supports previous work which indicated that the "default" cognitive scope is broad [20,23]. Under this assumption, gains would not broaden scope. That is, cognitive scope would be mostly unchanged after positive rewards, preventing the observation of an increase in performance.

Implications

Given that the attentional-narrowing effect of motivational intensity on lists was reduced, it appears that using vertical lists would be preferable to scatters, especially when users are expected to be exposed to extrinsic motivational stressors. This implies that UI elements such as tag clouds, especially unsorted ones, may be less effective than vertical lists. Vertical lists have also been indicated as the preferred method of arranging words, as they facilitate visual search [65].
This observation applies especially to scenarios in which spatial location would not provide additional properties relatable to the given word; that is, lists should be preferred to scatters if a given word's position within a two-dimensional space does not convey any relevant information. In any case, it is worth remembering that arranging words across a two-dimensional space is an approach vulnerable to attentional biases.

Human-computer interaction research that takes into consideration the affective state of users and their correlated behaviour, such as physiological computing [32] or affective computing [33], would benefit from the findings presented in this paper, which suggest that eye tracking data collected while scanning lists might not generalise to scattered information, and vice versa. For example, designing a visual search task using lists might mask variability in visual attention across conditions, reducing the efficacy of eye tracking metrics. In more general terms, theoretical research on motivation and affect should benefit from our findings, which indicated that these phenomena induce different behaviours depending on 1) the way information is presented and 2) whether approach- or withdrawal-directed motivation is being elicited. Prediction of errors from physiology indicated that Radius could be used as a relatively accurate predictor of performance when using sparsely organised information (being significant in Studies 1 and 3). A narrow Radius was associated with more errors, supporting our hypothesis that narrowing of attention reduces performance. Such biases could be detected by eye tracking along with IBI, which significantly predicted errors in Study 3.

Limitations

An important limitation of the present study is the use of monetary rewards. We only explored the effect of extrinsic motivation induced by tangible rewards; it is possible that motivation induced by non-tangible and/or intrinsic rewards may have produced different results. However, testing for intrinsic rewards would require a very different experimental design, as this type of reward would be very difficult to include in a trial-by-trial design. Use of a visual flanker task may also have influenced some of our eye tracking variables or Errors, as attention may pre-emptively shift towards the centre of the screen in anticipation of the flanker. We attempted to limit this confound by using a blank of variable duration after the word search phase and by altering the position of the flanker in Studies 2 and 3. Moreover, previous experiments on motivational intensity and visual attention reported similar results regardless of whether visual or less perceptual flanker tasks (such as linguistic tasks) were employed [92]. While there may be overall differences between the types of flanker tasks utilised, these previous results suggest that the visual flanker we employed was a reasonable choice. Moreover, potential effects induced by the flanker cannot account for all the differences observed across conditions (such as gaze duration on individual words, or ISCR).

In Study 1 we employed only one visual layout that included an "outlier" (Clusters + outlier). Ideally, it would have been of interest to include a Colours + outlier condition, in which an irrelevant word would be coloured as a target. This would have helped in further investigating whether the attentional narrowing of scope would also apply to sparse arrangements.
However, our search tasks were quite demanding and the duration of Study 1 experimental sessions often exceeded 90 minutes (excluding set-up). Including an additional layout condition would have strained participants, considerably reducing the feasibility of the study. It is possible that the Diversity variable in Studies 2 and 3 affected physiological arousal, as arousal could have been heightened in trials that asked participants to memorise a high number of categories. Conditions were, however, balanced within participants; all levels of Diversity were tested against all levels of Cue.

Unfortunately, EEG asymmetry tests were non-significant in all three studies. This could be attributed to the difficulty of the search task (as mentioned in our methodology, we calibrated trial length to obtain an average accuracy of 75%). As indicated by our additional analysis, alpha waves were suppressed (desynchronised) under high memory load when compared to baseline. It has also been previously suggested that asymmetry might only be observed during affective (emotional) manipulations of motivational intensity [91], although it has been observed under non-emotional paradigms as well [44]. Another possible reason for the null result might be the short duration of the segments available for our analysis, given that the sensitivity of time-frequency transforms is normally higher when using longer time segments [63]. It has also been suggested that the lower alpha band may be more relevant for motivational manipulations [49]. We attempted separate post-hoc tests on the lower and higher alpha bands (8-10 Hz and 10-12 Hz), but these also yielded non-significant results. It is also worth noting that our studies were aimed at identifying the most suitable physiological signal for motivational intensity estimation. From this point of view, our finding that Radius is more suitable than FAA for real-time estimation using a low number of sensors is valuable. Pupillometry has previously been used to assess the effect of motivational intensity on cognitive control [93]. Pupillometry has also been used in recent visualisation studies to investigate the effect of confusion [94], and to predict learning curves [95]. Preliminary tests, however, suggested that this technique would not have been applicable to our set-up, as the pupil data we obtained were not sufficiently accurate.

Future work

The strong eye tracking results we reported suggest that eye tracking alone might be suitable for a follow-up study on the effects of motivational intensity on eye movements during visual search. In this paper, we focused on lists and scatters, as they were within the initial layout set of Study 1. However, future studies could focus on additional visual arrangements frequently utilised in computer systems, such as matrices or multiple vertical lists. More eye tracking features that we did not consider could then be utilised (such as the distance between saccades), or features could be engineered depending on the type of visualisation utilised (e.g. dwell time on the left/right portion of the screen, if two vertical lists are presented). Moreover, we often found that the standard deviation of Radius was generally larger when motivational intensity was high. This suggests that there are additional variables that interact with motivational intensity, changing gaze behaviour.
Investigating additional layouts with more eye tracking variables, and potentially different types of flanker tasks (such as auditory ones), could help in identifying which factors are most closely correlated with changes in visual search patterns elicited by motivational intensity, and which gaze-related features are most suitable for motivational state estimation. Interestingly, in Study 3, lower IBI was associated with fewer errors. This may indicate that higher sympathetic nervous system activation was elicited by motivational intensity and increased performance during trials in which participants' visual attention was not concurrently narrowed because of a latent variable (or noise). An increase in heart rate (and hence a decrease in IBI) is associated with stronger mental demands [96], which were relatively higher in Studies 2 and 3. Further research would be needed to disentangle the relationship between withdrawal-directed motivation, IBI and performance during cognitively demanding word search. Regulatory focus theory [97] indicates that approach-oriented individuals might benefit from approach-related motivation. This phenomenon was not tested in our experiment, as our main aim was to assess the general effects of motivational manipulations on visual scope. Exploring trait effects would require a larger set of studies, more balanced in terms of traits.

Future experiments inspired by our design would benefit from the inclusion of recall tests in addition to recognition tests. This is because high motivation induced participants to spend more time reading individual words, as measured by our Gaze duration variable. Recall tests would be useful in investigating whether high motivation increased encoding of individual words at the expense of the recall rate for the remaining words, i.e. whether the "trees", rather than the "forest", were easier to remember. This recall-recognition tradeoff could be used to improve current models of visual search to include the additional dimension of motivational intensity [98].

Conclusion

The presented studies indicated that motivational intensity affects visual search strategies, especially when information is presented in sparse layouts. When information was presented in lists, this effect was reduced, being absent for approach-directed motivation. We concluded that the increased cognitive control induced by high approach-directed motivation increases memory during encoding when visual attention is not constrained, as lists are normally parsed sequentially. When scanning sparse layouts, our findings indicate that eye tracking metrics would be beneficial in detecting attentional biases. Negative affect (Loss) was shown to greatly reduce performance when performing visual word search under heavy mental load.

Supporting information

S1 File. This file contains six data tables, two tables per study, and a "readme". For each study, one table contains trial-level data while the other contains word-level data. These tables were used to compute our statistics, as outlined in the "Statistical analysis" section. The README file contained within the zip file explains the contents of the table files, which are in csv format. (ZIP)
The EDIBLES survey VI. Searching for time variations of interstellar absorption features

Interstellar lines observed toward stellar targets change slowly over long timescales, mainly due to the proper motion of the background target relative to the intervening clouds. On longer timescales, the cloud's slowly changing physical and chemical conditions can also cause variation. We searched for systematic variations in the absorption profiles of the diffuse interstellar bands (DIBs) and interstellar atomic and molecular lines by comparing the high-quality data set from the ESO diffuse interstellar bands extensive exploration survey (EDIBLES) to older archival observations, bridging typical timescales of 10 years with a maximum timescale of 22 years. We found good archival observations for 64 EDIBLES targets. Our analysis focused on 31 DIBs, 7 atomic, and 5 molecular lines. We considered various systematic effects and applied a robust Bayesian test to establish which absorption features could display significant variations. While systematic effects greatly complicate our search, we find evidence for variations in the profiles of the λλ4727 and 5780 DIBs in a few sightlines. Toward HD 167264, we find a new Ca I cloud component that appears and becomes stronger after 2008. The same sightline furthermore displays marginal but systematic changes in the column densities of the atomic lines originating from the leading cloud component in the sightline. Similar variations are seen toward HD 147933. Our high-quality spectroscopic observations and archival data show that it is possible to probe interstellar time variations on timescales of typically a decade. Even though systematic uncertainties, as well as the generally somewhat lower quality of older data, complicate matters, we can conclude that time variations can be made visible, both in atomic lines and DIB profiles, for a few targets, but that generally these features are stable along many lines of sight.

Introduction

Interstellar clouds reveal their presence in stellar spectra through numerous absorption lines that are observed from the UV to the near-infrared (NIR). In the interstellar medium (ISM), the carriers of the vast majority of the more than 500 interstellar DIBs have remained unidentified, with one notable exception to date: C60+ was recently identified as the carrier of several DIBs in the NIR (Campbell et al. 2015; Spieler et al. 2017; Cordiner et al. 2019; Linnartz et al. 2020), confirming an earlier suggestion by Foing & Ehrenfreund (1994). This identification was not a surprise. Indeed, there is a large body of evidence that suggests that the carriers of most of the DIBs are carbonaceous molecules that are abundant and widespread, and that can survive the harsh conditions in the ISM. The most promising carrier candidates are therefore carbon chains, polycyclic aromatic hydrocarbons (PAHs), fullerenes, and their derivatives (Maier et al. 2004; Salama & Ehrenfreund 2014; Omont 2016). They constitute an important part of the organic inventory of the Universe, and thus identifying the DIB carriers is a top priority in the field of astrochemistry (Herbst & van Dishoeck 2009).
To narrow down the number of possible DIB carrier species, observational studies aim to provide constraints on the physical properties and the nature of the DIB carriers. This is often done by comparing the properties of the DIBs to the physical conditions in different sightlines, for instance as inferred from other, known constituents (e.g. H I, H2, and other atoms and molecules). DIBs have been mapped on large (galactic) scales (e.g. Vos et al. 2011; Kos et al. 2014; Zasowski et al. 2015; Bailey et al. 2015; Lan et al. 2015) as well as on more localized scales (van Loon et al. 2009, 2013; Farhang et al. 2015b; Bailey et al. 2016; Elyajouri et al. 2017), and these studies all reveal clear differences in the DIB carrier distributions that can indeed be attributed to variations in local physical conditions on relatively large (parsec) scales. Other studies probed the small-scale structure of the ISM using DIBs by comparing sightlines that are nearly in the same direction and at the same distance. In particular, Cordiner et al. (2013) studied ρ Oph A, B, C, and DE and found changes of five to nine percent in the equivalent widths (EWs) of the λ5780 and λ5797 DIBs between the A and B components, which are physically separated by only ∼344 au (0.002 pc). In some cases, the relative variations in EW were larger in the DIBs than in the atomic lines towards the same targets. Similar results were reported by van Loon et al. (2013) when comparing the Na I line and the 5780 and 5797 Å DIBs in FLAMES spectra towards the Tarantula nebula. On Galactic scales of 0.04 pc, they found that the 5797 DIB profile showed variations of 71 percent, compared to only seven percent for the Na I D lines. On larger scales (20 pc) in the Large Magellanic Cloud (LMC), the same DIB showed variations of 77 percent, compared to 25 percent for the Na I line. The measured variations for the 4428 Å and 5797 Å DIBs are larger than for the 5780 Å DIB. While such analyses are greatly complicated by saturation effects in the strongest Na I components, as well as by continuum uncertainties and stellar line blends (especially for the 4428 DIB), these results may suggest that there is more structure (i.e. inner-cloud column density variation) in the colder gas traced by the 5797 Å DIB carrier than found in the warmer gas probed by Na I or the 5780 Å DIB carrier (Wolfire et al. 2003; Vos et al. 2011; van Loon et al. 2013). If that is indeed the case, some DIBs may be more sensitive than atomic lines to certain variations in the physical conditions inside interstellar clouds.

When comparing the DIB properties in different targets, it is often assumed that the conditions in the sightline are changing so slowly that any variation is imperceptible over timescales of years. However, variability in atomic transitions of neutral (e.g. Ca I, K I, Na I) and singly ionized interstellar species (Ca II) has been detected towards several diffuse ISM sightlines using multi-epoch spectra taken over several years (see e.g. Hobbs et al. 1991; Blades et al. 1997; Price et al. 2000; Welty & Fitzpatrick 2001; Lauroesch & Meyer 2003; Stanimirović et al. 2003; Lauroesch 2007; Smith et al. 2013; Smoker et al. 2003, 2011; McEvoy et al. 2015; and also the reviews by Crawford 2003; Stanimirović & Zweibel 2018). Atomic and molecular variations were also reported toward HD 34078, HD 219188, and HD 73882 (Rollinde et al. 2003; Welty 2007; Galazutdinov et al. 2013). Temporal variations have also been reported as a consequence of supernova evolution (Frail et al. 1994; Milisavljevic et al. 2014).
Perhaps the most relevant to this work is the study by McEvoy et al. (2015), who used spectra with a resolving power R = λ/δλ ∼ 80 000-140 000, with epochs separated by 6-20 yr. They found that of their 104 sightlines (each typically containing several different absorption components), 1% showed evidence for time variation in the Ca I absorption, 2% in Ca II H and K, and 4% in the Na I D lines. No evidence for any variation was found in the Ti II, Fe I, CN, CH+, or K I 4044 Å lines. Perhaps surprisingly, there were no clear differences in the physical conditions between those sightlines that did and those that did not show variations, although the uncertainties are large.

There are two likely explanations for cases in which signals change over time. In the first case, the cloud itself does not change, but the target star in the background has moved significantly (due to proper motion) relative to the cloud, so that the sightline toward the star now effectively probes a different part of the cloud. These are then essentially "small-scale variations", similar to the ones studied for specific clouds by, for instance, Cordiner et al. (2013) and van Loon et al. (2013). In the second case, the sightline remains the same, but the cloud itself exhibits temporal differences in physical conditions (McEvoy et al. 2015). For instance, if the impinging radiation field is variable, this could affect the ionization stages of different species, which could also affect DIBs (Cami et al. 1997; Boissé et al. 2013; Farhang et al. 2015a, 2019). Whatever the true origin of any variations, they could offer insightful constraints on the DIB carriers if we can tie them to specific physical parameters.

In this work, we present the results of a study of temporal variations and stability in atomic and molecular lines and the DIBs. Our aim is to search for time variability or stability in interstellar spectra of targets whose proper motions, of the order of several milli-arcseconds per year, cause them to probe slightly different parts of the intervening interstellar clouds. For example, for a target or a cloud at a distance of 100 pc and with a proper motion of 10 mas yr⁻¹, this equates to probing the ISM on scales of about one au yr⁻¹. In a similar project (Smoker et al. 2023) we searched for time variability in the near-infrared DIBs at 1318 nm towards 16 targets at a spectral resolving power of R = 50 000 with baselines of 6-12 months, and at R = 8000 for two objects with a baseline of 9 yr. We found no variations for the 1318 nm data and only tentative variation in the C60+ lines for one of the latter two objects. Finally, non-detection of time variability in the 5780, 5797, and 6613 Å DIBs towards ζ Oph was recently reported by Cox et al. (2020).

Section 2 describes the data used in this study, which are a combination of EDIBLES and archival spectra. In Sect. 3, we describe the methods to assess spectral variations over time, and the sources of uncertainty involved. We present our results in Sect. 4 and discuss these in Sect. 5.

Observational data and target selection

The main data for this paper are the spectra obtained as part of the ESO Diffuse Interstellar Band Large Exploration Survey (EDIBLES; Cox et al. 2017),
a Large Program that used VLT/UVES to obtain a unique DIB data set: we observed 123 DIB targets to obtain spectra at a high spectral resolving power (R ∼ 72 000 for the blue arm and R ∼ 107 000 for the red arm), with a very high signal-to-noise ratio (S/N ∼ 500-1000 per target), and covering a large spectral range (305-1042 nm). The targets were furthermore chosen to represent very different physical conditions in the sightlines, and our sample thus probes a wide range of interstellar environment parameters, including interstellar reddening E(B − V) (0-1 mag), visual extinction A_V (0-4.5 mag), total-to-selective extinction ratio R_V (2-6), and molecular hydrogen fraction f_H2 (0.0-0.8). The program is described in Cox et al. (2017), and several first results have been published (Lallement et al. 2018; Elyajouri et al. 2018; Bacalla et al. 2019; MacIsaac et al. 2022; see also Cami et al. 2018).

While we observed some EDIBLES targets multiple times (months or years apart), these observations offer only a limited baseline in time to search for variability. We therefore also searched for additional high-resolution spectra of EDIBLES targets previously acquired with FEROS (Kaufer et al. 1999), HARPS (Mayor et al. 2003), UVES (Dekker et al. 2000), ESPaDOnS (Donati 2003), HERMES (Raskin et al. 2011), HDS (Noguchi et al. 2002), and HIRES (Vogt et al. 1994). These instruments offer a spectral resolution within a factor of ∼2 of the EDIBLES spectra, thus offering the best opportunity for a detailed comparison of spectra taken at different epochs. Reduced spectra were retrieved from the ESO, CFHT, and Mercator archives.

The EDIBLES data set contains 123 different targets that represent different conditions in the ISM. From this sample, we removed the targets where DIBs are very weak or absent, or for which we only have observations with some UVES dichroic settings at this point but not all, thus offering only a partial spectrum. Furthermore, we also need good-quality archival spectra for comparison, and we only found archival spectra for 75 of the initial EDIBLES targets. After retrieving the archival data, we carried out a quality assessment. We found that several archival spectra are of too low quality (with S/N ≤ 100, or displaying significant artifacts such as fringes) to be useful for our purposes, and thus we discarded them. After this quality assessment, we were left with 65 targets for which we have complete EDIBLES spectra and good-quality archival spectra. However, one target (HD 93030) has very low reddening and displays no DIBs; we therefore only used this target for the analysis of atomic and molecular lines. Consequently, for the remainder of this paper, we will use 64 EDIBLES targets. For just a few targets, we only have EDIBLES spectra taken a few days or weeks apart; those are unlikely to show any time variation, but we included them as a baseline and as an internal consistency check. Apart from those, the median timescale between the oldest archival observation and the most recent EDIBLES observation is 9 yr, with the longest timescale being as long as 22 yr. All targets and their key properties are listed in the table described in Appendix A.
We note that the spectral coverage is not identical for data from different epochs, and thus not all absorption lines or DIBs may be available for each of these targets. We shifted all spectra to the heliocentric rest frame. Finally, where needed, we smoothed the higher-resolution spectra using a Gaussian kernel to match the resolution of the archival observations (see Sect. 3.1).

Spectral feature selection

Our starting point for selecting DIBs for this study is the catalog produced by Hobbs et al. (2009), listing the DIBs observed in HD 183143. However, we can only expect to reliably establish variations in fairly strong and well-defined spectral features. We thus first excluded DIBs longward of 6800 Å, where telluric contamination is generally quite severe, making it much harder to reliably extract the DIB profiles. For the remaining DIBs, it is clear that the broader a DIB is, the stronger it needs to be for us to be able to establish any such variation confidently. We therefore included:

- narrow, moderately strong DIBs with FWHM ≤ 1 Å and EW ≥ 25 mÅ. From this list, we exclude the λ6699.36 DIB since it falls in a wavelength gap not covered by UVES. This then leaves 16 DIBs in this category.

This list contains a few so-called C2 DIBs (as listed in Hobbs et al. 2008) - DIBs that show a particularly good correlation with the column density of C2, and that are believed to probe cloud interiors (see e.g. Thorburn et al. 2003; Elyajouri et al. 2018) - and we added two more C2 DIBs (at 4726.83 Å and 5512.68 Å) to our sample. During the course of this study, we removed three DIBs from our sample since they are in wavelength ranges that are highly contaminated by telluric absorption lines; given the filler nature of the EDIBLES program, telluric contamination is particularly bad, and we often could not obtain a good telluric correction. We thus chose to remove the λλ5923.62, 6284.28, and 6520.75 DIBs from our study, and are thus left with 31 DIBs. All 31 selected DIBs and their properties are listed in Table 1.

In order to compare any changes in the DIBs to varying physical conditions, we also searched for variations in a number of atomic absorption lines and in transitions originating from diatomic molecules that can act as diagnostic tools (see Table 2). Of particular importance here are the Na I UV lines that we used to determine the interstellar velocity of the different cloud components. We used the strongest absorption component in these lines as a reference to define an interstellar rest frame for each sightline. This rest frame was then used in our study of the DIBs (for which the rest wavelengths are not known). We note that for all atomic and molecular lines, we report velocities in the heliocentric reference frame.

Measuring temporal variations

For each of the 64 available targets, we compared all observed spectra from archival observations to the EDIBLES data to search for temporal variations. Our starting point is that we consider having found a real change in an interstellar absorption feature if the change is significant and if we can exclude any other logical origin for this change.
A straightforward way to search for such temporal variations is to divide the most recent spectrum by the oldest spectrum of the same target. If the two spectra are identical, the resulting ratio spectrum should essentially be flat everywhere within the uncertainties. Any variation in an interstellar line (atomic, molecular, or DIB), on the other hand, will lead to deviations from a flat line. The most common change would be a change in the column density, which would lead to a deeper or shallower absorption feature in either of the two spectra. All other things being equal, this would show up in the ratio spectrum as an absorption or emission feature with otherwise the same characteristics (wavelength, width, profile shape) as the feature in the original observations. However, artifacts may show up that at first sight suggest temporal variations, but that are really due to systematic errors.

The effect of spectral resolution

Perhaps the most obvious type of artifact originates from the difference in spectral resolving power when comparing EDIBLES spectra to archival spectra. A lower resolution will broaden absorption lines and result in absorption- or emission-like features in the ratio spectrum. To minimize these effects, we used a Gaussian convolution to degrade the higher-resolution spectrum to match the lower spectral resolution. Our convolution kernel has a Full Width at Half Maximum (FWHM) value of FWHM = λ (1/R²_low − 1/R²_high)^(1/2), with R_high the resolving power of the highest-resolution spectrum and R_low that of the lowest one. This convolution effectively degrades the high-resolution spectrum to match the low-resolution one. Using this degraded spectrum greatly reduces the impact of these artifacts in the ratio spectrum.

Other artifacts

We considered several other sources of uncertainty as well, but found them to be less important since they can generally be recognized as artifacts quite readily. Such effects include (i) telluric correction residuals, (ii) small wavelength shifts between the two spectra caused by imperfections in the wavelength calibration, and (iii) variations in the telescope/instrument/flat-fielding response. Furthermore, some targets display complex stellar spectra whose variability can be misidentified as variations in interstellar absorption, including stellar emission lines that may contaminate interstellar absorption bands (see e.g. McEvoy et al. 2015). Any variations of interstellar lines and bands seen towards such targets in wavelength ranges where strong stellar lines exist thus need to be carefully checked for stellar contamination effects. For such targets, we compared their spectra to DIB-free sightlines with similar spectral types and luminosity classes.
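As an illustration of the resolution matching described in Sect. 3.1, the following sketch degrades a spectrum from R_high to R_low with a Gaussian kernel of the FWHM given above. It assumes a uniform wavelength grid and is not the actual EDIBLES pipeline code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_resolution(wave, flux, r_high, r_low):
    """Smooth `flux` (on the uniform grid `wave`, in Angstrom) from
    resolving power r_high down to r_low."""
    lam = wave.mean()                                     # representative wavelength
    fwhm = lam * np.sqrt(1.0 / r_low**2 - 1.0 / r_high**2)
    sigma_aa = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
    step = np.median(np.diff(wave))                       # grid step in Angstrom
    return gaussian_filter1d(flux, sigma_aa / step)

# Example: degrade an R ~ 107000 (UVES red arm) spectrum to R ~ 72000.
wave = np.linspace(5775.0, 5785.0, 544)
flux = 1.0 - 0.036 * np.exp(-0.5 * ((wave - 5780.4) / 0.83) ** 2)
flux_low = degrade_resolution(wave, flux, 107000, 72000)
```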
Searching for temporal variations in the DIBs

Taking into account the above considerations, we took the following steps to conclude whether there is any temporal variation for a specific DIB:

1. For each of our targets, we first selected the earliest available good-quality archival spectrum for our comparison with the latest EDIBLES spectrum. If the earliest available spectrum did not have good wavelength coverage or quality, we took the next available one. This provides us with the longest possible baseline for our search and thus offers the best prospects to detect gradual, monotonic changes. In doing so, we may miss some variations that happen on shorter timescales and that would show up in intermediate observations. Note, however, that we only miss those cases where variations are non-monotonic and in fact cancel each other out, such that the earliest archival spectrum and the EDIBLES observations are indistinguishable.

2. Second, we ensured that the EDIBLES and archival spectra were degraded to the same spectral resolution (see Sect. 3.1).

3. Then, we fitted the DIB in each observation with a simple Gaussian absorption line. The purpose here is not to obtain a good fit (since many DIB profiles are not Gaussian), but to have a simple measure for the central wavelength and the width of the DIB in each of the observations. We also use this fit to obtain an estimate of the equivalent width; this may result in differences between the EWs we list here and surveys where the EW has been obtained by direct integration. We expect that the Gaussian parameters in both observations should be very similar if there is no significant time variation. Significant changes in the EWs could thus be a first indication of time variation. We note, however, that the estimate of the FWHM and EW we obtain this way depends on the data quality, and the lower quality of many archival spectra in fact results in several false positives at this point.

4. Next, we inspect the ratio spectrum. If there are no DIB variations, this ratio spectrum should essentially be flat everywhere. After dividing the two spectra, we rescaled the ratio spectrum to an average value of 1. However, systematic errors in the positioning of the continuum will lead to constant ratio values that are slightly different from unity. We thus first calculate a "baseline" reduced χ² value by assuming that the model for the ratio spectrum is constant. We will denote this initial value by χ²_base. If a constant baseline results in a good fit (χ²_base ≤ 1; assuming our uncertainty estimates are realistic), there can be no significant change in the absorption feature.

5. In case the baseline model does not properly reproduce the ratio spectrum, we consider that the most likely temporal change is an increase or decrease in column density (and hence absorption depth). Consequently, we would expect a feature in the ratio spectrum with similar characteristics (central wavelength and width) as the absorption feature itself. Thus, we next fit a Gaussian to the ratio spectrum as well. However, this time we greatly constrain the Gaussian parameters: while the Gaussian can be in emission or absorption, the central wavelength is restricted to within ±σ/2 of the DIB central wavelength (determined in step 3), and similarly the width should be within 50% of the actual DIB width, also established in step 3.
This latter choice will then also fit residuals that are a bit too broad to be due to DIB variations, but might help us recognize contamination by stellar lines, which tend to be broader than the DIBs included in this study. Once a Gaussian has been fit to the ratio spectrum, we calculate the reduced chi-square with this model as well, and denote it by χ²_Gauss.

6. The final test is to establish whether the Gaussian model is a significant improvement over the baseline model. To do so, we first calculate the Bayesian Information Criterion (BIC; Schwarz 1978), defined as BIC = N ln(χ²/N) + m ln N, with N the number of data points included in the fit and m the number of parameters in the corresponding model. This criterion is similar to evaluating a χ² statistic, but offers a slightly different measure of the quality of the fit, and includes a "penalty" for adding parameters. Indeed, one would expect the χ² to decrease when adding parameters to a model, and the BIC evaluates whether the χ² changes enough to warrant adding these new parameters. Using the BIC to test whether the additional parameters significantly improve the fit is thus comparable to an F-test for additional parameters. When comparing models, we should then stick to the model with the lowest BIC statistic. However, a small change in the BIC value is not necessarily a significant change, and one needs to consider scales of evidence (Efron et al. 2001). There exists some level of subjectivity when it comes to deciding what sort of changes should be considered interesting. For this paper, we will deem a change significant if the difference in BIC is larger than 3. Thus, we will conclude that the change to the absorption band is significant only if the Gaussian fit to the ratio spectrum has a BIC that is at least three lower than that of the baseline model.
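A sketch of steps 4-6 is given below. It follows our reading of the text (a constant baseline versus a constrained Gaussian, BIC = N ln(χ²/N) + m ln N, and ΔBIC > 3 as the significance threshold); the parameter bounds and function names are illustrative, not the authors' actual code.

```python
import numpy as np
from scipy.optimize import curve_fit

def bic(chi2, n, m):
    # BIC = N ln(chi^2 / N) + m ln N, as defined in step 6.
    return n * np.log(chi2 / n) + m * np.log(n)

def gauss_model(x, c, amp, mu, sigma):
    return c + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def significant_variation(wave, ratio, err, mu0, sigma0):
    """Return True if the constrained Gaussian beats the constant
    baseline by more than 3 BIC units (steps 4-6 in sketch form)."""
    const = np.average(ratio, weights=1.0 / err**2)   # baseline model
    chi2_base = np.sum(((ratio - const) / err) ** 2)
    # Gaussian constrained to the DIB: centre within +-sigma0/2 of mu0,
    # width within 50% of sigma0; it may be in emission or absorption.
    bounds = ([0.9, -0.1, mu0 - sigma0 / 2, 0.5 * sigma0],
              [1.1, 0.1, mu0 + sigma0 / 2, 1.5 * sigma0])
    popt, _ = curve_fit(gauss_model, wave, ratio, sigma=err,
                        p0=[1.0, -0.003, mu0, sigma0], bounds=bounds)
    chi2_g = np.sum(((ratio - gauss_model(wave, *popt)) / err) ** 2)
    return bic(chi2_base, wave.size, 1) - bic(chi2_g, wave.size, 4) > 3
```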
Figure 1 shows an example of such a DIB comparison in practice, using the λ5780 DIB towards HD 23180. In this particular case, the comparison is fairly straightforward since both epochs are EDIBLES observations. Both observations (panels a, b, and c) look very similar, and indeed the Gaussian fit parameters are very close to one another. Both Gaussians have the same central wavelength of 5780.42 Å, and absorption depths of 3.6% and 3.8% of the continuum, with widths of 0.83 Å and 0.82 Å for the oldest versus the most recent observation, respectively. This then corresponds to equivalent widths of 75.9 ± 2.3 and 79.1 ± 2.4 mÅ, respectively. We note that the 1-σ confidence intervals for both equivalent width measurements overlap, and thus the equivalent width measurements are consistent with a constant value within the uncertainties.

Next, we look at the ratio spectrum (panel d in Fig. 1). The root-mean-square value of this ratio spectrum measured at the edges is ∼0.002 (indicated by the red dashed lines). We then compare this ratio spectrum to a baseline model with a constant value for this ratio. We find the best-fit constant to be slightly less than unity, with χ² = 860.2 for N = 544 data points, and thus a reduced χ²_base = 1.58 and BIC_base = 253.9. It is clear from Fig. 1 that the ratio spectrum shows an apparent absorption feature, and we could fit this with a Gaussian with an absorption depth of 0.3% and a width of 0.46 Å, centered at 5780 Å. These parameters are within the constraints imposed in step 5, and the resulting χ² = 757.1 with now four parameters, so the reduced χ²_Gauss = 1.39 and BIC_Gauss = 203. Since the BIC value for the Gaussian model is 50.9 units lower than for the straight line, our analysis concludes that there is a significant change in the λ5780 DIB.

[Fig. 1 caption (fragment): ... and second (solid blue line) epochs. The profiles are very similar but not 100% identical. (d) The ratio spectrum; the dashed red lines show the 1-σ band around unity, and the green solid line shows the best-fit Gaussian to the ratio spectrum. (e) Same as (c), but with the 2014 archival spectrum shifted by 0.04 Å. (f) The ratio spectrum of the two spectra in (e); the apparent Gaussian feature from (d) has become much broader and is shifted to the blue, but has not disappeared.]

We further investigated promising cases like this in more detail. We note that there are a few more cases where the 5780 DIB exhibits similar subtle but (according to our method) significant profile variations (HD 37367 and HD 185859). We considered whether these could be due to chance superpositions of telluric lines in the two spectra, but as the transmission spectra in Fig. 1 show, these telluric lines cannot be the origin of the residual absorption features. We also tested the role of wavelength calibration uncertainties. These can be as large as 0.04 Å in this wavelength range. To test the effect of such errors, we shifted the archival spectrum by just 0.04 Å and inspected the results. This shift makes the apparent Gaussian absorption feature all but disappear from the ratio spectrum (see panels e and f in Fig. 1), and instead a more gradual change in the profile becomes apparent. This residual feature too can be fitted with a Gaussian profile as shown, but this time the absorption depth is shallower at 0.2% and the width is broader at 0.67 Å - but still well within the constraints imposed in step 5. Once more we compare this to the baseline fit, and find from the BIC that the model with the Gaussian is a significantly better fit than the baseline. So even when we allow for wavelength calibration errors, we are left with a significant time variation in our data.

We thus have to conclude that in these cases, there are subtle but clear variations in the spectra of the DIBs. For many of the investigated DIBs, such subtle changes have not been found or were proven to be due to artifacts. Only for a small set of observations do changes as illustrated in Fig. 1 actually show up.

Searching for temporal variations in interstellar atomic and molecular lines

The absorption lines of interstellar atoms and molecules offer an opportunity to approach the search for time variations in a different way, by modeling each absorption line in more detail. Given the narrow nature of these lines, this only makes sense if both the archival and EDIBLES observations are high-resolution, high-S/N UVES observations, and this is the case for 16 of our targets (see the table of targets described in Appendix A).
Searching for temporal variations in interstellar atomic and molecular lines

The absorption lines of interstellar atoms and molecules offer an opportunity to approach the search for time variations in a different way, by modeling each absorption line in more detail. Given the narrow nature of these lines, this only makes sense if both the archival and EDIBLES observations are high-resolution and high-S/N UVES observations, and this is the case for 16 of our targets (see the table of targets described in Appendix A).

For all lines, we first determined the local continuum by fitting a cubic spline to featureless regions of the spectra on either side of the absorption feature in question. The spectra were then normalized to this continuum. Next, we fitted a set of Voigt profiles to the absorption lines. The number of components for each of the sightlines was determined by eye and can be different for different species. For lines originating from the same species (e.g., Na I), we fixed the radial velocities and b-values. We also constrained the b-values to be the same between epochs. To do so, we first determined the b-value of each component from the spectrum with the highest resolving power and then modeled the lower-resolution epoch with the same b-values. Table C.1 lists the best-fit parameters for all lines.

For our DIB measurements, we fit the continuum (using a third-order Chebyshev function) and the DIB (using a Gaussian) at the same time. Uncertainties in the continuum can affect the Gaussian parameters and are likely the primary source of uncertainty in the derived parameters. To get an estimate of the magnitude of this effect, we also fitted the DIBs using a straight-line continuum and compared the DIB parameters. We used the difference between the two fits as an estimate of the systematic uncertainty, and added this to the statistical uncertainty on the parameters as returned by lmfit (Levenberg 1944; Marquardt 1963), derived from propagating the statistical uncertainties (noise) on the measurements.

For our atomic and molecular lines too, we expect that the most likely change over time is an increase or decrease in the column density of a species, and this time we have actual measurements of the column densities that include proper uncertainties. We inspected both the spectra (and profile fits) and plots of the column densities of each species over time to evaluate any changes. When comparing the line profiles at different epochs, many spectra show small or subtle changes in their line profiles. While we cannot exclude that some of these may be due to actual changes in the line profiles, in most cases the uncertainties on the column densities suggest that these variations are most likely due to differences in the resolution of the observations or to continuum positioning uncertainties. There are some interesting exceptions though, and those are discussed in Sect. 4. Once more, we stress that all velocities in this study are reported in the heliocentric reference frame.
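A minimal sketch of such a multi-component Voigt fit with lmfit is given below. It is illustrative only: the parameter names, starting values, and the joint two-epoch setup are our own choices, and the atomic constants (oscillator strength f and damping constant γ) must be supplied per transition.

```python
import numpy as np
from scipy.special import wofz
from lmfit import Parameters, minimize

C_KMS = 2.99792458e5  # speed of light [km/s]

def voigt_tau(wave, lam0, f, gamma, logN, b, v):
    """Optical depth of one Voigt component: rest wavelength lam0 [Å],
    oscillator strength f, damping constant gamma [1/s], column density
    10**logN [cm^-2], Doppler parameter b [km/s], radial velocity v [km/s]."""
    lam_c = lam0 * (1.0 + v / C_KMS)        # shifted line centre [Å]
    dlam_d = lam_c * b / C_KMS              # Doppler width [Å]
    a = gamma * lam_c**2 * 1e-8 / (4.0 * np.pi * (C_KMS * 1e5) * dlam_d)
    x = (wave - lam_c) / dlam_d
    tau0 = 1.497e-15 * 10.0**logN * f * lam_c / b   # central optical depth
    return tau0 * wofz(x + 1j * a).real

def residual(params, spectra, lam0, f, gamma, ncomp):
    """spectra: dict mapping epoch label -> (wave, flux, err); b and v are
    shared between epochs, only the column densities differ."""
    vals = params.valuesdict()
    out = []
    for epoch, (wave, flux, err) in spectra.items():
        tau = np.zeros_like(wave)
        for k in range(ncomp):
            tau += voigt_tau(wave, lam0, f, gamma,
                             vals[f"logN{k}_{epoch}"],
                             vals[f"b{k}"], vals[f"v{k}"])
        out.append((np.exp(-tau) - flux) / err)
    return np.concatenate(out)

params = Parameters()
for k in range(2):                                   # two cloud components
    params.add(f"v{k}", value=(-7.5, 5.6)[k], min=-60, max=60)
    params.add(f"b{k}", value=1.5, min=0.3, max=10.0)
    for epoch in ("arch", "edib"):
        params.add(f"logN{k}_{epoch}", value=11.0, min=8.0, max=15.0)
# result = minimize(residual, params, args=(spectra, lam0, f, gamma, 2))
```

Fitting both epochs simultaneously with shared b and v enforces the constraint described above, so that only the column densities are free to vary between epochs.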
The diffuse interstellar bands

For all 64 sightlines, we first searched for possible variations in the DIB properties, following the method outlined in Sect. 3.3. A comprehensive overview of our results is presented in the table described in Appendix B, and we also created corresponding plots. Our method results in a large number of cases where the ratio spectrum is significantly better reproduced by a constrained Gaussian than by a baseline model. However, this only indicates that there are differences between the spectra at the two epochs; it does not give any information about the source of these differences. To further assess these cases, we inspected the plots in detail. Often, a comparison of the same DIB in different sightlines can offer clues as to the origin of these changes.

By and large, the most frequent occurrence is shallow, broad features in the ratio spectra, much broader than the width of the DIBs themselves. A visual inspection does not suggest that the DIBs have changed between the two epochs. These features are then most likely induced by continuum positioning errors, and we have marked such cases with a c in Table B.1. Good examples are, for instance, the λ5513 DIB in HD 23180 and the λ4763 DIB in HD 183143. In other cases, we recognize a blend of a stellar line with the DIB, and the target is a spectroscopic binary. In those cases, variations in the line profile are the consequence of the stellar line shifting in wavelength between the two epochs. We have indicated those cases with a ⋆. A good example of this effect is the λ6440 DIB in HD 147683. There are also a few cases where we believe the changes are due to differences in telluric contamination between the epochs (indicated with ⊕; for example, the λ5419 DIB in HD 41117), and finally, we also identified several instrumental artifacts. In particular, there were cases where we first noticed a significant variation between the two epochs, but after shifting the archival spectra by a fraction of a resolution element, the ratio spectra were flat. This is thus the consequence of inaccuracies in the wavelength calibration. A clear example is the λ6379 DIB toward HD 43384, which shows a clear feature in the ratio spectrum. However, this feature completely disappears if we apply a small wavelength shift to either of the two spectra. Another common artifact is a sudden jump in one of the spectra. We denoted these artifacts with an a.

In the end, we find only two DIBs, the λλ4727 and 5780 DIBs, that may show changes in their profiles between the two epochs that we cannot immediately explain by any of these causes.

The λ4727 DIB

We possibly see changes in the profile of this DIB for three targets, but it is most significant in HD 168076, where we see variations on the order of 0.5–2% of the absorption depth between the three 2009 archival observations and the EDIBLES 2018 observations (see Fig. 2).
The ratio spectrum shows a broad feature, of approximately the same width as the actual DIB, and the continuum levels on either side of the feature match up quite well. Two more objects (HD 24398 and HD 36861) show similar changes, but they are less pronounced or less significant (ΔBIC of 39 and 95, respectively). This residual feature in the ratio spectrum is too broad to be telluric in nature, and we can rule out a stellar origin for these differences as well, since stellar lines are very weak in this range. Given these variations, we searched for more archival data and found a 1997 spectrum from HIRES. Surprisingly, the 1997 spectrum is more similar to the EDIBLES spectrum than the 2009 spectra, with perhaps the exception of the weak secondary absorption feature just redward of 4728 Å. We can rule out that these variations are due to, for instance, flat-field issues in the EDIBLES spectra, since we also have several observations of the same DIB toward, for example, HD 170740, and for that target we see no variations at all. Since all the 2009 observations are FEROS observations, this may point to an instrumental artifact in FEROS. It could also perhaps be an extreme case of continuum placement errors, but it is hard to tell. Thus, even though the λ4727 DIB toward HD 168076 provides at first sight perhaps the most promising case for observing actual variations in the DIB strength, even in this case it is hard to be confident that these are true changes in the sightline. Moreover, the magnitude of these variations is hard to understand in the context of physical changes in the sightline (see Discussion).

In addition, we noticed that this DIB has an interesting, double-peaked profile that is clearly visible in many sightlines. This has been noticed before; Galazutdinov et al. (2001) assigned the second profile of the λ4727 structure to C II stellar line contamination. Słyk et al. (2006) compared the DIB profile to synthetic stellar spectra containing lines of Ar II at 4726.85 Å, C II at 4727.41 Å, and Si III at 4730.52 Å. Based on the depth of the absorption in the stellar model, they concluded that stellar contamination causes some minor broadening and increases the depth of the feature but does not affect the DIB peak positions. Consequently, they argued for an interstellar origin for the second absorption. Elyajouri et al. (2018) similarly noticed the second absorption dip in the profile and pointed out that the
shape of this second peak is independent of the stellar rotation rate. Instead, they found that its profile shape is perfectly correlated with the main DIB absorption. They therefore also concluded that this absorption feature is indeed interstellar in nature and, given the correlation with the main absorption band, part of one and the same DIB.

We compared the line profile of this entire DIB (including the two absorption peaks) in more than 100 sightlines in the current work, and we find that the central wavelengths of the two peaks are constant in the interstellar reference frame, so that neither of the peaks can be a stellar feature. It is perhaps not clear whether these should be considered as two separate DIBs, but given that the ratio of the two features does not change much, we suspect that this is in fact one DIB with a rather large peak separation.

The λ5780 DIB

As shown in Fig. 1 and discussed in Sect. 3.3, we find a small but significant residual in the ratio spectrum for the λ5780 DIB toward HD 23180. Two more sightlines show a slight variation in the profile of the λ5780 DIB. We performed several tests in the wavelength range covering the λ5780 DIB to rule out instrumental artifacts, wavelength calibration issues (considering wavelength shifts of up to 0.04 Å), a telluric origin, and stellar contamination, but such effects cannot explain the observed discrepancies. Figure 3 shows the spectra and the corresponding ratio spectra for these sightlines. For HD 23180, HD 185859, and HD 37367, the variations occur at roughly the same wavelength, just redward of the central peak (see the corresponding online figures in Appendix B). For all three targets, the changes represent a tiny increase in strength over time.

The very subtle variations we see in Fig. 3 are difficult to understand. They do not really appear to represent a change in the column density of the DIB carrier; they are too narrow for that. At the same time, there is no clear explanation for these residuals in terms of instrumental artifacts either.

Molecular and atomic lines

From a comparison to the literature results, it is clear that finding time variations is difficult even for narrow and strong atomic lines. Indeed, our data set includes four targets for which such variations have been reported before, and we first verified whether we could reproduce some of these variations using our data set.

We have five targets in common with McEvoy et al. (2015), and only one of those targets was reported to show variations: the Na I D lines toward HD 113904 have a component at −60.5 km s⁻¹ (LSR) that showed an increase of 10% in intensity between 2002 and 2008 (corresponding to variations in the column density from log N = 11.52 ± 0.01 cm⁻² to log N = 11.56 ± 0.01 cm⁻²). Our measurements of this component in archival UVES 2001 spectra yield log N = 11.50 ± 0.01 cm⁻², in good agreement with their 2002 value; in the 2015 EDIBLES spectrum, we find log N = 11.47 ± 0.01 cm⁻², which is lower than their 2008 value. Perhaps this indicates a further change in this sightline. Galazutdinov et al.
(2013) studied spectra in the direction of HD 73882 and found significant variations in the EWs of the Ca I line at 4227 Å and the Fe I line at 3860 Å, and this over a period of only 6 yr (2006–2012). For the Fe I line component at 18–20 km s⁻¹, they reported changes in the EW from 1.68 ± 0.15 mÅ to 3.89 ± 0.56 mÅ, corresponding to changes in the column density from log N = 11.77 ± 0.04 to log N = 12.13 ± 0.06. We compared our 2017 EDIBLES spectrum of this source to the 2014 observations for all atomic lines listed in Table 2. The region around the Ca I line is rather noisy and does not result in good fits. We measured the stronger Fe I line at 3719.9 Å and found that the column density (and thus the equivalent width) at best marginally decreases over these 3 yr, from log N = 11.66 ± 0.1 cm⁻² to log N = 11.54 ± 0.04 cm⁻² at heliocentric v ∼ 18 km s⁻¹ (see Table C.1), i.e., a variation opposite to the one reported in the previous study. We note that the column density derived from the EDIBLES data matches their 2006 measurement better than their 2012 measurement. With our data set and approach, we can thus neither reliably confirm these variations nor exclude them. For the other species we measured in this sightline, we find that the column densities are typically the same to within uncertainties as those listed by Galazutdinov et al. (2013).

A third source for which time variation has been reported is HD 36486 (δ Ori A), for which Price et al. (2000) studied ultra-high-resolution observations. They found that an interstellar component at heliocentric v = 21.3 km s⁻¹ shows variation in the Na I D1 lines. They suggest that the component consistently increased in strength from 1966 to 1994 and after that declined in intensity by 1999. At the same time, the corresponding lines of Ca II remained roughly constant. The spectra used in the Price study had an ultra-high resolution, and the lower resolution of our EDIBLES observations prohibits resolving the v = 21.3 km s⁻¹ component. Nevertheless, we modeled the Na I D1 line in the 2014 EDIBLES spectrum. Given our lower resolution, components six and seven from Price et al. (2000) only show up as a single component at 22.15 km s⁻¹; we found log N = 11.2 ± 1.9 cm⁻² for this component, which compares well to the total column density of log N = 11.13 cm⁻² reported by Price et al. (2000). Within the (rather large) uncertainties, our results are thus consistent with no further change in these components. We also fitted the Ca II K lines in the 2014 EDIBLES spectrum at heliocentric velocities of 15.7, 21.3, and 23.5 km s⁻¹, yielding column densities of log N = 11.1 ± 0.7, 11.0 ± 1.2, and 10.9 ± 0.2 cm⁻², respectively. These values compare well to the 1994 values of 10.6 ± 0.1, 11.1 ± 0.1, and 10.6 ± 0.2 from Price et al. (2000), thus once more providing evidence for no substantial change in this component.
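For context, on the linear part of the curve of growth the quoted EWs and column densities are related by N [cm⁻²] = 1.13 × 10²⁰ W_λ[Å] / (f λ²[Å]). The quick sketch below illustrates this; the Fe I oscillator strength used here is our own assumption, chosen because it makes the conversion consistent with the values quoted above.

```python
import numpy as np

def log_n_linear(ew_mA, f, lam_A):
    """log10 column density [cm^-2] from an equivalent width on the linear
    part of the curve of growth: N = 1.13e20 * W[Å] / (f * lambda[Å]^2)."""
    return np.log10(1.13e20 * ew_mA * 1e-3 / (f * lam_A**2))

# Fe I 3859.9 Å with an assumed f ~ 0.022 reproduces the quoted values:
print(log_n_linear(1.68, 0.022, 3859.9))  # ~11.76 (reported: 11.77)
print(log_n_linear(3.89, 0.022, 3859.9))  # ~12.13 (reported: 12.13)
```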
Finally, Crawford et al. (2000) reported clear evidence for variation in the profiles of the Na I and K I lines toward HD 81188 between 1996 and 2000: an increase in the column density of 16% and 40%, respectively. We compared the EDIBLES spectrum (recorded in 2017) with the available archival data from FEROS (observed in 2019), but unfortunately, the archival spectrum is of low quality, and the comparison did not yield any useful results. However, the EDIBLES spectrum for this target is of good quality, albeit with significant telluric contamination in the wavelength range of the Na I D line, and we can thus compare the EWs in our spectra to the earlier results. We first used MOLECFIT (Smette et al. 2015; Kausch et al. 2015) to perform the telluric correction. This, however, leaves large residuals, leading to significant uncertainties in our measurements. We determined the EWs of the K I and Na I lines by integration. Crawford et al. (2000) report an EW of 4.4 ± 0.4 mÅ for the K I line; our measurements yield EW = 3.6 ± 0.1 mÅ, so perhaps a slight decrease. For the Na I line, they report 58.9 ± 2.4 mÅ, whereas we find 71 ± 8 mÅ, so perhaps a slight increase. Given the uncertainties on the measurements, however, this is at best a marginal change.

This exercise emphasizes the importance of good-quality archival spectra, which are unfortunately not yet available for all targets, and thus highlights the value of the EDIBLES data set to be used as archival observations in future studies.

The results of our detailed Voigt modeling of the atomic and molecular lines are listed in the online table described in Appendix C, and the corresponding best-fit models are shown together with the observations in its online figures. For each interstellar cloud component, we searched for variations in the column density that are significant compared to the measurement uncertainty. We found only one indisputable case of a very significant change. Indeed, toward HD 167264, a new Ca I component at v = 5.6 km s⁻¹ shows up very clearly in the 2016 EDIBLES spectrum, whereas it was absent in the 2001 archival spectrum. A closer inspection reveals perhaps a small absorption dip in the 2001 spectrum. We searched for additional archival spectra, and they reveal intermediate absorption depths, thus strengthening the case that this is indeed a real temporal variation (see Fig. 4). In addition to this new component, we noticed that for the main component in this target (at v = −7.5 km s⁻¹), the neutral species (and molecules) all show a marginal decrease in their column densities (a 10–30 percent change for the neutral atoms; a five to ten percent change for the molecules, but note the large uncertainties on the column densities), while the ionized species Ca II and Ti II show a marginal increase (∼12% change; see Fig. 5).
The Na I D lines are very saturated, and thus we also inspected the Na I UV doublet, revealing first of all very significant changes in the line depths for the main component; on closer inspection, these lines also show the newly appearing component at v = 5.6 km s⁻¹, at the same strength in both the archival and EDIBLES spectra. So whereas this component greatly increases in strength in Ca I, it has not changed noticeably in the Na I lines. The K I, CN, and C₂ lines are not covered in our archival observations. With the exception of the new Ca I component and the main lines in the Na I UV doublet, the changes in the column densities that we detect are not very significant by themselves (given the uncertainties). However, the systematic nature of these changes may indicate that this too is a real effect representing a gradual change in the sightline conditions. We find a similar systematic set of changes for HD 147933 (second row in Fig. 5). We will discuss this further below.

Discussion

To the best of our knowledge, the current study, using a rigorous approach to search for time variations in interstellar lines and DIBs, is the most comprehensive such effort done to date, and given the superb EDIBLES data quality, also potentially the most sensitive survey of its kind. The 31 DIBs we have selected offer the best prospects for detecting temporal changes given their strength and narrow widths, and we have also included the spectral lines of nine different atomic or small molecular species to trace possible changes in the physical conditions in the sightlines.

The main finding from our study is that there is essentially almost no noticeable time variation in the interstellar features for the time frames studied here, i.e., typical periods of the order of 9 yr. Only two sightlines show small but systematic variations in
their atomic lines, with one of them, HD 167264, showing unambiguously that a new interstellar line appears and increases in strength over time. Two DIBs possibly exhibit small variations across a few targets. While these detections show that we can detect even small variations, they also point to various systematic uncertainties as the key reason for the low number of DIBs, atomic or molecular lines, and sightlines for which we actually do find significant, conclusive physical variations. The main confounding systematics are: too low a spectral resolution (especially in the archival spectra) to separate different components in interstellar atomic lines in some cases; small uncertainties in the wavelength calibration and/or continuum placement and/or flat fielding for the DIBs; and, in just a few cases, severe contamination by telluric lines for both DIBs and atomic and molecular lines.

In spite of concerns about systematic uncertainties, we must conclude that, in general, interstellar sightlines do not change noticeably on the time scales studied here, which for some targets are as long as 22 yr. For all our targets, proper motions have been measured, and thus we can determine the physical distance (transverse to our sightline) that our target stars have moved behind the intervening clouds over the time scales between the epochs we consider here. Those distances (expressed in au) are listed in the online table of Appendix A and are typically several tens of au; the largest value is 294 au for HD 183143. If the interstellar clouds in the sightline are physically close to the target stars, the lack of detectable variations then implies that the physical conditions and chemical composition of most interstellar clouds do not change much over distance scales of several tens of au; if the clouds are much closer to us, this of course corresponds to smaller distances. However, there are some interesting exceptions in our data set, pointing to tiny-scale structures in diffuse interstellar material.

Variations in HD 167264

We see very clear variations in the Ca I line toward HD 167264. In particular, a new component shows up at v = 5.6 km s⁻¹ in our most recent spectra that was absent, or at best very weakly present, in archival spectra. Interestingly, McEvoy et al. (2015) investigated this sightline and reported no variability for this target for Na I, Ca II, and Ca I. We note that their study covered observations up to 2008. We checked all available spectra in the archive for different instruments for this sightline and found useful observations from 2001, 2005, 2008, 2012, and 2018 with FEROS, UVES, and ESPRESSO. These convincingly show that the Ca I component is getting stronger after 2008, with the feature very clearly visible from about 2012 (see Fig. 4).
Given the presence of this feature in the later observations, one may also consider that the absorption line is very weakly present in the 2001 and 2005 observations as well, but its depth is at the noise level. Our Voigt model fits clearly show how the column density increased by at least a factor of five over the past 17 yr (see the lower panel in Fig. 4). The very clear variations we see in the Ca I line suggest that the sightline towards this object recently started crossing a region of enhanced Ca I, with the most significant changes happening shortly after 2008. Interestingly, the Na I D lines also exhibit a component at this velocity that is already clearly present in the 2001 observations. Since these lines are saturated, we also included the Na I lines in the UV (at 3300 Å). There too, an interstellar component is clearly (though weakly) present in both archival and EDIBLES observations, and it does not display any variation between the two epochs.

It is interesting to note that the main component (at v = −7.5 km s⁻¹) of the Ca I line also exhibits much smaller, but systematic, changes in the line strength: the column density appears to decrease by 0.08 dex over a period of 15 yr (see Fig. 5). The uncertainties on the column densities are derived from Monte Carlo simulations and appear perhaps small compared to the epoch-to-epoch variations of ∼0.1 dex that we see especially in the third component (at v = 29.1 km s⁻¹); however, the main component (at v = −7.5 km s⁻¹) shows a systematic decrease over time in the column density that we do not see in this latter component. At the same time, it is interesting to look at the column densities of other species in this main v = −7.5 km s⁻¹ component (top panel of Fig. 5). All the neutral atoms and key molecular species show small but systematic decreases in their column densities, while the ionized species show small increases. The Na I UV lines are perhaps the most extreme and show clearly that the column density decreases very significantly, by 0.25 dex! This then suggests that the changes in the sightline probed by the main component reflect a change towards a more exposed part of the cloud. A similar change is also apparent for HD 147933 (discussed below).

HD 167264 (15 Sgr) is located at a distance of 1140 pc (with uncertainties on the parallax allowing a range of 958–1408 pc; Gaia Collaboration 2020) and is moving with a proper motion of 2.2 mas yr⁻¹ (Gaia Collaboration 2020), which corresponds to a transverse velocity of 2.5 au yr⁻¹ (or thus 11.8 km s⁻¹) at that distance. Given how clearly the feature shows up in our 2012 spectra, most of the change must have happened in only about 4 yr, corresponding to a transverse distance of only 10 au. If the region probed by these variations is located close to the background star, we are thus probing clear variations at the scale of ∼10 au; if this region were much closer to us, we would be probing even smaller scales.
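The transverse scales quoted here follow from a one-line conversion, since a proper motion of 1 mas yr⁻¹ at a distance of 1 pc corresponds to 10⁻³ au yr⁻¹ transverse to the sightline:

```python
def transverse_au(pm_mas_per_yr, distance_pc, years):
    """Transverse distance [au] swept out behind a foreground cloud."""
    return pm_mas_per_yr * 1e-3 * distance_pc * years

# HD 167264: 2.2 mas/yr at 1140 pc
print(transverse_au(2.2, 1140, 1))   # ~2.5 au per year
print(transverse_au(2.2, 1140, 4))   # ~10 au over ~4 yr
```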
Clearly, the region probed by the variations in the Ca I line represents the smallest scales, and it is interesting to compare this to some of the DIBs. In particular, Ehrenfreund et al. (1997) compared the DIBs toward BD+63°1964 with those toward HD 183143. While both targets represent comparable overall reddening, the DIB spectra are rather different. The sightline toward BD+63°1964 was found to represent a more neutral environment, with a Ca I/Ca II ratio that is higher toward BD+63°1964 than toward HD 183143. At the same time, most of the narrow DIBs are much stronger toward BD+63°1964 than toward HD 183143 and thus represent DIB carriers that reside preferentially in the more neutral parts of interstellar clouds. The most prominent differences in EW between those two objects were measured for the λλ5797, 5849, 6379, and 6614 DIBs. We had a closer look at these DIBs and compared the archival spectra to the EDIBLES spectra to see if there are any notable changes. While there are some small differences between the spectra for some DIBs, we believe those originate mostly from contamination, and thus we do not detect any changes in the DIBs whose carriers are likely to reside in the environment that produces the Ca I absorption. This is somewhat surprising: the column density of the new Ca I component is about 40% of that of the main component (see the online table in Appendix C), so if the DIB EWs were to scale directly with the Ca I column density, this should result in a significant change. Thus, a more neutral environment alone is not enough to activate these DIB carriers.

Variations in HD 147933

The bottom panel in Fig. 5 shows the column densities derived for the interstellar sightline toward HD 147933. While none of the variations are significant given the uncertainties on the measurements, we once more notice a systematic trend: all the neutral and molecular species show a slight decrease in column density, while the Ca II line shows an increase. Here too, we may be witnessing a subtle change in the sightline properties.

It is interesting that variability on small spatial scales has already been established for the environment of this object. Indeed, HD 147933 is also known as ρ Oph A and forms a double star system with HD 147934 (ρ Oph B), whose separation on the sky corresponds to 344 au (Cordiner et al. 2013). The interstellar cloud towards HD 147933 is characterized by kinetic temperatures of several tens of Kelvin in the molecular gas sampled by H₂ (46 K) and a molecular hydrogen fraction of 0.1 (Savage et al. 1977). Cordiner et al. (2006) used high signal-to-noise observations toward HD 147933 and HD 147934 and found minor differences in their CN/CH ratio, a small density contrast in the molecular gas toward the dark cloud, and small variations in some of the DIBs. The derived densities toward components A and B are 625 and 450 cm⁻³, respectively (Cordiner et al. 2013), and for component A, we also know that the (total) hydrogen column density is log N = 21.68 cm⁻² (Jenkins 2009). This implies that for component A, we are probing a cloud with a length scale of ∼2.5 pc. Thus, the small-scale variation probed between components A and B is only a tiny fraction of the overall length scale of the cloud, and the possible proper-motion variations we probe here are even smaller (of the order of 65 au; see Table A.1).
Variations in the λλ4727 and 5780 DIBs

The aim of this paper was to search for variations in the DIBs, and the only two DIBs for which we possibly see such variations are the λ4727 and λ5780 DIBs; even then, we have to be cautious, as we cannot rule out further systematics that could affect our results. Especially for the λ4727 DIB, instrumental issues appear to be the main reason for the observed variations. For the λ5780 DIB, the changes do not correspond to an increase in column density; if these changes are real, they would imply variations in the shape of the profile, which could then be a subtle expression of a change in the physical conditions in the sightline. The changes appear to be subtle, and a good characterization of them may require a better quantitative description of the profiles of these DIBs.

As we noted before, the lack of variations for most of our targets indicates that interstellar environments on average do not change much on distance scales of several tens of au. The two cases noted above, on the other hand, correspond to sightlines where changes happen in only a few years, and thus over a transverse distance of only ∼10 au for HD 167264 and 65 au for HD 147933. The scales on which we see variations here correspond well with those traced by H I as observed towards quasars, where clear variations are observed on scales as small as 10–25 au (Brogan et al. 2005; Lazio et al. 2009; see also Stanimirović & Zweibel 2018). The lack of variations in the DIB properties then suggests that the DIB carriers do not reside in these tiny-scale structures.

Summary and conclusions

In this study, we compared the high-quality EDIBLES spectra of 64 sightlines to available archival observations to search for time variation in the profiles of 31 narrow and strong DIBs, and additionally in nine atomic and molecular lines. We considered false positives due to various systematic uncertainties and used a mathematical formalism with a robust Bayesian approach to establish potentially significant physical variations.

For the 31 DIBs we considered, we primarily searched for changes in the equivalent width with corresponding changes in the DIB profiles. We found that only two DIBs, those at λ4727 and λ5780, show possibly significant variations in only a few sightlines, but even in these cases, some caution is needed, since the changes are small and there may still be systematic effects that we underestimate or that we have not accounted for. For the λ4727 DIB, we see profile changes in three different sightlines. The most significant variation (a change of 0.5–2% of the absorption depth between the two epochs) occurs for HD 168076, while HD 24398 and HD 36861 show less pronounced variations. Furthermore, we have several observations of HD 168076, and at least three of those show considerable deviations from the EDIBLES reference spectrum. We also confirm in our data set that this DIB has a double-peaked profile that appears to be intrinsic. For the λ5780 DIB, we found a small but significant residual in the ratio spectrum toward HD 23180, HD 185859, and HD 37367. The profile variation for these targets occurs at roughly the same wavelength and shows increasing strength over time.
We fitted the atomic and molecular lines with a set of Voigt profiles, which allows for studying variations in the different line parameters more quantitatively. We found one incontrovertible case of a very significant change: toward HD 167264, a new Ca I component shows up very prominently at v = 5.6 km s⁻¹ in the 2016 EDIBLES spectrum, whereas this component was absent in the 2001 archival spectrum and previous research indicated no variability until 2008. Additional archival spectra reveal that this component does indeed increase in strength over time. In addition, we noticed that for the main cloud component toward this target, the neutral species all show a marginal decrease in their column densities; in contrast, the ionized species show a marginal increase. These sightline changes are most likely induced by the proper motion of the background target and imply variations at scales of 10 au or smaller. Finally, we see a similar set of marginal but systematic variations of the atomic lines toward HD 147933, a target for which small-scale structure variations (at the ∼344 au scale) have already been established.

The fact that we can detect some variations in both DIBs and atomic and molecular lines shows that high-quality data like those in the EDIBLES data set have the potential to reveal these subtle changes. The archival baseline we present with this elaborate data set allows us to trace subtle changes in the environmental cloud parameters and their implications for the life cycle of atomic and molecular interstellar species. Future work that can use the EDIBLES data as the archival reference will undoubtedly find more such variations, crucial to understanding the evolution of interstellar species.

Fig. 1. Variation in the λ5780 DIB toward HD 23180. (a) EDIBLES "archival" observation on 2014-10-29 (black), the best-fit Gaussian model (solid red line), and the atmospheric transmission spectrum (brown) showing weak telluric lines. (b) Same as panel (a) but for the second epoch on 2017-08-27. (c) Comparison of the first (solid red line) and second (solid blue line) epochs. We note that the profiles are very similar but not 100% identical. (d) The ratio spectrum. The dashed red lines show the 1-σ band around unity. The green solid line shows the best-fit Gaussian to the ratio spectrum. (e) Same as (c), but now the 2014 archival spectrum has been shifted by 0.04 Å. (f) The ratio spectrum of the two spectra in (e). We note that the apparent Gaussian feature from (d) has become much broader and is shifted to the blue, but has not disappeared.

Fig. 2. Variation in the λ4727 DIB spectra towards HD 168076 used in this study. The EDIBLES spectrum is plotted in blue, and the earliest archival spectrum in red; the intermediate observations are in green. The spectra are smoothed using a Gaussian convolution to minimize noise and better bring out the variations. The lower panel shows the ratio spectra and the fitted Gaussian model in each case. The top three ratio spectra (dividing the intermediate spectra by the EDIBLES spectrum) indicate significant variation, while the bottom ratio spectrum shows insignificant change when comparing the oldest archival observation to the EDIBLES observations.

Fig. 3. The λ5780 profile toward the HD 23180, HD 185859, and HD 37367 sightlines. The upper panel compares two epochs, with red for the first observation and blue for the latest one. The lower panel shows the ratio spectra in the same order as the upper panel. The green line shows the Gaussian model fitted to the ratio spectrum.
Fig. 4. Archival and EDIBLES spectra in the wavelength range of the Na I lines (top two panels) and the Ca I line (third panel) toward HD 167264. The fourth panel shows the residuals of the archival spectra after subtracting the 2016 EDIBLES spectrum in the range of the Ca I lines. The bottom panel shows the corresponding column densities for the three different Ca I components over time. Note the very clear increase in column density for the central component (at v = 5.6 km s⁻¹), corresponding to the feature indicated by the arrow in the Ca I plot. The main component (at v = −7.5 km s⁻¹) may show a small but systematic decrease in the column density over the same time period; no systematic changes are seen for the third component (at v = 29.1 km s⁻¹). We note that the apparent component at v ≈ 49 km s⁻¹ in the Na I UV line is in fact the second line in the doublet and thus not a real component.

Fig. 5. Column densities of various atoms and molecules as measured in the main component (at v ∼ −7.5 km s⁻¹) toward HD 167264 between the oldest available archival spectrum and the EDIBLES data. For Na, the black bar indicates the measurements and uncertainties using the Na I D lines; the green star represents those from the UV lines at 3300 Å. The lower panel shows similar measurements for HD 147933. We note that we used the Na I UV lines rather than the saturated Na I D lines.

Table 1. DIBs selected for this study.

Table 2. Rest wavelengths λ_c,air and oscillator strengths f of the atomic and molecular transitions included in this study.
2023-08-29T06:50:29.600Z
2023-08-22T00:00:00.000
{ "year": 2023, "sha1": "813c698e99dd1147f676299421038ce1b1c4d666", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1051/0004-6361/202037581", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "b5a88edf3911177898453df2b7b2524598df81ab", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
255077701
pes2o/s2orc
v3-fos-license
Removal effect of Candida albicans biofilms from the PMMA resin surface by using a manganese oxide nanozyme-doped diatom microbubbler

To prevent oral candidiasis, removal of Candida biofilms from dentures is important. However, common denture cleaners are insufficiently effective in removing biofilms. A manganese oxide (MnO2) nanozyme-doped diatom microbubbler (DM) can generate oxygen gas microbubbles through a catalase-mimicking activity in hydrogen peroxide (H2O2). DM can invade and destroy biofilms with the driving force of continuously generated microbubbles. In this study, the Candida biofilm removal efficiency of co-treatment with DM and H2O2 was investigated. Diatom particles were reacted with (3-aminopropyl)triethoxysilane to prepare amine-substituted diatom particles. These particles were reacted with potassium permanganate to fabricate DMs. The morphology and components of the DM were analyzed by using a scanning electron microscope (SEM). Four types of denture base resin specimens on which biofilms of Candida albicans had been formed were treated with phosphate-buffered saline (PBS group), Polident 5-Minute (Polident group), 0.12% chlorhexidine gluconate (CHX group), 3% H2O2 (H2O2 group), or co-treatment of 3 mg/mL of DM and 3% H2O2 (DM group). The biofilm removal effect in each group was quantitatively analyzed by crystal violet assay, and the results were visually confirmed by SEM images. After each treatment, the remaining C. albicans were stained with Hoechst 33342/propidium iodide and observed with confocal laser scanning microscopy (CLSM) to evaluate their viability. MnO2 nanozyme sheets were successfully doped on the surface of the fabricated DM. Although biofilms were not effectively removed in the Polident and CHX groups, CLSM images showed that CHX was able to effectively kill C. albicans in the biofilms on all resin specimen types. According to the crystal violet analysis, the H2O2 groups removed the biofilms on heat-activated and 3D-printed resins (P < .01), but could not significantly remove the biofilms on autopolymerizing and milled resins (P = .1161 and P = .1401, respectively). The DM groups significantly removed C. albicans from all resin specimen types (P < .01).

Introduction

Oral candidiasis is denture-related stomatitis, the most common human fungal infection [1]. This oral disease is caused by Candida species [2]. Candida can penetrate the oral mucosal tissue, evade the host's defensive mechanisms, and possesses several virulence factors [3]. Candida albicans is a primary microbe in denture-related stomatitis [2], and it is present in greater numbers on the denture surface covering the mucous membrane than on the patient's mucosa itself, indicating that dentures can become infection reservoirs [4,5]. The polymethyl methacrylate (PMMA) denture base is porous and rough, so Candida can easily adhere to it and form a biofilm [6,7]. The presence of biofilms on dentures has been associated with denture stomatitis, as well as with systemic conditions, especially in elderly patients [8]. To prevent denture stomatitis caused by Candida and to protect patients' health, it is necessary to remove the biofilm formed on the denture using an appropriate cleaning procedure [9,10].
Dentures can be cleaned mechanically and chemically, as well as with a combination of the two [11]. Mechanical methods generally use brushing with or without dentifrice [12]. Since it is difficult for elderly or disabled patients to use correct mechanical cleaning methods, the use of chemical methods is recommended [13,14]. Chemical denture cleaners such as hydrogen peroxide, chlorhexidine digluconate, and commercial effervescent denture cleaners have been conventionally used [15,16,17,18]. However, these cleaners are insufficiently effective in removing matured biofilms, especially those formed by Candida [19,20]. The C. albicans biofilm on dentures has resistance to denture cleaners and antifungal drugs [21,22].

The manganese oxide nanozyme-doped diatom microbubbler (DM), a recently developed material used as an active cleaning agent, is a hollow, cylinder-shaped diatom biosilica with manganese oxide (MnO2) nanozyme sheets [23]. DM generates oxygen gas bubbles by rapidly decomposing H2O2 through the catalase-mimicking activity of MnO2 in H2O2. In a previous study, DM invaded and destroyed Escherichia coli biofilm with the driving force of continuously generated microbubbles, and H2O2 molecules diffused into the biofilm, effectively removing it [23].

This research aimed to evaluate the feasibility of using DM as a novel denture cleaner. The effect of DM on the removal of C. albicans biofilms formed on heat-activated, autopolymerizing, 3D-printed, and milled acrylic resin specimens was studied and compared to conventional denture cleaners. This study's null hypothesis is that the effectiveness of the denture cleaners tested would be similar.

Fabrication of the DMs

DMs were fabricated as described previously [23]. To prepare amine-substituted diatom particles, 2 g of diatom particles and 60 mL of toluene were placed in a three-necked round-bottom flask with a reflux condenser, a thermometer, and an N2 gas tube. After that, 0.6 mL of distilled water was mixed in and stirred at room temperature for 2 h. Then, 3.4 mL of (3-aminopropyl)triethoxysilane was added to the mixture and refluxed for 6 h at 60 °C. After cooling, the mixture was washed three times sequentially with toluene, 2-propanol, and distilled water. After drying in a vacuum desiccator for 2 days, 0.1 g of the amine-substituted diatom particles was added to 1 mL of potassium permanganate solution (50 mM) and sonicated at room temperature for 30 min. Finally, the samples were washed three times sequentially with distilled water and ethanol, and dried in an oven at 60 °C for 1 day.

Physicochemical characterization of DMs

The scanning electron microscopy (SEM) images for observing the morphology of the DMs were obtained with an Apreo S (Thermo Fisher Scientific) operating at 10.0 kV. The element mapping of the DMs was analyzed using an energy-dispersive spectrometer coupled with the SEM system at a 20.0-kV acceleration voltage.

Preparation of denture base acrylic resin specimens

A total of 240 disk-shaped (Ø10 × 2 mm) denture base acrylic resin specimens were prepared by using four fabricating techniques as follows (60 disks for each fabricating technique): autopolymerization (Vertex Self-Curing; Vertex Dental), heat-activated polymerization (Meliodent Heat Cure; Heraeus Kulzer GmbH), milling (Pink PMMA BLOCK; Huge Dental Material), and 3D printing (Denture Plus ARUM 5.0; ARUM Dentistry).
The specimens were designed with a CAD software program (Meshmixer; Autodesk, San Rafael) so that they had the same design and size regardless of the manufacturing technique. Autopolymerizing and heat-activated resin specimens were fabricated by using a conventional flasking and pressure-pack technique. Milled resin specimens were fabricated with a milling machine (DEG-5X100; ARUM Dentistry). The 3D-printed resin specimens were fabricated by using a digital light processing 3D printer (ASIGA MAX UV; ASIGA) and post-polymerized with an ultraviolet light-polymerization unit (PURE PRO; U-Dent) according to the manufacturer's instructions. The prepared resin specimens were rinsed for 5 min in an ultrasonic cleaner and immersed in distilled water for 24 h [24]. After that, the specimens were sterilized under ultraviolet light for 8 h per side and then stored in sterile bags [16].

Biofilm formation on resin specimens

To create a biofilm, the resin specimens were coated with saliva [25]. Unstimulated saliva was collected in sterile plastic tubes at least 1.5 h after eating, drinking, or tooth brushing. Ethical approval for this research was obtained from the Institutional Review Board (IRB) of Seoul National University Dental Hospital (CRI 22008). In addition, written informed consent was obtained from all the participants prior to the study. The collected saliva was centrifuged (12000 rpm, 10 min, 4 °C), and only the supernatant was mixed with PBS (pH = 7.4) at a 1:1 (v/v) ratio. This mixture was filtered through a 0.2-μm pore size Minisart syringe filter (Sartorius Stedim Biotech GmbH). The sterile resin specimens were placed in a 24-well tissue culture plate and incubated with the filtered saliva/PBS mix for 2 h at 37 °C. After removal of the saliva, 1 mL of C. albicans (ATCC 18804) cultured in yeast-malt extract broth (0.3% yeast extract, 0.3% malt extract, 0.5% peptone, and 1.0% dextrose; concentration 1 × 10⁶ cells/mL) was added and incubated aerobically for 24 h at 37 °C.

Figure 2. Candida albicans biofilm removal effect evaluated using crystal violet analysis. Denture base acrylic resin specimens were prepared by using four fabricating techniques. A, Autopolymerizing resin. B, Heat-activated resin. C, Milled resin. D, 3D-printed resin. Data are expressed as mean values ± SEMs. The Kruskal-Wallis test is performed with Dunn's multiple comparisons test. The following symbols represent statistically significant differences between the PBS and experimental groups (ns: not significant, *: P < .05, **: P < .01, ***: P < .001, and ****: P < .0001).

Biofilm removal treatments and crystal violet assay

The 60 resin specimens prepared with the same fabrication technique were randomly assigned to six groups (n = 10). After biofilm formation, the resin specimens were gently washed with PBS and treated according to each group's corresponding protocol. The concentration and application time of each group's agent were determined based on previous studies or the manufacturer's instructions: PBS group, PBS for 10 min; Polident group, Polident 5-Minute (GlaxoSmithKline) for 5 min (manufacturer's instructions); CHX group, 0.12% (w/v) CHX for 10 min; H2O2 group, 3% (v/v) H2O2 for 10 min; DM group, co-treatment of 3 mg/mL of DM and 3% H2O2 for 5 min; and negative control (no contamination, to verify the asepsis of the experiment), PBS for 10 min [16,23]. After each treatment, the remaining biofilms were quantified by crystal violet assay [26].
The resin specimens were washed with PBS and incubated with 1 mL of 1% (w/v) crystal violet solution (Junsei Chemical) for 10 min to stain the remaining biofilm on the resin specimens. After that, the specimens were rinsed three times with PBS to remove the residual dye. The crystal violet dye remaining in the biofilms was extracted by using 95% ethanol. The optical density of the dissolved crystal violet dye was quantified using a microplate reader (Epoch 2; Bio-Tek Instruments) at 570 nm.

SEM analysis

To visually confirm the crystal violet assay results, SEM analysis was performed as described previously [27]. After each treatment, the resin specimens were fixed with 1 mL of 4% paraformaldehyde for 4 h and washed three times with 1 mL of PBS for 15 min. After that, the specimens were fixed with 1 mL of 1% osmium tetroxide for 60 min and rinsed three times with 1 mL of PBS for 15 min. The specimens were dehydrated in successively increasing ethanol concentrations for 15 min each at 70%, 80%, 90%, 95%, and 100%. Subsequently, the resin specimens were treated in 1 mL of 100% hexamethyldisilazane for 20 min. After the specimens were completely dried, platinum coating was performed. Each resin specimen was examined by SEM at a 10-kV voltage.

Biofilm analysis by confocal laser scanning microscopy (CLSM)

The viability of the C. albicans remaining after each treatment was evaluated by using CLSM. The specimens were stained in broth containing 5 μg/mL Hoechst 33342 (Invitrogen-Life Technologies) and 5 μg/mL propidium iodide (Invitrogen-Life Technologies) for 30 min at 4 °C, as previously described [28]. Each specimen was washed three times with PBS and placed upside-down on glass-bottomed confocal dishes (SPL Life Science) with BacLight mounting oil (Thermo Fisher Scientific). The images were obtained by using a CLSM instrument (LSM700; Carl Zeiss) equipped with 405- and 555-nm excitation lasers.

Statistical analysis

All data are presented as mean value ± SEM (standard error of the mean). The significance of the differences among groups was determined by the Kruskal-Wallis test followed by Dunn's multiple comparisons test (α = .05), since the data's normality and homoscedasticity assumptions had been violated. GraphPad Prism 9 (GraphPad) was used for the statistical analyses.
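This nonparametric comparison can also be reproduced outside GraphPad; the sketch below uses scipy and the scikit-posthocs package (our own choice of tools, with placeholder OD readings rather than the study's data) to run a Kruskal-Wallis test followed by Dunn's pairwise comparisons. It approximates, rather than reproduces, the Prism analysis.

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(1)
# Placeholder OD570 readings, n = 10 per group (not the study's data).
od = {"PBS": rng.normal(1.4, 0.4, 10), "Polident": rng.normal(1.3, 0.3, 10),
      "CHX": rng.normal(1.2, 0.3, 10), "H2O2": rng.normal(0.7, 0.1, 10),
      "DM": rng.normal(0.4, 0.07, 10)}

h_stat, p_value = kruskal(*od.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_value:.2g}")

long = pd.DataFrame([(g, v) for g, vals in od.items() for v in vals],
                    columns=["group", "od570"])
# Dunn's test with a multiplicity adjustment; the PBS row of the resulting
# matrix compares each experimental group against the PBS control.
dunn = sp.posthoc_dunn(long, val_col="od570", group_col="group",
                       p_adjust="holm")
print(dunn.loc["PBS"])
```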
Physicochemical characterization of DMs

Fossilized Aulacoseira diatom particles in the form of hollow cylinders (approximately 10 μm in diameter and 18 μm in length) with many holes (approximately 500 nm in diameter) on their surfaces were used in this study (Figure 1A). Elemental analysis through SEM revealed that MnO2 nanozymes were uniformly doped on the diatom particles' silica surfaces (Figure 1B and C).

Crystal violet assay

The crystal violet assay results represent the total C. albicans biofilms remaining on the acrylic resin specimens after each treatment (Figure 2). In the case of autopolymerizing acrylic resin, the OD values of the DM group (0.51 ± 0.06) were significantly lower than those of the PBS group (1.41 ± 0.39) (P < .01), but the Polident, CHX, and H2O2 groups showed no significant difference from the PBS group (Figure 2A). Likewise, in the case of milled resin, only the DM group (0.17 ± 0.04) showed significantly lower OD values than the PBS group (0.56 ± 0.04) (P < .001); the other treated groups showed no significant difference from the PBS group (Figure 2B).

In the case of heat-activated acrylic resin specimens, the H2O2 and DM groups had significantly lower OD values (0.71 ± 0.10 and 0.43 ± 0.07, respectively) than the PBS group (1.42 ± 0.20) (P < .01 and P < .0001, respectively), but the Polident and CHX groups showed no significant difference from the PBS group (Figure 2C). In the case of 3D-printed resin, the OD values of the H2O2 group (1.52 ± 0.37) and the DM group (0.40 ± 0.04) were significantly lower than those of the PBS group (3.77 ± 0.32) (P < .01 and P < .0001, respectively); the other treated groups showed no significant difference from the PBS group (Figure 2D).

SEM analysis

To visually confirm the C. albicans biofilm removal efficiency, the four types of acrylic resin specimens were subjected to SEM imaging after each treatment (Figure 3). C. albicans clusters were similarly observed in the PBS, Polident, and CHX groups. In the H2O2 groups, a relatively small number of C. albicans cells was observed. In the DM groups, very few C. albicans cells and only fragmented or unrecognizable biofilm remnants were observed.

Biofilm analysis by CLSM

According to the CLSM images of all acrylic resin types (Figure 4), C. albicans cells stained with propidium iodide were more clearly observed in the CHX groups. In the H2O2 group, a relatively small number of cells was observed compared to that in the PBS group, and there were few remaining cells in the DM group.

Discussion

The present study's results demonstrated that each tested denture cleaner showed a different biofilm reduction on the acrylic resin specimens; therefore, the null hypothesis was rejected. In the crystal violet assay results, the Polident and CHX groups did not show a significant difference from the PBS group (Figure 2): Polident and CHX could not effectively remove the biofilms from the denture base resin specimens. H2O2 effectively removed the biofilms from heat-activated and 3D-printed resin specimens and showed no significant difference from the PBS group for the other resin types (Figure 2). Compared to the PBS groups, DM was the only group to have significantly removed more C. albicans from all four kinds of acrylic resin specimens (Figure 2). The SEM images were examined to visually confirm the biofilm removal efficiency, and they supported the crystal violet assay results (Figure 3). The remaining biofilms in the Polident and CHX groups were similar to those of the PBS groups, but relatively few biofilms were observed in the H2O2 groups. In the DM groups, only fragmented or unrecognizable biofilm remnants were observed. To evaluate the viability of the C. albicans remaining after each treatment, the specimens were stained and observed with CLSM (Figure 4), using a dual-staining method with Hoechst 33342/propidium iodide [28,29]. Hoechst 33342 can cross cell membranes and stains the DNA of living and dead cells; in contrast, propidium iodide selectively labels dead cells, as it only enters cells with damaged plasma membranes [29]. The CHX groups did not significantly remove biofilms but effectively killed C. albicans (Figure 4). Polident is a denture cleaning tablet that has been reported in previous studies to be effective [30]. However, in this study, Polident could not effectively remove the biofilms or kill C. albicans; these results are similar to those reported by previous studies [18,31]. CHX has been reported to show high effectiveness against C. albicans [32], but this agent did not effectively remove the biofilms according to the crystal violet assay result.
Da Silva et al. evaluated the effect of CHX on killing C. albicans in cultured biofilms, and the same results can be confirmed with the CLSM images in the present study. Therefore, CHX can kill C. albicans; however, the dead C. albicans cells and exopolymeric substances were not removed. H2O2 could remove biofilms on heat-activated and 3D-printed resins, but did not show a significant effect on the other two resin types. Martínez-Serna et al. reported that the effect of reducing C. albicans was insufficient when H2O2 was used alone [16]. The generated O2 gas bubbles nucleate and form microbubbles inside the diatoms' hollow space [23]. As the bubbles build up pressure, the DM particles continuously eject microbubbles and move randomly, propelled by the resulting driving forces. DM particles penetrate the biofilms, and the continuous generation of O2 gas and the diffusion of H2O2 become possible within the biofilms [23]. Consequently, the biofilms could be removed more effectively in the DM groups than in the H2O2 groups. Practitioners and patients can perform the cleaning procedure by placing a denture sprinkled with DM particles in a denture case containing 3% H2O2 solution.

In this study, resin specimens were prepared by four fabricating techniques to evaluate the biofilm removal effect on the various denture base acrylic resin types used in general dental practice. Unpolished resin specimens were used in this study because the main C. albicans reservoir is the denture surface covering the mucosa, and C. albicans can more easily penetrate an unpolished surface [33,34]. Regardless of the resin fabricating technique, DM effectively removed the biofilms. In this study, the DM concentration was set to 3 mg/mL, and the treatment period was 5 min, following the previous study [23]. Co-treatment with 3 mg/mL of DM and 3% H2O2 solution could markedly decrease cell viability within 5 min [23]. This study is the first to report the possibility of applying DM as a novel candidate for denture cleaning. Further studies are needed on the appropriate DM concentration and application time for denture cleaner use in clinical practice. It is also necessary to study the removal of multispecies biofilms and of biofilms on dentures used by patients. DM can be helpful in dental clinical situations where biofilm removal is required. This material has the potential to be used for cleaning other prostheses, as a mouthwash for orthodontic patients, in root canal treatment, and in the treatment of periodontitis and peri-implantitis.

Conclusions

Based on the findings of this in vitro study, the following conclusions were drawn:

1. Co-treatment of DM and H2O2 effectively removed C. albicans biofilms formed on autopolymerizing, heat-activated, milled, and 3D-printed denture base resin specimens.
2. H2O2 effectively removed the C. albicans biofilms of heat-activated and 3D-printed resin specimens, but showed no significant difference from the PBS group on autopolymerizing and milled resin.
3. CHX killed C. albicans, but did not effectively remove biofilms from the resin specimens.

Author contribution statement

Eun-Hyuk Lee: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Yun-Ho Jeon: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Sun-Jin An: Contributed reagents, materials, analysis tools or data.
Yu-Heng Deng and Hyunjoon Kong: Analyzed and interpreted the data; contributed reagents, materials, analysis tools or data. Ho-Beom Kwon and Young-Jun Lim: Conceived and designed the experiments. Myung-Joo Kim: Conceived and designed the experiments; contributed reagents, materials, analysis tools or data; wrote the paper.

Data availability statement

Data will be made available on request.
Bayesian Uncertainty Quantification for Low-Rank Matrix Completion

We consider the problem of uncertainty quantification for an unknown low-rank matrix $\mathbf{X}$, given a partial and noisy observation of its entries. This quantification of uncertainty is essential for many real-world problems, including image processing, satellite imaging, and seismology, providing a principled framework for validating scientific conclusions and guiding decision-making. However, the existing literature has mainly focused on the completion (i.e., point estimation) of the matrix $\mathbf{X}$, with little work on investigating its uncertainty. To this end, we propose in this work a new Bayesian modeling framework, called BayeSMG, which parametrizes the unknown $\mathbf{X}$ via its underlying row and column subspaces. This Bayesian subspace parametrization enables efficient posterior inference on matrix subspaces, which represent interpretable phenomena in many applications. This can then be leveraged for improved matrix recovery. We demonstrate the effectiveness of BayeSMG over existing Bayesian matrix recovery methods in numerical experiments, image inpainting, and a seismic sensor network application.

Introduction

Low-rank matrices play a vital role in modeling many scientific and engineering problems, including (but not limited to) image processing, satellite imaging, and network analysis. In such applications, however, only a small portion of the desired matrix (which we denote as X ∈ R^{m1×m2} in this article) can be observed. The reasons for this are two-fold: (i) the cost of observing all matrix entries can be high, requiring expensive computational, experimental, or communication expenditure; (ii) there can be missing observations at individual entries due to sensor malfunction, experimental failure, or unreliable data transmission. The matrix completion problem aims to complete the missing entries of X from a partial (and often noisy) observation.

Matrix completion has attracted much attention since the seminal works of Candès and Tao (2010), Candès and Recht (2009), and Recht (2011). The theory and methodology behind point estimation are now well understood for matrix completion under the assumption that X is low-rank, with various convex and non-convex optimization algorithms developed for performing this recovery. However, much of the literature (a detailed review is in Section 1.1) has focused on the completion, i.e., point estimation, of X, with little work on exploring the uncertainty of such estimates. In many scientific and engineering applications, such estimates are much more useful when coupled with a measure of uncertainty. The principled characterization (and reduction) of this uncertainty is known as uncertainty quantification (UQ); see, e.g., Smith (2013). UQ is becoming increasingly important in various applications, providing a principled framework for validating scientific conclusions and guiding decision-making.

In this paper, we address the problem of UQ for the matrix completion problem from a Bayesian perspective. We propose a novel Bayesian modeling framework, called BayeSMG, which quantifies uncertainty in the desired matrix X via posterior sampling on its underlying subspaces. BayeSMG can be viewed as a hierarchical Bayesian extension of the singular matrix-variate Gaussian (SMG) distribution (see Gupta and Nagar, 1999; Mak and Xie, 2018), with hierarchical priors on matrix subspaces.
A scalable posterior sampling algorithm is then derived for BayeSMG, which leverages the efficient subspace sampling algorithms proposed in Hoff (2007) and Hoff (2009). By integrating the subspace structure into posterior inference, we show that BayeSMG enjoys improved recovery performance and better interpretability compared with existing Bayesian models, in extensive numerical experiments and a real-world seismic sensor network application.

Existing literature

Much of the existing literature on inferring X from partial observations falls under the topic of matrix completion, i.e., the completion (or point estimation) of X from observed entries. Early works in this area include the seminal papers of Candès and Tao (2010), Candès and Recht (2009), and Recht (2011), which established conditions for exact completion via nuclear-norm minimization, under the assumption that observations are uniformly sampled without noise. This was then extended to the noisy matrix completion setting, where entries are observed with noise; important results include Candès and Plan (2010), Keshavan et al. (2010), Koltchinskii et al. (2011), and Negahban and Wainwright (2012), among others. There is now a rich body of work on matrix completion; recent overviews include Davenport and Romberg (2016) and Chi et al. (2019). However, completion focuses solely on the point estimation of matrix entries and does not provide uncertainty quantification for those unobserved. In scenarios where only a few entries are observed (see the motivating applications), this uncertainty can be as valuable as point estimates in assessing the quality of the recovered matrix.

The current research literature has generally focused on point estimation of the unknown matrix X. The problem of quantifying uncertainties in X has been relatively unexplored, but it is nonetheless an important one given the motivating applications. One recent pioneering work on this is Chen et al. (2019), which proposed entrywise confidence intervals for both convex and non-convex estimators of X, via debiasing using low-rank factors of the matrix. The resulting debiased estimators admit nearly precise non-asymptotic distributional characterizations, which in turn enable optimal construction of confidence intervals for missing matrix entries and low-rank factors. Our approach has several distinctions from this work. First, the latter is a frequentist approach with appealing theoretical guarantees, whereas our approach is Bayesian and yields a richer quantification of uncertainty on X via a hierarchical Bayesian model. Second, to derive elegant theoretical results, the latter requires a sample size complexity condition on X, similar to the minimum sample size condition in standard matrix completion analysis (see, e.g., Candès and Recht, 2009). Our UQ approach, in contrast, is applicable for any sample size n on X, particularly for the "small-n" setting where observations are limited and uncertainty quantification is most needed.

Another approach for quantifying uncertainty is via Bayesian modeling. There is a growing literature on Bayesian matrix completion, of which the most popular approach is the Bayesian Probabilistic Matrix Factorization (BPMF) method of Salakhutdinov and Mnih (2008). BPMF adopts a probabilistic model of the form X = M N^T, where M ∈ R^{m1×R'} and N ∈ R^{m2×R'}, and R' is an upper bound on the matrix rank. Each row of the factorized matrices M and N is then assigned an i.i.d. Gaussian prior, N(μ_M, Σ_M) and N(μ_N, Σ_N), respectively.
Conjugate normal hyperpriors are then assigned on the row and column means, μ_M ∼ N(0, Σ_M β) and μ_N ∼ N(0, Σ_N β), with Inverse-Wishart hyperpriors (with scale matrix W) on the row and column covariance matrices Σ_M and Σ_N. The hyperparameters β and W are typically specified to provide weakly- or non-informative priors. This model allows for an efficient Gibbs sampler, which performs conjugate sampling on each row of M and each row of N, along with conjugate updates of the mean vectors (μ_M, μ_N) and covariance matrices (Σ_M, Σ_N). With this, the BPMF has been shown to tackle problems as large as the Netflix dataset, with millions of user-movie ratings. A similar Bayesian model was proposed in Mai and Alquier (2015), with priors on each entry of M and N. Many other existing Bayesian matrix completion methods (e.g., Lawrence and Urtasun, 2009; Zhou et al., 2010; Babacan et al., 2011; Alquier et al., 2014) can be viewed as variations or extensions of this BPMF framework.

From a modeling perspective, the key novelty in the BayeSMG model is that it requires orthonormality in the factorized matrices, whereas the BPMF does not. Such a factorization can be viewed as parametrizing X via its singular value decomposition (SVD). This yields several advantages for our method, which we demonstrate later. First, by explicitly parametrizing row and column subspaces as model parameters, BayeSMG can incorporate prior knowledge on subspaces within the prior specification of such parameters. This prior information is often available in many signal processing and image processing problems, e.g., known signal structure or image features. Second, BayeSMG allows for direct inference on subspaces of X via posterior sampling, which is of direct interest in many problems, e.g., in sensor network localization (Zhang et al., 2020; an application we tackle later on) and topology identification problems (Eriksson et al., 2012). For subspace inference, our approach avoids performing an additional SVD step for every posterior sample (in contrast to the BPMF), which significantly speeds up inference for high-dimensional problems. Finally, and perhaps most importantly, BayeSMG can leverage this posterior learning on subspaces to provide improved inference on X. Compared to the BPMF, our approach can yield faster posterior contraction for unobserved entries when the underlying matrix has a low-rank structure, in both numerical simulations and applications. It thus enables a more accurate estimate and a more precise uncertainty quantification of X than the BPMF.

The BayeSMG model also provides several novel theoretical insights. In Section 4, we show that the maximum a posteriori (MAP) estimator takes the form of a regularized matrix estimator, which provides a connection between the proposed method and existing matrix completion techniques. We also show that the BayeSMG model provides a probabilistic model of matrix coherence (Candès and Recht, 2009). Coherence has been widely used in the matrix completion literature as a theoretical condition for recovery, measuring the "recoverability" of a low-rank matrix. Through this, we then establish an error monotonicity result for BayeSMG, which provides a reassuring check on the UQ performance of the proposed model.

The paper is organized as follows. Section 2 introduces the BayeSMG model. Section 3 presents an efficient posterior sampling algorithm for X via manifold sampling on its subspaces. Section 4 reveals connections between the BayeSMG model and coherence, and its impact on error convergence.
Section 5 investigates numerical experiments with synthetic and image data. Section 6 explores a real-world seismic sensor network application. Section 7 concludes with discussions.

The SMG model

We first describe the singular matrix-variate Gaussian (SMG) distribution and how it can be utilized for modeling matrix subspaces.

Problem set-up

Let X ∈ R^{m1×m2} be the matrix of interest, and assume X is low-rank, i.e., R := rank(X) ≪ m1 ∧ m2. Let [m] := {1, · · · , m}. Suppose X is sampled with noise at an index set Ω ⊆ [m1] × [m2] of size |Ω| = n, yielding observations

Y_{i,j} = X_{i,j} + ε_{i,j}, (i, j) ∈ Ω. (2.1)

Here, Y_{i,j} is the observation at the entry indexed by (i, j), corrupted by noise ε_{i,j}. In this work, we assume ε_{i,j} ∼ N(0, η²), i.e., the noise on each entry follows an i.i.d. Gaussian distribution with zero mean and variance η². Furthermore, let Y_Ω := (Y_{i,j})_{(i,j)∈Ω} ∈ R^n denote the vector of noisy observations, and let X_{Ω^c} be the vector of unobserved matrix entries, where Ω^c := ([m1] × [m2]) \ Ω is the set of unobserved indices. With this framework, the desired goal of uncertainty quantification (UQ) can be made more concrete: given the noisy observations Y_Ω, we wish to not only estimate the unobserved matrix entries X_{Ω^c}, but also quantify a notion of uncertainty on both observed and unobserved entries (since observation noise is present).

SMG model

We adopt the following SMG model for the low-rank matrix X, which we assume to be normal with a zero mean.

Definition 2.1 (SMG model, Definition 2.4.1 of Gupta and Nagar, 1999). Let Z ∈ R^{m1×m2} be a random matrix with i.i.d. entries Z_{i,j} ∼ N(0, σ²). The random matrix X has a singular matrix-variate Gaussian (SMG) distribution if X = P_U Z P_V. We denote this as X ∼ SMG(P_U, P_V, σ², R).

In other words, a realization from the SMG distribution can be obtained by first (i) simulating a matrix Z from a Gaussian ensemble with variance σ², i.e., a matrix with i.i.d. N(0, σ²) entries, then (ii) performing a left and right projection of Z using the projection matrices P_U and P_V. Recall that the projection operator P_U = U U^T ∈ R^{m1×m1} maps a vector in R^{m1} to its orthogonal projection on the R-dimensional subspace U spanned by the columns of U. By performing this projection, the resulting matrix X = P_U Z P_V can be shown to be of rank R < m1 ∧ m2, with its row and column spaces U and V corresponding to the subspaces of P_U and P_V. The matrix X also lies in the linear space T := {P_U M P_V : M ∈ R^{m1×m2}}. With a small choice of R, this provides a flexible probabilistic model for the low-rank matrix X.

The SMG distribution provides several appealing properties for modeling low-rank matrices. First, it provides a prior modeling framework on the matrix X involving its row and column subspaces U and V. It is known from Chikuse (2012) that, for each projection operator P ∈ R^{m×m} of rank R, there exists a unique R-dimensional hyperplane (or R-plane) in R^m containing the origin which corresponds to the image of such a projection. This connects the space of rank-R projection matrices and the Grassmann manifold G_{R,m−R}, the space of R-planes in R^m. Viewed this way, the projection matrices parametrizing X ∼ SMG(P_U, P_V, σ², R) encode useful information on the row and column spaces of X. Second, since the projection of a Gaussian random vector is still Gaussian, the left-right projection of the Gaussian ensemble Z results in each entry of X being Gaussian-distributed as well. This is useful for deriving the UQ properties of the BayeSMG model.
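To make the generative definition concrete, a draw from the SMG model can be simulated in a few lines of numpy. The QR-based construction of uniform frames below is one convenient choice for illustration, not part of the model definition:

```python
import numpy as np

def sample_smg(m1, m2, R, sigma2, rng):
    """Draw X ~ SMG(P_U, P_V, sigma2, R) with uniformly sampled subspaces."""
    # (i) Uniform orthonormal frames via QR of Gaussian matrices.
    U, _ = np.linalg.qr(rng.standard_normal((m1, R)))
    V, _ = np.linalg.qr(rng.standard_normal((m2, R)))
    P_U, P_V = U @ U.T, V @ V.T            # rank-R projection matrices
    # (ii) Gaussian ensemble Z with i.i.d. N(0, sigma2) entries.
    Z = np.sqrt(sigma2) * rng.standard_normal((m1, m2))
    # (iii) Left and right projection yields the rank-R matrix X.
    return P_U @ Z @ P_V

rng = np.random.default_rng(0)
X = sample_smg(m1=8, m2=8, R=2, sigma2=1.0, rng=rng)
print(np.linalg.matrix_rank(X))            # 2, almost surely
```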
We now state several distributional properties of the SMG model.

Lemma 2.1. Suppose X ∼ SMG(P_U, P_V, σ², R). Then:

(a) The density of X is given by

p(X) = (2πσ²)^{−R²/2} etr{−X^T X/(2σ²)}, X ∈ T,

where etr(·) := exp{tr(·)}.

(b) Consider the block decomposition of P_V ⊗ P_U into the blocks indexed by the observed entries Ω and the unobserved entries Ω^c, and denote its Ω × Ω block by R_N(Ω). Conditional on the observed noisy entries Y_Ω, the unobserved entries X_{Ω^c} follow the multivariate Gaussian distribution N(X^P_{Ω^c}, Σ^P_{Ω^c}), with

X^P_{Ω^c} = [P_V ⊗ P_U]_{Ω^c,Ω} [R_N(Ω) + γ²I]^{−1} Y_Ω, (2.4)

Σ^P_{Ω^c} = σ² { [P_V ⊗ P_U]_{Ω^c,Ω^c} − [P_V ⊗ P_U]_{Ω^c,Ω} [R_N(Ω) + γ²I]^{−1} [P_V ⊗ P_U]_{Ω,Ω^c} }. (2.5)

Here, γ² = η²/σ², and ⊗ is the Kronecker product.

Remark. Lemma 2.1 reveals two key properties of the SMG model. First, prior to observing data, part (a) shows that the low-rank matrix X lies on the space T and follows a degenerate multivariate Gaussian distribution with mean zero and covariance matrix σ²(P_V ⊗ P_U). Second, after observing the noisy entries Y_Ω, part (b) shows that the conditional distribution of X_{Ω^c} (the unobserved entries of X) given Y_Ω is still multivariate Gaussian, with closed-form expressions for its mean vector X^P_{Ω^c} and covariance matrix Σ^P_{Ω^c} in (2.4) and (2.5).

Can we directly use the SMG model for UQ?

Lemma 2.1 provides a closed-form posterior distribution for the low-rank matrix X after observing the noisy observations Y_Ω. It points to a potential way of computing confidence intervals for each entry of X, assuming the underlying row and column subspaces U and V are known. Of course, in practice, such subspaces are never known with certainty. One solution might be to plug in point estimates of U and V (estimated from data) within the predictive equations in Lemma 2.1, to directly estimate unobserved entries and their uncertainties.

We investigate the efficacy of this plug-in approach via a simple numerical example. The simulation set-up is as follows. Let m = m1 = m2 = 8 be the row and column dimensions of the matrix, and let R = 2 be its rank. We first simulate two random orthonormal matrices U and V of size m × R, via a truncated SVD of an m × m matrix with i.i.d. U[0, 1] entries. With P_U = U U^T and P_V = V V^T, the "true" low-rank matrix is then simulated from the SMG model X ∼ SMG(P_U, P_V, σ² = 1, R = 2). Finally, noisy observations are sampled via (2.1) with noise variance η² = 0.5². In total, 36 entries are observed (56.25% of the total entries), with these entries chosen uniformly at random. From this, we can obtain point estimates of the subspaces U and V, by first estimating X via nuclear norm minimization (Candès and Plan, 2010), a popular method for matrix completion, and then taking the row and column subspaces of this matrix estimate via SVD. These subspace estimates are then plugged into the expressions in Lemma 2.1 for UQ. This process is then replicated 50 times.

Figure 1(a) plots, for a representative simulation run, the point estimates and 95% plug-in confidence intervals (CIs) for each matrix entry using Lemma 2.1, with the corresponding true value marked in red. We see that these intervals provide poor coverage performance, since many of the true matrix entries are not within these intervals. For this replication, the coverage ratio is only 43.8%, and across the 50 replications, the average coverage ratio is only 46.1%, meaning only around half of the confidence intervals cover the true entries. This poor coverage suggests that this CI approach (with plug-in subspace estimates) can significantly underestimate the underlying uncertainty of point estimates, which is unsurprising since the uncertainty from subspace estimation is not incorporated when using Lemma 2.1.
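For concreteness, the plug-in computation just described can be written in a few lines. This is a brute-force sketch of the Gaussian conditioning in Lemma 2.1(b) as reconstructed above, treating estimated frames U and V as if they were the truth; it is practical only for small matrices such as the 8 × 8 example here:

```python
import numpy as np

def plugin_predict(Y_obs, obs_idx, U, V, sigma2, eta2):
    """Plug-in conditional mean/sd of the unobserved entries of vec(X),
    given noisy observations Y_obs at indices obs_idx (list of (i, j))."""
    m1, m2 = U.shape[0], V.shape[0]
    A = np.kron(V @ V.T, U @ U.T)                 # Cov(vec(X)) / sigma2
    obs = [i + j * m1 for (i, j) in obs_idx]      # column-major vec indices
    obs_set = set(obs)
    unobs = [k for k in range(m1 * m2) if k not in obs_set]
    gamma2 = eta2 / sigma2
    K = A[np.ix_(obs, obs)] + gamma2 * np.eye(len(obs))
    mean = A[np.ix_(unobs, obs)] @ np.linalg.solve(K, Y_obs)
    cov = sigma2 * (A[np.ix_(unobs, unobs)]
                    - A[np.ix_(unobs, obs)] @ np.linalg.solve(K, A[np.ix_(obs, unobs)]))
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, sd                                # 95% CI: mean +/- 1.96*sd
```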
Figure 1(b) plots, for a representative simulation, the point estimates and 95% posterior predictive intervals using the proposed BayeSMG method, which accounts for subspace uncertainty by assigning hierarchical priors to the subspaces U and V of the SMG model. We see that our approach yields much better coverage: the 95% intervals, which are now slightly wider, cover the true matrix entries well. For this replication, the coverage ratio is 95.3%, and across the 50 replications, the average coverage ratio is 93.9%, which is much closer to the nominal coverage rate of 95% than the earlier plug-in approach. This shows that the proposed method can indeed provide better uncertainty quantification of X via a fully-Bayesian model specification on matrix subspaces.

Model specification

We now present the hierarchical specification for the proposed Bayesian SMG model, or BayeSMG for short. We begin by introducing the matrix von Mises-Fisher (vMF) distribution, which will serve as the prior model for the row and column orthonormal frames U and V. We then present a Gibbs sampling algorithm that makes use of a reparametrization of the SMG model for efficient posterior sampling.

The matrix von Mises-Fisher distribution (Khatri and Mardia, 1977; Mardia and Jupp, 2009) provides a useful class of distributions on the row and column frames, which lie on a so-called Stiefel manifold. A Stiefel manifold (Chikuse, 2012) consists of all orthonormal frames of rank R in R^m; this is denoted V_{R,m} hereafter. The matrix vMF distribution assumes the following probability density function for a matrix W on V_{R,m}:

p(W) = etr(F^T W) / 0F1(m/2; F^T F/4),

where 0F1(·; ·) is a hypergeometric function of matrix argument, and F ∈ R^{m×R} is the concentration matrix. We denote this distribution by W ∼ MF(m, R, F). The matrix vMF distribution provides conditionally conjugate priors for a wide range of multivariate models, including cluster analysis (Gopal and Yang, 2014) and factor models (Hoff, 2013). One appeal of this class of distributions is that it can be efficiently sampled; Hoff (2009) provides an efficient sampling scheme for such distributions.

We will leverage this useful family of priors via the following reparametrization of the BayeSMG model. The following proposition gives a reformulation of the SMG model under uniform subspace priors on U and V. Suppose X ∼ SMG(P_U, P_V, σ², R) with uniform priors on P_U and P_V, and let X = U D V^T be its SVD, with singular values diag(D) = (d_k)_{k=1}^R not necessarily in decreasing order. Then:

1. The singular vectors U and V follow independent priors MF(m1, R, 0) and MF(m2, R, 0), respectively.

2. The singular values diag(D) = (d_k)_{k=1}^R follow the repulsed normal distribution, with density

p(d_1, · · · , d_R) ∝ Π_{k<l} |d_k² − d_l²| Π_{k=1}^R exp{−d_k²/(2σ²)}, d_k > 0. (3.2)

The proof of this proposition is provided in the supplementary material (Yuchi et al., 2022). The first part of the proposition shows that the use of uniform priors on the projection matrices P_U and P_V corresponds to independent MF(m1, R, 0) and MF(m2, R, 0) priors for the singular vectors U and V, which are uniform priors on the Stiefel manifolds V_{R,m1} and V_{R,m2}, respectively. The second part shows that the singular values in D follow the repulsed normal distribution, which is closely connected with the distribution of singular values for a Gaussian ensemble (Shen, 2001). This proposition then motivates the following reparametrization of the BayeSMG model:

X = U D V^T, U ∼ MF(m1, R, F1), V ∼ MF(m2, R, F2), diag(D) ∼ RN, (3.3)

where RN is the repulsed normal distribution in (3.2), and the priors on U, V and D are independently specified. When little is known a priori about the matrix subspaces, one can set the concentration matrices as F1 = F2 = 0, which provides non-informative priors on U and V.
In problems where some prior information is available on matrix subspaces, one can elicit a good choice of prior parameters for the vMF priors via a moment matching approach (Wang and Zhou, 2009). We show in the next section that this reparametrization allows for a Gibbs sampling algorithm which makes use of conditionally conjugate priors for efficient posterior sampling. Finally, we complete the Bayesian specification by assigning Inverse-Gamma priors on the variance parameters,

σ² ∼ IG(α_σ, β_σ), η² ∼ IG(α_η, β_η), (3.4)

where IG(α, β) denotes the Inverse-Gamma distribution with shape parameter α and rate parameter β. Table 1 summarizes the full Bayesian model specification for BayeSMG.

Posterior sampling

Using the reparametrized model (3.3), we now present a subspace Gibbs sampler for posterior sampling from the BayeSMG model, specifically on the parameters Θ = {U, D, V, σ²} given partial and noisy observations Y_Ω. We first introduce the sampler under complete observation of the noisy matrix Y, then describe a data imputation procedure for posterior sampling under partial observations Y_Ω.

Consider first the setting where complete observations of Y are obtained. It can then be shown (see the supplementary material for a full derivation) that the full conditional distributions of U, D, V and σ² take the following forms (equation (3.5)): the full conditionals of U and V are matrix vMF distributions, that of diag(D) is a repulsed normal distribution with a location shift, and that of σ² is Inverse-Gamma. One can then perform these full conditional updates cyclically for posterior sampling of [Θ|Y] via Gibbs sampling. These full conditional sampling steps are related to the Gibbs sampler proposed in Hoff (2007) for probabilistic SVD. As mentioned previously, there are efficient sampling algorithms for the matrix vMF distribution (Hoff, 2009; Jauch et al., 2020), which enable efficient full conditional sampling of U and V. The full conditional distribution of diag(D) follows the aforementioned repulsed normal distribution with a location shift of μ (denoted RN(μ, δ²)), with density

p(d_1, · · · , d_R) ∝ Π_{k<l} |d_k² − d_l²| Π_{k=1}^R exp{−(d_k − μ_k)²/(2δ²)}, (3.6)

where d_k > 0, k = 1, · · · , R. We have found that this can be quite efficiently sampled via a Metropolis-Hastings sampler (Metropolis et al., 1953), with an "independent" proposal distribution (i.e., independent of the current state) set as a non-central, multivariate t-distribution with mean vector μ and scale parameter δ.

Consider now the setting where only partial noisy observations Y_Ω are available. We describe a posterior sampling algorithm for [Θ|Y_Ω], which makes use of a modification of the above Gibbs sampler for [Θ|Y]. The idea is to first sample from the joint distribution [Θ, Y_{Ω^c}|Y_Ω] of both the parameters Θ and the unobserved noisy entries Y_{Ω^c}, then keep only the marginal samples of the parameters Θ. With an initialization Θ = Θ', the joint distribution [Θ, Y_{Ω^c}|Y_Ω] can be sampled via the following Gibbs sampling steps: (i) sample the missing noisy entries Y_{Ω^c} ∼ [Y_{Ω^c}|Y_Ω, Θ']; (ii) sample the parameters Θ ∼ [Θ|Y_Ω, Y_{Ω^c}] = [Θ|Y] via the full conditional updates in (3.5). Since the missing entries Y_{Ω^c} are assumed to be conditionally independent of the observed entries Y_Ω given X = U D V^T, step (i) is equivalent to sampling [Y_{Ω^c}|X], which amounts to simulating the observation noise in Y_{Ω^c} given the ground truth X_{Ω^c}. Step (i) can thus be viewed as a data imputation step, which imputes the missing entries in the noisy matrix Y. Step (ii) performs the earlier posterior sampling steps for the parameters Θ given the full noisy matrix Y. It is worth noting that step (i) depends on an implicit assumption that the entries are either missing completely at random (MCAR) or missing at random (MAR); see Little and Rubin (2019) for further discussion of missing data modeling.
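A minimal numpy sketch of the data imputation step (i); the step (ii) full-conditional draws of (3.5) are indicated only in comments, since their exact forms are derived in the supplementary material:

```python
import numpy as np

def impute_step(Y, mask, X, eta2, rng):
    """Step (i): replace unobserved noisy entries with the current X plus
    simulated N(0, eta2) observation noise; observed entries are untouched."""
    noise = np.sqrt(eta2) * rng.standard_normal(Y.shape)
    return np.where(mask, Y, X + noise)

# Outer loop (schematic). Step (ii) would cycle through the full
# conditionals of (3.5): matrix vMF draws for U and V, a shifted
# repulsed-normal draw for diag(D), and an Inverse-Gamma draw for
# sigma^2; these draws are omitted here.
rng = np.random.default_rng(1)
Y = rng.standard_normal((8, 8))
mask = rng.random((8, 8)) < 0.5        # True where an entry is observed
X_current = np.zeros((8, 8))           # would be U D V^T within the sampler
Y_full = impute_step(Y, mask, X_current, eta2=0.05**2, rng=rng)
```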
When the entries are missing not at random (MNAR), the sampling of [Y_{Ω^c}|Y_Ω, Θ'] can become much more complicated, since it would depend on the underlying MNAR mechanism for the missing entries. One approach is to adopt a probabilistic model for the MNAR entries (see, e.g., Hernández-Lobato et al., 2014 for one such model), then sample [Y_{Ω^c}|Y_Ω, Θ'] given this model. There are, however, several limitations to this approach: (i) the conditional distribution [Y_{Ω^c}|Y_Ω, Θ'] may be computationally expensive to sample from in the MNAR setting, and (ii) if the MNAR model is misspecified, the resulting recovery of the matrix X can be highly biased and inaccurate. In the absence of prior information on how matrix entries are missing (which is the case in many applications), it may be preferable to adopt Algorithm 1 for posterior inference. We will show later (in Section 5.2) that BayeSMG is empirically robust to mild violations of this implicit MAR assumption for missing entries.

Algorithm 1 summarizes the above steps of the posterior sampling algorithm. The algorithm is first initialized with estimates U_[0], D_[0], and V_[0] obtained from a nuclear-norm completion of X (Carson et al., 2012), and σ²_[0] is randomly initialized from the prior (3.4). Next, the missing noisy entries Y_{Ω^c} are imputed using step (i), then a posterior draw is made using step (ii) via the Gibbs steps in (3.5). This is then iterated until the desired number of posterior samples is obtained. Using the posterior samples (U_[t], D_[t], V_[t]) at each iteration t, we can obtain a sample X_[t] = U_[t] D_[t] V_[t]^T from the desired posterior distribution [X|Y_Ω]. These posterior samples {X_[t]}_{t=1}^T can then be used for the target goal of uncertainty quantification: the mean of these samples provides a point estimate X̂ of the recovered matrix, and the variability around X̂ provides a measure of uncertainty for this recovery. While the computational complexity of this algorithm is difficult to establish given the complex manifold sampling steps, we found this posterior sampler to be quite efficient and scalable in practice. For a relatively large 256 × 256 matrix, the sampler takes around 1 minute to generate T = 1000 samples on a standard laptop computer (Intel i7 CPU and 16GB RAM), which is quite efficient given the size of the matrix. We report computation times for larger matrices in the numerical studies later.

Inference on matrix rank

The BayeSMG model as presented above assumes the rank of the matrix X is known, which is often not the case in practice. There has been some literature on this problem of rank estimation for matrix inference. Shapiro et al. (2018) investigates a lower bound on the matrix rank needed for the matrix completion problem to be stable. Hoff (2007) proposes a Bayesian dimension selection method that models the dimension of matrix subspaces via a singular value decomposition (SVD), thus allowing for a Gibbs sampler for sampling the matrix singular vectors, singular values, and rank. While one could conceptually adopt a similar fully Bayesian approach for the rank R here, we have found such an approach to be too computationally expensive for the high-dimensional matrices in the later numerical experiments, where m1 and m2 can be on the order of thousands. This is because Algorithm 1 would need to be performed for each choice of rank R, which can be expensive for large m1 and m2.
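Given posterior draws {X_[t]}_{t=1}^T from Algorithm 1, the summaries used for UQ in the experiments below are straightforward to compute. A minimal sketch (quantile intervals are used here as a simple stand-in for the HPD intervals reported in the paper):

```python
import numpy as np

def posterior_summaries(samples):
    """samples: array of shape (T, m1, m2) holding posterior draws X_[t].
    Returns the posterior mean (point estimate) and entrywise 95% interval
    widths (quantile intervals as a stand-in for HPD intervals)."""
    X_hat = samples.mean(axis=0)
    lo, hi = np.quantile(samples, [0.025, 0.975], axis=0)
    return X_hat, hi - lo
```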
For such high-dimensional applications, we instead favor the following maximum a posteriori (MAP) approach for rank inference, which sacrifices a richer quantification of uncertainty for computational efficiency and scalability. Consider the MAP estimate of the unknown matrix X, which can be formulated as

X̂ = argmax_X max_R [Y_Ω|X] [X|R] [R]. (3.7)

Here, [X|R] follows the BayeSMG prior specification (3.3) given matrix rank R, and [R] is a prior distribution assigned to the matrix rank. Under uniform subspace priors and a flat prior on R over {1, · · · , m1 ∧ m2}, it can be shown (see Section 4.1 for a full derivation) that the MAP estimate X̂ is well-approximated by the nuclear-norm formulation

X̂ = argmin_X (1/2) Σ_{(i,j)∈Ω} (Y_{i,j} − X_{i,j})² + λ ‖X‖_*. (3.8)

Here, ‖X‖_* is the nuclear norm of X (the sum of its singular values; see Candès and Tao, 2010), and λ is a regularization parameter. The optimization problem (3.8) can be efficiently solved via convex optimization algorithms (see Section 1.1 for further details). In practice, λ can be estimated via cross-validation (Friedman et al., 2017) on the observed entries Y_Ω. We first divide these entries into multiple folds. For each fold, we use nuclear-norm minimization (3.8) to estimate the entries of that fold, then compute the corresponding cross-validation error. We then select the tuning parameter λ* that minimizes the sum of these cross-validation errors over all folds. With this estimate λ*, an (approximate) MAP estimate X̂ can be obtained by solving (3.8) with λ = λ*. This in turn yields an approximate MAP estimate of R via the rank of the matrix estimate X̂. Finally, this rank estimate can be plugged into Algorithm 1 for uncertainty quantification on the matrix X. For high-dimensional problems with either m1 or m2 large, this plug-in MAP approach for rank estimation can yield significant computational savings over a fully Bayesian treatment.

Theoretical insights

We now provide some theoretical insights into the BayeSMG model. We first discuss an interesting link between the maximum a posteriori (MAP) estimator and regularized estimators in the literature, then present a connection between the model uncertainty from the BayeSMG model and coherence, which is then used to prove an error monotonicity result on uncertainty quantification.

Connection to regularized estimators

The following lemma reveals a connection between the BayeSMG model and existing completion methods.

Lemma 4.1 (MAP estimator). Assume the BayeSMG model in Table 1, with F1 = F2 = 0, η² and σ² fixed, and a uniform prior on rank R. Conditional on Y_Ω, the MAP estimator for X becomes

X̂ = argmin_X { (1/(2η²)) Σ_{(i,j)∈Ω} (Y_{i,j} − X_{i,j})² + (1/(2σ²)) ‖X‖_F² + (log(2πσ²)/2) rank²(X) }. (4.1)

The MAP estimator X̂ in (4.1) connects the proposed model with existing deterministic matrix completion methods (see Davenport and Romberg, 2016 and references therein). Consider the following approximation to the MAP formulation (4.1). Treating the log(2πσ²) rank²(X) term as a Lagrange multiplier, one can view it as a constraint on rank²(X), or equivalently, on rank(X). Replacing rank(X) by the nuclear norm ‖X‖_* (its tightest convex relaxation; see Keshavan et al., 2010), and treating this new constraint as a Lagrange multiplier, the optimization in (4.1) becomes

X̂ = argmin_X (1/2) Σ_{(i,j)∈Ω} (Y_{i,j} − X_{i,j})² + λ { α ‖X‖_* + (1 − α) ‖X‖_F² }, (4.2)

for some choice of λ > 0 and α ∈ (0, 1). Using (4.2) to approximate (4.1), we can then view the MAP estimator as an analogue, for noisy matrix completion, of the elastic net estimator (Zou and Hastie, 2005) from linear regression. To see the connection between the MAP estimator X̂ and existing matrix completion methods, set α = 1 in (4.2).
The problem then reduces to the nuclear-norm formulation in (3.8), which is widely used for deterministic matrix completion (Candès and Recht, 2009; Candès and Tao, 2010; Recht, 2011). This provides an intuitive connection between the proposed Bayesian model and existing completion methods, which we leveraged earlier for efficient inference on matrix rank.

Uncertainty and coherence

Consider next the following definition of subspace coherence from Candès and Recht (2009), ignoring scaling factors.

Definition 4.1 (Coherence, Definition 1.2 of Candès and Recht, 2009). Let U ∈ G_{R,m−R} be an R-plane in R^m, and let P_U be the orthogonal projection onto U. The coherence of subspace U with respect to the i-th basis vector e_i is defined as μ_i(U) := ‖P_U e_i‖², and the coherence of U is defined as μ(U) = max_{i=1,...,m} μ_i(U).

In words, coherence measures how correlated a subspace U is with the standard basis vectors. A large value of μ_i(U) suggests that U is highly correlated with the i-th basis vector e_i, in that the projection of e_i onto U preserves much of its original length; a small value of μ_i(U) suggests that U is nearly orthogonal to e_i, so a projection of e_i onto U loses most of its length. Figure 2 visualizes these two cases using the projection of three basis vectors onto a two-dimensional subspace U. Note that the projection of the red vector onto U retains nearly unit length, so U has near-maximal coherence for this basis vector. The projection of the black vector onto U results in a considerable length reduction, so U has near-minimal coherence for this basis vector. The overall coherence of U, μ(U), is largely due to the high coherence of the red basis vector.

In the matrix completion literature, coherence is widely used to quantify the recoverability of a low-rank matrix X. Here, the same notion of coherence arises in a different context within the proposed model's uncertainty quantification. Lemma 2.1 provides the basis for this connection. Consider first the case where no matrix entries have been observed. From Lemma 2.1(a), vec(X) follows the degenerate Gaussian distribution N{0, σ²(P_V ⊗ P_U)}. The variance of the (i, j)-th entry of X can then be shown to be

Var(X_{i,j}) = σ² μ_i(U) μ_j(V). (4.3)

Hence, before observing data, the model uncertainty for entry X_{i,j} is proportional to the product of the coherences of the row and column spaces U and V with respect to the i-th and j-th basis vectors. Put another way, BayeSMG assigns greater variation to matrix entries with higher subspace coherence in either the row or column index. This is quite appealing given the original role of coherence in matrix completion, where larger row (or column) coherences imply greater "spikiness" of entries; our framework accounts for this by assigning greater model uncertainty to such entries.

Consider next the case where noisy entries Y_Ω have been observed. Let us adopt a slightly generalized notion of coherence.

Definition 4.2 (Cross-coherence). The cross-coherence of subspace U with respect to the basis vectors e_i and e_{i'} is defined as ν_{i,i'}(U) := ⟨P_U e_i, P_U e_{i'}⟩.

The cross-coherence ν_{i,i'}(U) quantifies how correlated the basis vectors e_i and e_{i'} are after a projection onto U. For example, in Figure 2, the pair of red/blue projected basis vectors have negative cross-coherence for U, whereas the pair of blue/black projected vectors have positive cross-coherence. When i = i', this cross-coherence reduces to the original coherence in Definition 4.1. Define now the cross-coherence vector ν_i(U) := (ν_{i,i'}(U))_{(i',j')∈Ω} ∈ R^n, and analogously ν_j(V) := (ν_{j,j'}(V))_{(i',j')∈Ω} ∈ R^n.
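Both quantities are simple to compute from an orthonormal frame U; a small illustrative sketch:

```python
import numpy as np

def coherences(U):
    """mu_i(U) = ||P_U e_i||^2; with an orthonormal frame U (P_U = U U^T),
    this is the squared norm of the i-th row of U."""
    mu = np.sum(U**2, axis=1)
    return mu, mu.max()                    # (mu_i)_i and mu(U)

def cross_coherence(U, i, i2):
    """nu_{i,i'}(U) = <P_U e_i, P_U e_{i'}> = <U^T e_i, U^T e_{i'}>."""
    return float(U[i] @ U[i2])
```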
From equation (2.4) in Lemma 2.1, the conditional variance of entry X_{i,j} for an unobserved index (i, j) ∈ Ω^c becomes

Var(X_{i,j}|Y_Ω) = σ² { μ_i(U) μ_j(V) − ν_{i,j}^T [R_N(Ω) + γ²I]^{−1} ν_{i,j} }, (4.4)

where ν_{i,j} := ν_i(U) • ν_j(V), and • denotes the entry-wise (Hadamard) product. The expression in (4.4) yields a nice interpretation. From a UQ perspective, the first term in (4.4), μ_i(U)μ_j(V), is simply the unconditional uncertainty for entry X_{i,j}, prior to observing data. The second term, ν_{i,j}^T [R_N(Ω) + γ²I]^{−1} ν_{i,j}, can be viewed as the reduction in uncertainty after observing the noisy entries Y_Ω. This uncertainty reduction is made possible by the correlation structure imposed on X via the SMG model. Equation (4.4) also yields valuable insight in terms of subspace correlation: the first term μ_i(U)μ_j(V) can be seen as the joint correlation between (i) the row space U and row index i, and (ii) the column space V and column index j, prior to any observations, while the second term can be viewed as the portion of this correlation explained by the observed indices Ω.

Error monotonicity

This link between coherence and uncertainty then sheds insight on expected error decay, based on the following proposition.

Proposition 4.1 (Variance reduction). Suppose X follows the BayeSMG model in Table 1, with F1 = F2 = 0 and fixed σ² and η². Let Y_Ω contain the noisy entries at Ω ⊆ [m1] × [m2], and let Y_{Ω∪(i,j)} contain an additional noisy observation at (i, j) ∈ Ω^c. For any index (k, l) ∈ [m1] × [m2], the expected variance of X_{k,l} can be decomposed as

Var(X_{k,l}|Y_Ω) = E[Var(X_{k,l}|Y_{Ω∪(i,j)}) | Y_Ω] + Cov²(X_{k,l}, X_{i,j}|Y_Ω) / {Var(X_{i,j}|Y_Ω) + η²}, (4.5)

where Var(X_{k,l}|Y_{Ω∪(i,j)}) is provided in (4.4).

Remark. Proposition 4.1 shows, given the observed indices Ω, the reduction in uncertainty (as measured by variance) for an unobserved entry X_{k,l}, after observing an additional entry at index (i, j). The last term in (4.5) quantifies this reduction and can be interpreted as follows. For an unobserved index (k, l) ∉ Ω ∪ (i, j), the amount of uncertainty reduction is related to a "signal-to-noise" ratio, where the signal is the conditional squared-covariance between the "unobserved" entry X_{k,l} and the "to-be-observed" entry X_{i,j}, and the noise is the conditional variance of the "to-be-observed" entry. The insight of error monotonicity then follows.

Corollary. Let Ω_N denote the first N indices of any sampling sequence, and let ε²_N(k, l) := E[(X_{k,l} − X^P_{k,l})²] denote the expected squared error of the conditional mean X^P_{k,l} after N entries have been observed. Then ε²_{N+1}(k, l) ≤ ε²_N(k, l) for any (k, l) ∈ [m1] × [m2] and N = 1, 2, · · · .

Remark. This corollary shows that, for any sampling sequence and any index (k, l), the expected squared error in estimating X_{k,l} with the conditional mean X^P_{k,l} is monotonically decreasing as more samples are collected. This is intuitive, since one expects to gain greater accuracy and precision on the unknown matrix X as more entries are observed. The fact that the proposed model quantifies this monotonicity property provides a reassuring check on our UQ approach.

Numerical experiments

We now investigate the performance of the proposed BayeSMG method in numerical experiments and compare it to the BPMF method (Salakhutdinov and Mnih, 2008), a popular Bayesian matrix completion method in the literature.

Synthetic data

For the first numerical study, we assume the true matrix X ∈ R^{24×24} is generated from the SMG distribution, i.e., X ∼ SMG(P_U, P_V, σ² = 1, R = 2), with uniformly sampled subspaces U and V. The entries are assumed to be missing at random, and the observed entries are contaminated by noise with variance η² = 0.05², which we presume to be known. The prior specifications are as follows.
For BayeSMG, we assign a weakly-informative prior σ² ∼ IG(0.01, 0.01) on the variance parameter σ², with non-informative manifold hyperparameters F1 = F2 = 0. For BPMF, we assign the recommended weak Inverse-Wishart priors on the covariance matrices, Σ_M ∼ IW(R = 2, I) and Σ_N ∼ IW(R = 2, I). We then ran 10,000 MCMC iterations for both methods, with the first 2,000 samples taken as burn-in. Standard MCMC convergence checks were performed via trace plot inspection (see Figure 3(b)) and the Gelman-Rubin statistic (Gelman and Rubin, 1992).

We employ two metrics to compare the posterior contraction and UQ performance of the two methods. The first is the Mean Frobenius Error (MFE), defined as

MFE := (1/T) Σ_{t=1}^T ‖X_[t] − X‖_F.

The MFE measures the Frobenius norm of the difference between the posterior predictive samples {X_[t]}_{t=1}^T and the original matrix X. A smaller MFE suggests better recovery and faster posterior contraction for the desired matrix X. The second metric is the Mean Spectral Distance (MSD), defined as

MSD := (1/T) Σ_{t=1}^T ‖U_[t] U_[t]^T − U U^T‖₂,

where U (or U_[t]) is any frame spanning the subspace U (or U_[t]). The MSD measures the spectral distance (Calderbank et al., 2015) between the posterior samples {U_[t]}_{t=1}^T of the row subspaces (equivalently, {V_[t]}_{t=1}^T of the column subspaces) and the true row subspace U (equivalently, the true column subspace V). A smaller MSD suggests better recovery and posterior contraction for the matrix subspaces.

The first two plots in Figure 3(a) visualize the true matrix X and the observed Y_Ω, with 20% of the entries observed uniformly at random. Here, the rank R is estimated via the approximate MAP approach in Section 3.3. The two subsequent plots visualize the posterior mean estimates of X using BayeSMG and BPMF. We see that the BayeSMG method provides visually better recovery of the matrix X, with a lower posterior MFE than the BPMF method. The first two plots in Figure 3(b) visualize the true and estimated row spaces using BayeSMG and BPMF. We again see that BayeSMG gives a visually better recovery of the row space of X (the same holds for its column space), with a lower posterior MSD than BPMF. The next two plots show the trace plots for the first-row coherence μ₁ and the first matrix entry X_{1,1}, which is unobserved. We see that the posterior samples for BayeSMG concentrate tightly around the true coherence and matrix values, whereas the posterior samples for BPMF fluctuate much more around the truth. These observations suggest that, when the matrix is generated from the assumed prior model, BayeSMG yields much faster posterior contraction than BPMF, leading to more accurate and precise estimates of X and its subspaces. We show next, in the following image recovery and seismic sensor applications, that the BayeSMG method provides similar improvements over BPMF by modeling and integrating subspace information.

Image inpainting

Image inpainting is a fundamental problem in image processing (Bertalmio et al., 2000; Cai et al., 2010), which aims to recover and reconstruct images with missing pixels and noise corruption. It appears in numerous applications where image data are susceptible to unreliable data transmission and scratches. Take, for example, the problem of solar imaging (Xie et al., 2012). When a satellite transmits an image of the sun back to the earth, many pixels will inevitably be lost or corrupted due to instabilities in the transmission process, and the missing pixels become a problem when the image is scaled up.
In this case, the quantification of image uncertainty can be as important as the recovery itself, since this UQ provides insight into the quality of the recovered image features in different regions. There has been some work on applying deterministic matrix completion methods to image inpainting (e.g., Xue et al., 2017), but little has been done on uncertainty quantification. Our method addresses the latter goal.

We consider the aforementioned solar imaging problem, where the matrix X is a 256 × 256 image of a solar flare. The pixel intensity value is encoded from 0 to 255 and represents the pseudo-color used in the images. We normalize the pixel intensities to have zero mean and unit variance. Half of the pixels in this image are observed uniformly at random, then corrupted by Gaussian noise with η² = 0.05². We note that, for this problem, the recovery and UQ of the row and column subspaces are of interest as well. This is because image features are often represented in the row and column spaces; here, these subspaces may represent domain-specific, interpretable phenomena, such as different classes of solar flares, certain shapes, and sunspots. Furthermore, human eyes are typically not as sensitive to high-frequency image features; therefore, a few SVD components can often capture the vital features of an image, making its rank low.

For BayeSMG and BPMF, we estimate the rank to be R = 18 following the approximate MAP approach in Section 3.3, and perform 1,000 iterations of MCMC, with a burn-in period of 200. As before, MCMC convergence checks were performed via trace plot inspection and standard diagnostics. Figure 4 shows the original solar image, its partial observations, and the recovered images using BayeSMG and BPMF via their posterior predictive means, as well as the corresponding uncertainties via the 95% highest posterior density (HPD) interval widths (Hyndman, 1996). We see that the BayeSMG method provides a much better recovery, with a noticeably lower MFE of 31.0 compared to the BPMF method (350.8). Visually, the BayeSMG recovery captures the key features of the image, e.g., different types of solar flares. The BPMF recovery, on the other hand, loses much of the smaller-scale features and contains significant blocking defects. One plausible explanation is that, when a low-rank subspace structure is present in X (as is the case here), the proposed method can better learn and integrate this structure for improved recovery. In addition, an inspection of the HPD plots shows that BayeSMG provides more precise estimates of the recovered image, with narrow posterior HPD intervals across the whole matrix. In contrast, the BPMF is much more uncertain of its recovery: its entry-wise posterior density intervals are considerably larger, particularly for pixels with low intensities. Computation-wise, the posterior sampling for BayeSMG can be carried out within one minute on a standard laptop (Intel i7 processor with 16GB RAM), which is quite fast considering the relatively large image size.

Additionally, we study the effect of noise on BayeSMG performance. We consider the same solar image problem, where half of the normalized matrix entries are observed and corrupted with noise. We tested Gaussian errors with variances η² = 0.05², 0.1², 0.3², and 0.5². Figure 5 shows the recovered images and the posterior estimate η̂ of the noise standard deviation in each case. The MFEs for the four cases are 31.00, 35.39, 57.48 and 75.83, respectively.
The quality of recovery improves as the noise decreases, as expected. For small to moderate noise levels, BayeSMG yields a good recovery of the solar flare image, suggesting that it is quite robust to noise. In all four cases, the posterior estimate η̂ is slightly larger than the actual noise standard deviation η. One reason may be that the estimated noise level captures both the true observation error and small variations in estimating the low-rank matrix X from the few observed entries. This difference becomes smaller as η increases, which is unsurprising since the error variance then dominates the underlying low-rank matrix signal.

To demonstrate the scalability of BayeSMG, we consider next a much higher-dimensional image of the Georgia Tech campus. This image is converted to a gray-scale matrix of size 1911 × 3000 and standardized to zero mean and unit variance. As before, half of the pixels are observed uniformly at random, then corrupted by Gaussian noise with η² = 0.05². To reduce the computation time for posterior sampling, we fix the rank at R = 30 for both BayeSMG and BPMF, instead of estimating the rank using the procedure in Section 3.3. We run the MCMC sampler for 500 iterations after a burn-in period of 100. Figure 6 shows the true image, its partial observations, and the recovered image from BayeSMG, as well as its corresponding uncertainty.

Figure 7: Performance of BayeSMG on MNAR image pixels. In the first row, the first image is the original matrix, the second is the noisy matrix with entries sampled uniformly at random (MAR), and the third is its recovery estimate via the posterior mean of BayeSMG. In the second row, the first image is the noisy matrix with entries sampled MNAR, and the second image is its recovery estimate via BayeSMG.

The MFE of this recovery is 1005.1, which is again noticeably smaller than that of the BPMF recovery (3004.8). The recovered BayeSMG image captures the original image's main features, which shows that the proposed method can learn and integrate the subspace structure for recovery. As before, BayeSMG is quite confident of this completion, with narrow posterior HPD intervals over all pixels. Despite this being a much larger image, we can still carry out BayeSMG on the same standard laptop, albeit with a runtime of close to two hours. This suggests that the proposed method can yield effective probabilistic matrix recovery in high-dimensional settings.

Recall from Section 3.2 that the proposed posterior sampler for BayeSMG implicitly assumes the matrix entries are missing at random. To see how robust BayeSMG is to slight deviations from this MAR assumption, we investigate the recovery performance of BayeSMG for a 256 × 256 lighthouse image, where the entries are missing in a not-at-random setting. In particular, we consider the MNAR case where image pixels with a higher intensity value (i.e., darker) are more likely to be observed, and pixels with a lower intensity value (i.e., lighter) are more likely to be missing. Here, 40% of the entries with intensities higher than the population median are observed at random, 25% of the entries with intensities equal to the median are observed at random, and 10% of the remaining entries are observed at random. Overall, around 25.1% of the image pixels are observed under this scheme, but the probability of an individual pixel being missing depends on its true intensity.
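The intensity-dependent sampling scheme above is straightforward to reproduce; a small sketch mirroring the stated percentages (the helper name is ours, not from the paper):

```python
import numpy as np

def mnar_mask(img, rng):
    """Observation mask for the lighthouse experiment: pixels above the
    median intensity observed w.p. 0.40, at the median w.p. 0.25, and
    below it w.p. 0.10."""
    med = np.median(img)
    p = np.where(img > med, 0.40, np.where(img == med, 0.25, 0.10))
    return rng.random(img.shape) < p
```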
Figure 7 shows the sampled image pixels for this MNAR setting, with the corresponding image recovery via the posterior mean of the BayeSMG method. For comparison, we also show the sampled pixels under an MCAR setting (where every entry is observed independently with probability 25%), with its corresponding image recovery via BayeSMG. We estimate the ranks in both scenarios via the procedure in Section 3.3. For the MNAR case, the MFE is 154.35, compared to an MFE of 148.33 for the MCAR case. While the error is slightly higher for the MNAR case (around 4% larger), we see from Figure 7 that there is little discernible visual difference between the recovered images in the two cases. This suggests that the proposed BayeSMG sampler is quite robust to mild violations of the implicit missing-at-random assumption of Algorithm 1. However, if prior information on the MNAR nature of the missing entries is known, then we can integrate such information within BayeSMG, yielding further improvements in recovery performance (see Section 3.2).

Seismic sensor network recovery

Seismic imaging is widely applied for finding oil and natural gas beneath the surface of the earth. Ambient Noise Seismic Imaging (ANSI) (Bensen et al., 2007) is a relatively new technique for seismic imaging with great potential. It uses "ambient noises" instead of actively collected signals and is non-invasive to the environment (compared to traditional active imaging techniques). ANSI has proved useful for imaging shallow earth structures; it utilizes the pairwise cross-correlation function between signals recorded by seismic sensors, followed by time-frequency analysis. From these cross-correlations, we can determine the time delay between each pair of sensors. These pairwise time delays are then combined into a data matrix, which is useful for further seismic studies. In a recent study (Xu et al., 2019) on the Old Faithful geyser at Yellowstone National Park, 133 sensors were deployed in its vicinity to collect ambient noise signals for investigating geological structures. Figure 8(a) shows the locations of these sensors.

One shortcoming of ANSI, however, is that many pairwise cross-correlations do not contain identifiable signals; that is, the peak in the cross-correlation is unobserved, since ANSI works on weak ambient noises. This results in missing entries in the 133 × 133 data matrix. To determine whether a cross-correlation is "missing", we first identify which correlations have an unsatisfactory signal-to-noise ratio (SNR), by inspecting the standard deviation ξ outside of the main wave lobe relative to the magnitude of the wave peak g. A correlation is deemed missing if g/ξ < 20. We note that the entries of this cross-correlation matrix X are observed with noise due to background vibrations caused by bubble collapse and boiling water. Here, the standard deviation of the noise is estimated to be η = 0.05 from an inspection of sensor readings during a period when only noise signals are present; this is then used to initialize η in the Gibbs sampler.

Figure 9: Performance comparison between BayeSMG and BPMF on the ambient noise cross-correlation time delay data matrix. The first plot (from the left) shows the observed entries in the delay matrix, with missing entries in white. The second plot shows the completed matrix via the posterior mean from BayeSMG. The third and fourth plots visualize the widths of the entry-wise 95% HPD intervals from BayeSMG and BPMF.
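The SNR screening rule described above can be expressed compactly; in the sketch below, the cross-correlation trace and the boolean mask marking the main wave lobe are hypothetical inputs (how the lobe is located is not specified in the text):

```python
import numpy as np

def deemed_missing(xcorr, lobe_mask, threshold=20.0):
    """Flag a cross-correlation trace as 'missing' when the peak magnitude g
    within the main wave lobe, relative to the standard deviation xi outside
    the lobe, falls below the threshold: g / xi < 20."""
    g = np.max(np.abs(xcorr[lobe_mask]))   # peak magnitude within the lobe
    xi = np.std(xcorr[~lobe_mask])         # noise level outside the lobe
    return (g / xi) < threshold
```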
Figure 9 shows the observed noisy matrix entries Y_Ω. To proceed with the ANSI analysis, one then needs to estimate the missing entries of the delay data matrix X; Bensen et al. (2007) show that such a matrix is indeed low-rank. Here, uncertainty quantification is crucial for estimating geologic structure and identifying the source of activities. With this uncertainty, engineers can better interpret the wave tomography generated from time delay estimates, and identify the parts where estimates are accurate and where they are not. This in turn impacts the accuracy of the downstream analysis, which subsequently provides greater insight into reconstruction quality.

Figure 9 visualizes the recovery and UQ performance of BayeSMG and BPMF, using an estimated rank of R = 15 via the approach in Section 3.3. We see that BayeSMG yields much more precise estimates (i.e., narrower HPD interval widths) compared to BPMF. In particular, when an entire row or column of X is missing, the uncertainties returned by BPMF can be very high, which reduces the usefulness of its recovered entries. In contrast, the proposed BayeSMG method, by leveraging subspace information, can yield more precise inference on these missing rows and columns. One underlying reason is that the BayeSMG approach explicitly integrates subspace modeling for recovery and UQ. From the visualization of Y_Ω in Figure 9, we see clearly visible bright stripes on the left and top edges of the plot, which strongly suggests the presence of low-rank subspaces in X. This is not a surprise, since we know several sensors have highly correlated signals due to their proximity. BayeSMG appears to exploit this subspace structure to provide more confident predictions, whereas BPMF yields much higher uncertainty in its inference, particularly in rows and columns with little to no observations. While the ground truth for the entire matrix X is not known for this sensor network, we would expect from the previous experiments that BayeSMG yields improved recovery performance over BPMF, particularly in rows and columns with few observations.

With posterior samples of X in hand, we can then use the associated subspace information to locate (or match) sensors that contain highly correlated signals. This sensor matching is helpful in seismology studies, since it can be used to estimate the dimension and capacity of the hydrothermal reservoir of the geyser (Wu et al., 2017). We first perform an SVD of the posterior mean X̂ and find the singular vector with the largest singular value. We then inspect all the rows of the matrix X̂, select the rows most aligned with this vector, and check these rows to locate the most significantly correlated sensors. Figure 8(b) shows the locations of the 12 most correlated sensors and their relative directions from each other. The identified sensors are among the closest to the Old Faithful geyser, and their observations are dominated by the highly fractured and porous geological structure underground adjacent to the geyser. Using readings from these sensors, researchers can identify a distinct pattern in the waveform of tremor signals, which suggests a variety of geological structures underneath the geyser.

Conclusion

We proposed a new BayeSMG model for uncertainty quantification in low-rank matrix completion. A key novelty of the BayeSMG model is that it parametrizes the unknown matrix X via manifold prior distributions on its row and column subspaces.
This Bayesian subspace parametrization allows for direct posterior inference on matrix subspaces, which can be used for improved matrix recovery. We introduced a Gibbs sampler for posterior inference, which provides efficient posterior sampling even for matrices with dimensions on the order of thousands. Additionally, we showed that BayeSMG provides a probabilistic interpretation of subspace coherence, which we used to establish an error monotonicity result for UQ. We then demonstrated the effective recovery and UQ performance of BayeSMG on simulated data, image data, and an application to seismic sensor network recovery. Code for the BayeSMG sampler, with illustrative examples, will be released as a MATLAB package.

For future work, it would be interesting to design the locations of observations to control the uncertainties, exploring the connection with the experimental design literature, e.g., integrated mean-squared error designs (Sacks et al., 1989) or distance-based designs (Mak and Joseph, 2018). The exploration of this Bayesian uncertainty quantification for guiding sequential sampling of entries (see Mak et al., 2021) is also of interest. We would also like to further investigate the problem of rank estimation for matrix completion, including theoretical guarantees and an efficient fully Bayesian implementation, extending the work of Hoff (2007). Another interesting topic to explore is an extension of the i.i.d. Gaussian error assumption to account for skewed or spatially correlated errors.
The Next Generation of Metadata-Oriented Testing of Research Software

Research software refers to software development tools that accelerate discovery and simplify access to digital infrastructures. However, although research software platforms can be built to be more innovative and powerful than ever before, with increasing complexity there is a greater risk of failure if unplanned-for and untested program scenarios arise. As systems age and are changed by different programmers, the risk of a change impacting the overall system increases. In contrast, systems that are built with less emphasis on program code and more emphasis on data that describes the application can be more readily changed and maintained by individuals who are less technically skilled but are often more familiar with the application domain. Such systems can also be tested using automatically generated advanced testing regimes.

INTRODUCTION

Research software refers to software development tools that accelerate discovery and simplify access to digital infrastructures [1]. These so-called research software platforms are an essential tool for interdisciplinary research, in which collaboration becomes a key factor, and many examples of such platforms have emerged recently in a wide variety of domains, including health, environment and astronomy. However, although research software platforms can be built to be more innovative and powerful than ever before, with increasing complexity there is a greater risk of failure if unplanned-for and untested program scenarios arise [2][3][4]. As systems age and are changed by different programmers, the risk of a change impacting the overall system increases. In contrast, systems that are built with less emphasis on program code and more emphasis on data that describes the application (i.e., metadata [5]) can be more readily changed and maintained by individuals who are less technically skilled but are often more familiar with the application domain. Such systems can also be tested using automatically generated advanced testing regimes.

This paper discusses how the descriptions of metadata-driven research software systems are being transformed into automated testing regimes that exercise and stress the systems within a systematic and reproducible framework. In this way, the paper discusses some aspects of this transformation towards the next generation of metadata-oriented testing of research software. For more than fifteen years, members of the Computer Systems Group at the University of Waterloo's David R. Cheriton School of Computer Science (UWCSG) have been building complex operational data management systems, including research software, that are comprised of more metadata and less program code.

II. TOWARDS THE NEXT GENERATION OF METADATA-ORIENTED TESTING

The question is: How can metadata, that is, data that describes the application, support the creation of automated testing regimes that exercise and stress the systems within a systematic and reproducible framework? In the next sections, we discuss some answers to this question.

A. Towards Metadata-Driven Systems

In traditional systems, programmers write detailed programs that control user interactions and data accesses in great detail. While software development kits and various utility function packages can be parameterized, the values of parameters are usually defined within the calling functions of the program code.
In a system with a greater emphasis on descriptive data (metadata), definitions are stored for at least two key aspects of how that data should be managed. In particular, how data access facilities should access the actual application data (i.e., database queries, file access methods, etc.), as well as how that data should be presented and accessed by users through a user interface, are all stored in a metadata storage facility such as additional database tables. A relatively simple and generalized "kernel" of data access and user interface code can retrieve configuration and context settings from the metadata storage on demand, incorporate additional data from other data sources as needed and perform the required operations.

In systems with more detailed code there is a greater requirement for highly skilled programmers, a scarce and expensive resource, to create and maintain the system. With a greater emphasis on data that describes the application, the need for programmers is reduced, as is the likelihood of coding errors. Administration of these metadata-driven systems is performed by editing the metadata; no program code changes are required for most changes to the system. Because the metadata is stored in a structured form, such as tables in a relational database system, it can be edited using a simple forms editing facility, and these changes can be made by someone without programming experience. They are often made by staff of partner organizations who have more knowledge of the application domain and much less technological capability.

B. Metadata-Driven Testing

UWCSG members have also made significant use of automated test suites for testing and tuning system performance. As many of the systems evolved to a web-based architecture, web-based technologies for testing were adapted to a metadata-driven architecture. One version that is frequently used is based on a php-webdriver implementation. In the metadata-driven architecture, collections of webdriver actions are stored in database tables along with the results of each execution of a suite and of each action within the suite. The testing framework can also identify expected results for actions. A small and simple PHP codebase was created to retrieve test suite directives from the database, pass them to the php-webdriver interface, retrieve results, store those in the database and then repeat with the next test suite directive.

C. Test Replay and Repeatability

Test suites, expected outputs and actual outputs can all be preserved indefinitely in database tables for future analysis. As a system is changed, it is highly desirable to rerun every possible test scenario to ensure that old problems that were believed to have been resolved don't reappear and that new problems don't arise.

D. Distributed Multi-Site Testing

In the current version of the application development framework that is used at UWCSG, called the "Web-based Information Development Environment" ("WIDE"), data entry form fields are described with several fields, including the following:
- The "entity name" is used in an HTML <input name="(identifier)"> data field definition.
- "Data type" defines the type of data to be permitted.
- "Required" specifies if the form can be submitted successfully without any value in this field.
- "Maximum width" is an optional integer value that specifies the maximum number of characters that are permitted in the data field (usually specified as the "size" attribute of an <input> data entry field).

A minimal sketch of such a metadata store, and of the generic kernel that consumes it, is given below.
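The table layout, names, and validation rules in this sketch are illustrative assumptions, not the actual WIDE schema:

```python
import sqlite3

# Illustrative metadata store: one row per data entry form field.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE field_meta (
    entity_name TEXT, data_type TEXT, required INTEGER, max_width INTEGER)""")
db.execute("INSERT INTO field_meta VALUES ('variable1', 'integer', 1, 3)")

def validate(form_data):
    """Generic kernel: check submitted values against the metadata alone,
    so changing validation rules means editing rows, not code."""
    errors = []
    for name, dtype, required, width in db.execute("SELECT * FROM field_meta"):
        value = form_data.get(name, "")
        if required and value == "":
            errors.append(f"{name}: a value is required")
        elif width and len(value) > width:
            errors.append(f"{name}: exceeds {width} characters")
        elif dtype == "integer" and value and not value.lstrip("-").isdigit():
            errors.append(f"{name}: not an integer")
    return errors

print(validate({"variable1": "4251"}))   # ['variable1: exceeds 3 characters']
```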
The following test suite directives are used in the examples below:

    open (url)                       // open a web page
    clear "name=(field)"             // clear any existing value from a form field
    type "name=(field)","(text)"     // enter text into a form field
    click "name=(entity)"            // click an entity on a page
    displayScreen                    // capture a screen snapshot of the application (browser) window

These directives enable web pages to be opened, form fields to be cleared of any existing value, text to be entered into a form field and an entity on a page to be clicked. Several other directives are supported, as described in the Selenium Grid server framework at http://www.seleniumhq.org/projects/grid/.

E. Transformation-Based Generation

With both the application and test suites defined as metadata, it is possible to define a variety of transformations that use the application definition to generate tests for a wide variety of aspects of the system. For example, if a field on a data entry form is defined as accepting a numeric value in the range zero to 250, tests can be generated to attempt entering data values of zero, 250, -1, 251 and a variety of other possible values. As well, non-numeric values can be attempted to verify that appropriate diagnostics are generated and that system response is acceptable. A sketch of such a generator is given after the example below. To test the data entry facility, including field value validation, for a form field named "variable1" of type integer that is a required value with up to three digits, on a form with a "Submit" button named "actionSubmit", a test sequence similar to the following can be used:

    // verify that an empty form field for "variable1" is not permitted…
    open (url for the form, possibly including logon/password sequence)
    clear "name=variable1"
    click "name=actionSubmit"
    displayScreen     // record the result screen

    // verify that a value of "0" in "variable1" is acceptable…
    open (url)
    clear "name=variable1"
    type "name=variable1","0"
    click "name=actionSubmit"
    displayScreen
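The following sketch derives boundary-value test sequences of the kind shown above from a field's numeric range; the function name and candidate-value set are illustrative assumptions:

```python
def generate_range_tests(field, lo, hi, submit="actionSubmit"):
    """Derive boundary-value test sequences (directive lines) from the
    metadata of a numeric field accepting values in [lo, hi]."""
    candidates = [lo, hi, lo - 1, hi + 1, "abc"]   # boundaries, out-of-range, non-numeric
    suites = []
    for value in candidates:
        suites.append([
            "open (url)",
            f'clear "name={field}"',
            f'type "name={field}","{value}"',
            f'click "name={submit}"',
            "displayScreen  // record the result screen",
        ])
    return suites

for suite in generate_range_tests("variable1", 0, 250):
    print("\n".join(suite) + "\n")
```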
F. Logging for Diagnosis and Additional Testing

An additional component of metadata-driven systems, and indeed of many traditional systems, can further aid in system testing, maintenance and analysis. Detailed application logging can help greatly to reproduce a request or series of requests that resulted in a program error. Logs should contain enough data to allow the request sequence to be completely reproduced. In multi-user scenarios, such as web-based systems, the timestamps for log entries can be invaluable for accurately reproducing the order and timing of a sequence of requests. In the WIDE toolkit, access logs are stored in database tables and periodically archived, depending on the application usage. Logs are very helpful in identifying when an entity was changed or accessed, what userid is associated with the access and what the old and new values of the entity are.

Logging data can also be used to generate many additional tests for several purposes. From a sustainability perspective, ensuring that problems don't recur is a high priority. As well, focused system performance improvements and tuning can be performed by exercising a system with frequently used requests, as identified from an analysis of the operational logs. In addition, the test engine can itself be tested by using testing engine log data.

G. Software Agents for Metadata-Based Testing

Just as a data management system can be created with a greater emphasis on metadata and a reduced emphasis on program code, so also software agents can be defined with more emphasis on metadata. We use the name "Declarative Software Agents" to describe such an entity [6]. By using descriptive data to define the agent's input, rules and actions, and then recording log data as the output of the agent, actions such as autonomous consistency checking between two sites become much simpler to describe and perform. In many data management scenarios, repeated instances of the same data are viewed as a maintenance problem and a challenge to be avoided or overcome, but with a declarative agent together with results validation, these scenarios are transformed into opportunities to achieve improved data caching and overall system performance.

H. Self-Testing Components

An additional test suite directive, "checkpointDB", will be added to the test suite environment in the future to enable an entire database, or some part of it, to be saved in a place that can be accessed by a subsequent test operation. When the system either performs an action within a test suite or fails to do so, a comparison against the checkpointed version will be made to determine whether the desired action(s), and only the desired action(s), were performed correctly. Tests like these can be run either in response to a manual request or as the result of a timed or other autonomous decision criterion.

As systems are used, maintained and age, automated testing and detailed logging are two facilities that help to ensure that the system continues to perform to its expected standards. In conclusion, with metadata-driven testing, we believe that in the future, as more metadata is captured and used, it will be possible to automate the generation of more aspects of system testing, in particular: what needs to be tested, when it should be tested and how to test it.

III. CONCLUSIONS AND FUTURE WORK

This paper has discussed some aspects of the next generation of metadata-oriented testing of research software, aiming to address some key aspects of this research direction. We believe the insights described in this paper can contribute to improving research software testing, an area that has received much less attention than design and implementation.
Light bullets from a Mid-IR femtosecond filament in air

We numerically investigated the formation of light bullets under filamentation of a 3.8-µm femtosecond pulse in the presence of anomalous GVD in humid air. The dispersion of humid air in the infrared spectral region was characterized by model fits of the real part of the refractive index from the HITRAN database. The fit for the 2.8-4.2 µm region describes the areas of anomalous GVD near 3.1 µm and 4 µm in humid air. During the nonlinear propagation of a femtosecond pulse in humid air, a wave packet compressed in space and time – a light bullet – appears in the central time layers of the pulse and moves to the pulse tail. The duration of the light bullet reaches a few-cycle value, and the energy fluence in the light bullet cross-section is above 1 J/cm^2. Together with the light bullet formation, the spectrum of the pulse broadens strongly and covers the spectral region from 3 µm to 5 µm. The interference model was used to investigate the frequency-angular distribution of the supercontinuum spectral components.

Introduction

The starting point of the filament, the length of its plasma channels and the efficiency of the spectral conversion into supercontinuum can be controlled by the input pulse parameters, according to the nonlinear medium in which the pulse propagates [1][2][3]. Nowadays, high-power Mid-IR femtosecond sources have started to be used for filament formation in gaseous media [4][5]. The generated coherent [6] supercontinuum covers several octaves, which is very promising for molecular spectroscopy. The filamentation of a 3.1-µm pulse under anomalous GVD was investigated numerically in [7].

Figure 1. Refractive index of air. a) Real part of the air refractive index from the HITRAN database (blue line) and model fits (red line) [11]. b) Imaginary part of the air refractive index from the HITRAN database (blue line) [11] and the numerical approximation (red line) used in this work. Temperature 290.65 K, humidity 10%, atmospheric pressure 101330 Pa.

In our numerical simulation the resonance areas are represented as areas with high absorption (figure 1 b). In this work we numerically investigated the formation of light bullets (LBs) under filamentation of a 3.8-µm femtosecond pulse in the presence of anomalous GVD in humid air. The behaviour of the dispersion parameter k₂ near 3.8 µm for humid air (figure 2) is close to the behaviour of k₂ near 2 µm in fused silica [12]. The anomalous GVD in fused silica plays a key role in the formation of few-cycle wave packets that have quasi-soliton properties [13]. To investigate the frequency-angular distribution of the supercontinuum spectral components during filamentation of the 3.8-µm femtosecond laser pulse, we used the interference model [14]. This model is based on the assumption that a broadband supercontinuum point source moves along the high-intensity areas in the filament. The interference of the emitted spectral components defines the modulation of the frequency-angular spectrum of the supercontinuum.
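As an illustration of how the dispersion parameter is obtained from a refractive-index fit, the sketch below numerically evaluates k₂ = d²k/dω² for a toy Cauchy-type model of n(λ); the fit coefficients are illustrative placeholders, not the HITRAN-based fit of [11] (which, unlike a Cauchy fit, captures the anomalous regions near the resonances):

```python
import numpy as np

c = 2.99792458e10  # speed of light, cm/s

def gvd_fs2_per_cm(lam_um, n_func):
    """Numerically estimate k2 = d^2 k / d omega^2 on a wavelength grid.
    A sketch with a toy index model, not the paper's HITRAN-based fit."""
    lam_cm = lam_um * 1e-4
    omega = 2.0 * np.pi * c / lam_cm          # angular frequency, rad/s
    order = np.argsort(omega)                 # omega must be increasing
    omega, lam_um = omega[order], lam_um[order]
    k = n_func(lam_um) * omega / c            # wavenumber k = n*omega/c, 1/cm
    k2 = np.gradient(np.gradient(k, omega), omega)   # s^2/cm
    return omega, k2 * 1e30                   # fs^2/cm (1 s^2 = 1e30 fs^2)

lam = np.linspace(2.8, 4.2, 400)                 # um, the fitted region
toy_n = lambda l: 1.0 + 2.7e-4 + 1.5e-5 / l**2   # illustrative Cauchy-type fit
omega, k2 = gvd_fs2_per_cm(lam, toy_n)
print(k2[len(k2) // 2])                          # k2 near the middle of the grid
```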
Results and discussion

The transformation of the laser pulse starts from the Kerr self-focusing of the central time layers (figure 3, z = 21.1 m). Then the self-induced laser plasma defocuses the pulse tail. The peak plasma density reaches the value N_e max = 6×10^14 cm^-3. In the presence of the Kerr-induced positive self-phase modulation (SPM), the anomalous GVD leads to power flows from the pulse front and tail to the pulse center. This regime of pulse propagation under positive SPM and anomalous GVD leads to the formation of a wave packet localized in space and time – a light bullet [15].

The pulse central wavelength λ₀ = 3.8 µm in our numerical simulations lies in the region of anomalous GVD in humid air [11]: k₂ = -1.25 fs²/cm. The input pulse parameters are τ_FWHM = 170 fs and a₀ = 1 cm. Under the considered conditions, the light bullet appears at a propagation distance of about z = 21 m. During filamentation in humid air, the duration and radius of the LB decrease from the input pulse values to τ_LB = 30 fs and a_LB = 250 µm (figure 3). The peak intensity reaches the value I_LB = 5×10^13 W/cm², and the energy fluence in the LB cross-section is above F_LB = 1 J/cm². The results of the numerical simulation, based on both models, describe the same LB evolution during the pulse propagation. The LB forms in the central time slices and then shifts to the pulse tail (figures 3 and 4). At the same time, the spectrum of the LB broadens and the central wavelength shifts to the red side of the spectrum. At the propagation distance z = 23 m the supercontinuum spectrum covers the region from 3 µm to 5 µm. The dip in the SC around 4.25 µm appears due to the absorption in the resonance region (figure 1 b) [11].

Using the interference model [14], we found the frequency-angular distribution of the supercontinuum components of the 3.8-µm light bullet in humid air. We used the refractive index n(ω) from the HITRAN database [11] and the numerically obtained group velocity of the light bullet υ_LB, which is smaller than the group velocity of the input pulse υ_g: υ_LB < υ_g. The length of the emitting region L in the filament was also estimated from our numerical simulation. It was assumed that the broadband source emits at all wavelengths and in all directions equally. We found that conical emission forms in the Stokes and anti-Stokes regions of the supercontinuum spectrum S_interf(θ,λ) (figure 5) as the length of the emitting area in the filament increases. The strongest interference maximum appears in the long-wavelength region of the supercontinuum spectrum S_interf(θ,λ). The wavelength of the constructively interfering radiation increases with the angle of its propagation. This behaviour of the long-wavelength supercontinuum components is in good agreement with the experimental results on the registration of conical emission of Mid-IR radiation in different types of filamentation processes [16,17].
Singular Liouville fields and spiky strings in $\mathbb{R}^{1,2}$ and $SL(2,\mathbb{R})$

The closed string dynamics in $\mathbb{R}^{1,2}$ and $SL(2,\mathbb{R})$ is studied within the scheme of Pohlmeyer reduction. In both spaces two different classes of string surfaces are specified by the structure of the fundamental quadratic forms. The first class in $\mathbb{R}^{1,2}$ is associated with the standard lightcone gauge strings, and the second class describes spiky strings and their conformal deformations on the Virasoro coadjoint orbits. These orbits correspond to singular Liouville fields with the monodromy matrices $\pm I$. The first class in $SL(2,\mathbb{R})$ is parameterized by the Liouville fields with vanishing chiral energy functional. Similarly to $\mathbb{R}^{1,2}$, the second class in $SL(2,\mathbb{R})$ describes spiky strings, related to the vacuum configurations of the $SL(2,\mathbb{R})/U(1)$ coset model.

Introduction

Integrability of string dynamics in AdS spaces is one of the most actively discussed topics of the last decade, due to its important role in the AdS/CFT correspondence. String equations have their most simple and symmetric form in conformal coordinates. The conformal gauge freedom of string dynamics in AdS × S can be fixed by turning the spherical part of the Virasoro constraint into a positive constant h [1]. Then, the AdS part of the constraint becomes −h and the string description splits into two schemes of Pohlmeyer reduction [2]. They lead to generalized sin-Gordon and sinh-Gordon equations for the sphere and the AdS space, respectively. These equations allow a Lax pair representation, which is a basis for the integration of string dynamics. Details of this approach and a list of related references can be found in [3] and [4].

If strings propagate only in AdS space (the case h = 0), the described gauge fixing procedure fails. However, the Pohlmeyer scheme can be modified in a conformally invariant form [5][6][7]. Recently, this approach was effectively used in AdS_3, providing there a new set of interesting string solutions [8,9].

The Pohlmeyer scheme is usually formulated in terms of a linear system of differential equations for a basis along the string surface. This basis is formed by the tangent and orthogonal vectors to the surface and, therefore, the equations of the linear system contain components of the first and the second fundamental quadratic forms. The consistency conditions for the linear system provide dynamical equations and chirality relations for the worldsheet variables. In ref. [10] we studied general aspects of Pohlmeyer reduction for AdS strings in arbitrary dimensions. Motivated by the Alday-Maldacena conjecture [11], we extended the Pohlmeyer scheme to spacelike surfaces, where the chiral conditions are replaced by holomorphic ones. To simplify the discussion, we turned the worldsheet chiral (or holomorphic) functions into constants. Locally this is always allowed due to the conformal freedom. However, information about some nontrivial string worldsheets might be encoded in the global properties of chiral or holomorphic functions (as in [9]) and, therefore, such solutions could be lost by a simple gauging.

Gauge fixing in string theory is a subtle problem even in flat spacetime. It appears that the standard lightcone gauge string surfaces in 3d Minkowski space are singular. Namely, the induced metric tensor is degenerate at some points or lines of the surface, and the scalar curvature diverges there.
Among the new solutions constructed in [8] there are the spiky strings [12], which become important ingredients in AdS/CFT correspondence [13][14][15]. Note that the spiky singularities also correspond to the degeneracy of the induced metric tensor. A natural question related to non regular string surfaces is to understand the character of singularities and their role in quantized string theory. In the present paper we study this problem for timelike closed strings in three dimensions. We start with the analysis of the flat case. Using a gauge fixing for the components of the fundamental quadratic forms, we integrate the linear system of the Pohlmeyer scheme and realize that the obtained surfaces are associated with the standard lightcone gauge strings. Then we shown that the chiral u(z) and the antichiralū(z) components of the second quadratic form do not have fixed signs in the lightcone gauge. To analyze the sector with fixed signs of u(z) andū(z) we use the gauge, which turns these functions to constants u(z) → λ,ū(z) →λ. The consistency condition in this gauge reduces to the Liouville equation, and the periodicity of closed string worldsheets fixes the monodromy class of Liouville fields by the matrixes ±I. These are singular Liouville fields, which are parameterized by the Virasoro coadjoint orbits of the vacuum configurations T (z) = −n 2 /4 andT (z) = −n 2 /4, where n andn are nonzero integers [16]. The vacuum configurations of the Liouville field describe oscillating circular and rotating spiky strings. The shape of a spiky string configuration at a fixed time essentially depends on the relative sign between λ andλ, as well as on the values of n andn. The Virasoro coadjoint orbits define internal degrees of freedom of spiky strings, providing their 'conformal deformations'. In Section 3 we consider strings in AdS 3 . Here we 'improve' the integrability of string dynamics by adding the WZ-term to the action, which turns the system to the SL(2, R) WZW theory with Virasoro constraints [17]. This model, called SL(2, R) or SU(1, 1) string, was a subject of intensive study in the 90's (see [1] for references). We apply again the Pohlmeyer type scheme. Using the isometry between sl(2, R) and R 1,2 , the scheme is formulated in a form equivalent to 3d Minkowski case, which allows to use some results of the previous classification here. Namely, the Kac-Moody currents have the same parameterization as the worldsheet tangent vectors in R 1,2 . The next step is the integration of the Kac-Moody currents to string worldsheets embedded in SL(2, R). The main difference with the flat case arises just at this point. Another difference is related to the calculation of the worldsheet metric, which involves the string solution, and not only the chiral currents, as in R 1,2 . Finally, we summarize the results and discuss open problems. Some technical details are shifted to the Appendixes. 2 Closed strings in R 1,2 In this section we analyze string dynamics in 3d Minkowski space R 1,2 within the scheme of Pohlmeyer reduction [2]. This approach sheds new light to known results of string theory in flat spacetime. Pohlmeyer scheme The Pohlmeyer scheme for string dynamics is applied in the conformal gauge where X(τ, σ) satisfies the free field equation The conformal gauge conditions (2.1) provide the relation ∂ τ X · ∂ τ X = 2∂X ·∂X. 
The timelikeness of the string worldsheet implies ∂ τ X · ∂ τ X < 0 and, therefore, the non zero component of the induced metric tensor can be parameterized by However, it has to be noted that the induced metric can be degenerated (∂X ·∂X = 0) at some points or lines of the string worldsheet, where the tangent vector ∂ τ X becomes lightlike. These singular points correspond to α → −∞, though the functions X µ (τ, σ) (µ = 0, 1, 2) remain smooth (differentiable) there. To follow the Pohlmeyer scheme, we introduce a basis (B,B, N) in R 1,2 formed by the vectors B = ∂X,B =∂X and N, which is an unit vector orthogonal to the string surface Then, from (2.1)-(2.4) one finds the following linear system of equations for the 'moving' basis along the string world sheet where u andū are the components of the second fundamental form on the worldsheet (2.6) The consistency conditions for the linear system (2.5) arē ∂∂α + e −α uū = 0 , (2.7) ∂ū = 0 ,∂u = 0 . All these equations are invariant under the conformal transformations together with X(z,z) → X ζ(z),ζ(z) and N(z,z) → N ζ(z),ζ(z) . The functions ζ(z) andζ(z) here are monotonic and they satisfy the monodromy conditions This symmetry is a remnant of the reparameterization invariance of string theory in the conformal coordinates. One can use this invariance to remove remaining non physical degrees of freedom and simplify the integration procedure. We follow this scheme in the next two subsections. The functions u(z) andū(z) are assumed smooth and periodic, like the components of the tangent vectors B µ (z) andB µ (z). We specify two different classes of u(z),ū(z). The first class is formed by the functions which change signs in the interval of periodicity, whereas u(z) andū(z) have no zeros for the second class. The gauge fixing conditions differ for these classes are different. After integration of the linear system (2.5) we realize that the first class corresponds to the standard lightcone gauge and the second class describes spiky and oscillating circular strings. Lightcone gauge The scheme proposed in this subsection is quite similar to the one used in [18] and [8]. Before integration of the linear system (2.5) we have to find solutions of the consistency conditions (2.7). These conditions are satisfied by the following simple parameterization The aim is to describe the class of functions f andf, which lead to periodic X(τ, σ). Note that solutions of the linear system (2.5) with periodic coefficients, in general, are only quasi-periodic. Therefore, the periodicity of the functions u,ū and e α is a necessary, but not a sufficient condition for periodicity of X(τ, σ). The integration of the linear system (2.5) with u,ū and α given by (2.11) is done in Appendix A and it leads to Here e, e + , e − are (z,z)-independent R 1,2 -valued vectors, which arise as integration constants in solving (2.5). The map from the vectors (e, e + , e − ) to (B,B, N) is invertible and the orthonormality conditions for the basis (B,B, N) is equivalent to (2.14) These conditions are realized by where e µ (µ = 0, 1, 2) is the standard orthonormal basis in R 1,2 , with e ν µ = δ ν µ . The scalar and exterior products of these basis vectors e µ · e ν = η µν = diag(−1, 1, 1) , e µ × e ν = ǫ µν ρ e ρ , (2.16) are given by the metric η µν and the Levi-Civita ǫ µνρ tensors, respectively. With ǫ 012 = 1, the algebra of exterior products yields e × e ± = ± e ± and 2e − × e + = e. 
Using then (2.11), the normal vector (2.41) can be written in a Lorentz invariant form Other realizations of (2.14) are obtained by Lorentz transformations of (2.15). The tangent vectors to a closed string worldsheet B = ∂X andB =∂X are periodic chiral and antichiral vector functions respectively. Hence, the functions f (z) andf(z) in (2.12) have to be periodic as well and they enjoy the Fourier mode expansions (2. 18) In addition, the periodicity of X(τ, σ) requires equality of the zero modes of B(z) and B(z). These conditions are given by Eqs. (2.18)- (2.19) define the class of f (z),f(z) leading to periodic X(τ, σ). These string surfaces, besides f (z) andf (z), depend on the parameters of Lorentz transformations of the basis (2.15) and also on three integration constants related to the final equations ∂X = B and∂X =B. The general solution of the consistency conditions (2.7), given by [8] depends on two chiral (u, Φ) and two antichiral (ū,Φ) functions. The parameterization (2.11) defines a 'constraint surface' in the space of fundamental quadratic forms and it can be treated as a gauge fixing condition. In fact, a gauge fixing condition in (2.20) can be written in the form where a is a constant. Then, with f (z) = aΦ(z) andf (z) = −aΦ(z) we obtain (2.11). In order to find independent parameterizing variables of string surfaces X(τ, σ), it is important to analyze the remaining freedom of conformal transformations in (2.11). This analysis is done in Appendix B. It shows that the freedom of conformal transformations in (2.11) is described by three parameters φ 0 ,φ 0 and c. The first two correspond to translations in the chiral and antichiral sectors. The transformations parameterized by c are f,f dependent. Their infinitesimal form is defined by The variable c could be included in the infinitesimal parameter ε, however, it is more convenient to keep this form. In Appendix B we also show that the conditions (2.19) are invariant under these conformal transformations. We use the remaining conformal symmetry to reduce the number of parameterizing variables. Let's consider Lorentz transformations of the basis (2.15). A boost in X 1 -direction transforms the basis (e + , e − , e) to (P e + , P −1 e − , e), where P > 0 and θ = log P is the boost parameter. The corresponding tangent vectors (2.12) become P dependent This destroys the structure of (2.23). In particular, the e + -components of the transformed tangent vector B ε is not constant anymore. The same is valid forB ε . Here we use the remaining conformal freedom (2.22). An infinitesimal transformation z → z + εφ(z) corresponds to 30) and the coefficient of e + becomes constant with which is an allowed conformal transformation (2.22) with c = −2 P −1 . The transformed constant coefficient of e + is equal to P +2εp and it is easy to check that the transformed coefficients of e − and e are related as in (2.23). Summarizing the discussion on Lorentz transformations of the basis (2.15), we conclude that eq. (2.23) indeed describes the general form of the tangent vectors. From (2.23) follows that ∂ τ X + = P and ∂ σ X + = 0, where X + is the e + component of X(τ, σ). These are the standard light cone gauge conditions in 3d bosonic string. The above mentioned integration constants of the equations ∂X = B and∂X =B, together with the freedom in (φ 0 ,φ 0 )-translations, describe the coordinate zero modes of X(τ, σ) in the lightcone gauge. 
Thus, the parameterization of the first and the second fundamental forms by (2.11), after factorization of the remaining conformal symmetry, corresponds to the lightcone gauge. Now we analyze the conformal factor of the metric tensor e α , defined by (2.11). The functionf − f is given as a sum of the non-zero Fourier modes ā n e −in(τ −σ) − a n e −in(τ +σ) , (2.32) and its integration by σ around the unit circle vanishes. This means thatf (z) − f (z) has not a fixed sign. The points where this function vanishes correspond to the above mentioned degeneracy of the induced metric, i.e. ∂ τ X · ∂ τ X = 0 = ∂ σ X · ∂ σ X and α → −∞. Calculating the tangent vector ∂ σ X = B −B, from (2.12) we find which vanishes atf(z) − f (z) = 0. Note that the normal vector (2.13) diverges at these points . Since this vector has the unit norm, it diverges in the lightlike direction. The worldsheet scalar curvature, calculated in the conformal coordinates, is given by R = −2e −α∂ ∂ α. In the parameterization (2.11), it takes the form which is singular atf (z) −f (z) = 0. This singularity can not be removed by coordinate transformations. Hence, the lightcone gauge string surfaces in three dimensions are always singular. In higher dimensions, the conformal factor of the induced metric tensor in the lightcone gauge is given by where the summation index a corresponds to the transverse (to e + and e − ) coordinates. Eachf a − f a has the structure (2.32) and, therefore, they vanish at some points. But if these points for different a's do not coincide, α is globally regular. The Fourier modes in (2.18) are canonical variables, which are used for the quantization of the lightcone bosonic string [19]. The key point for a consistent quantization is to check the commutation relations of the Poincare group generators. This calculation in an arbitrary dimension of spacetime is non-trivial only for the commutators of the Lorentz transformations [M − a , M − b ], where a and b are indices for the transverse coordinates. The Poincare symmetry requires vanishing of these commutator, which in 3 dimensions is trivially fulfilled, since there is only one transverse coordinate. So, there is no quantum anomaly in the Poincare algebra of the lightcone quantized 3d bosonic string. However, it appears that there is an additional class of string solutions in three dimensions, which is not covered by the lightcone gauge strings. Before introducing the new class, we describe those properties of u andū, which distinguish the classes. These functions have vanishing zero modes since u = f ′ ,ū =f ′ and f,f are periodic. Note that if u = 0,ū = 0, the tangent vectors (2.23) become constants and the string surface degenerates to a massless particle trajectory. Neglecting this degenerated case, from (2.36) follows that u(z) andū(z) change signs in the interval of periodicity. This property is obviously invariant under the conformal transformations (2.8). Thus, the lightcone gauge describes the string surfaces with changing signs of u(z) andū(z). In the next subsection we show that the class of string surfaces with fixed signs of u(z) andū(z) is not empty, and this class describes oscillating circular and rotating spiky strings. There is an additional class of u(z),ū(z), which have zeros, but do not change signs there. It is an 'intermidiate' class between the lightcone and spiky strings. The corresponding surfaces have different type of singularities, which 'move' in the lightcone directions around the (τ, σ)-cylinder. 
We do not consider this class in this paper.

Liouville gauge

Suppose u(z) and ū(z̄) have no zeros. Such functions can be transformed to constants u(z) → λ, ū(z̄) → λ̄ by the conformal transformation (2.9). The dynamical variables λ and λ̄ have the same signs as u and ū, respectively, and their moduli are given by conformal invariants which easily follow from the monodromy properties of ζ and ζ̄. The choice of constant u(z) and ū(z̄) fixes the conformal gauge freedom up to the zero modes of ζ and ζ̄. We call this choice the Liouville gauge, since the corresponding consistency condition (2.7) reduces to the Liouville equation

∂∂̄α + λλ̄ e^{−α} = 0 .

The general solution of this equation is given by

e^{−α} = 2F′(z)F̄′(z̄) / ( |λλ̄| (ǫF(z) + ǭF̄(z̄))² ) , (2.39)

where F, F̄ are monotonic functions, F′ > 0, F̄′ > 0, and ǫ = sign λ, ǭ = sign λ̄. For symmetry reasons, we treat all four possibilities of (ǫ, ǭ) simultaneously, though the general solution (2.39) depends only on the sign of ǫǭ.

The integration of the linear system (2.5) with α given by (2.39) and u(z) = λ, ū(z̄) = λ̄ can be done similarly to the lightcone gauge. Repeating the same steps as before (see Appendix A), we obtain the tangent vectors (2.40). Here (e, e_+, e_−) are again R^{1,2}-valued integration constants with the same orthonormality conditions (2.14).

The space of Liouville fields (2.39) is invariant under the transformations (2.42). These are the conformal transformations of Liouville theory. In spite of the similarity, there is an essential difference between the conformal transformations (2.8) and (2.42). Namely, the transformations (2.8) describe the freedom in the choice of conformal coordinates and they do not change the string surface, whereas (2.42) acts on the Liouville fields and changes the data of the linear system (2.5), which is not a worldsheet reparameterization. Note also that the conformal weight of e^{−α} is equal to 1 by (2.42) and −1 by (2.8). To avoid confusion between these two conformal transformations, we use for (2.42) and the related maps in Liouville theory the name Virasoro transformations.

Let us discuss the regularity issue of closed string worldsheets in the Liouville gauge, related to peculiarities of the Liouville field α(τ, σ). It is well known that a globally regular Liouville field on a cylindrical spacetime exists only for ǫǭ = −1 and belongs to the hyperbolic monodromy [20]. The parameterizing functions of this monodromy class satisfy the conditions F(z + 2π) = e^P F(z) and F̄(z̄ + 2π) = e^P F̄(z̄), with P > 0. However, these conditions do not correspond to periodic tangent vectors (2.40). This means that the linear system (2.5) does not provide a closed string configuration in the Liouville gauge if the Liouville field on the cylinder is regular. There is a class of singular Liouville fields with a regular stress tensor of the theory and some other remarkable properties [16,21,22]. Singularities of these fields correspond to zeros of the exponent e^α. The equation e^{α(τ,σ)} = 0 is equivalent to ǫF(z) + ǭF̄(z̄) = 0 and it defines non-intersecting, smooth lines on the (τ, σ)-manifold. It appears that this type of singular Liouville fields on the cylinder can match the periodicity conditions of closed string dynamics. The authors of ref. [16] gave a complete classification of periodic Liouville fields by the coadjoint orbits of the Virasoro algebra. This classification is based on the analysis of the Schrödinger (Hill) equation with a periodic potential given by the stress tensor of Liouville theory. We use these results here to select an appropriate class of Liouville fields.
In Appendix C we give a list of relations in Liouville theory which are helpful for understanding the technical details below. The solutions of the Hill equations (C.4) in the chiral and the antichiral sectors are denoted by ψ(z), χ(z) and ψ̄(z̄), χ̄(z̄), respectively. They are normalized by the unit Wronskians (C.5). The parameterization of the functions F(z) and F̄(z̄) in terms of these solutions (C.7) defines the form (2.43) of the tangent vectors (2.40). The periodicity conditions B(z + 2π) = B(z), B̄(z̄ + 2π) = B̄(z̄) require conditions which correspond to the monodromy matrix ±I. In the classification of the coadjoint orbits this class is denoted by E_±. Its typical representatives are given by (2.45), where k and k̄ are positive integers. They count the number of zeros of these functions in the interval [0, 2π). The coefficients in front of the sin- and cos-functions correspond to the normalization of the Wronskians (C.5). The corresponding Liouville field configurations are associated with vacuum solutions, since the stress tensor for these fields is constant. The general representatives of E_± are obtained by the Virasoro transformations of the functions (2.45) with the conformal weight −1/2. Thus, the acceptable class of Liouville fields is associated with the Virasoro group orbits of the vacuum configurations.

Let us consider the string configurations related to (2.45) in more detail. In this case the tangent vectors (2.43) take the form (2.48). In the notations introduced there, Λ and Λ̄ are positive and n, n̄ are nonzero integers. It is easy to see that the periodicity of X(τ, σ) requires Λ = Λ̄. To proceed, we choose the constant basis vectors as in (2.15) and rewrite eq. (2.48) in the form (2.51). The integration of these tangent vectors, up to translations, yields the string surface (2.52), and the spatial part is described by the function (2.54). The conformal factor (2.3) of the induced metric vanishes at the points (2.56). The zeros of e^α correspond to ∂_τX · ∂_τX = 0 = ∂_σX · ∂_σX. From eq. (2.51) it follows that the vector ∂_τX = B + B̄ is indeed lightlike at the singular points (2.56) and that ∂_σX = B − B̄ vanishes there. The conditions ∂_σX = 0 and the lightlikeness of ∂_τX are the boundary conditions for an open string. Therefore, the singular points on the worldsheet look like the end points of an open string and they have a spiky character. According to (2.56), the number of spikes for a fixed τ is equal to |n − n̄|.

The case n̄ = n is special and we consider it separately. The string worldsheet (2.52) for n̄ = n reduces to (2.57). The corresponding string configuration at a fixed τ is a circle on the (X_1, X_2)-plane with the center at the origin. The radius of the circle oscillates in time. At τ = (m + 1/2)π/n, m ∈ Z, the circle shrinks to the origin. Thus, the case n̄ = n describes oscillating circular strings without spikes.

Now we consider the general case with arbitrary n and n̄, except n̄ = n. From eq. (2.54) follows the relation (2.58), and Z(σ) = Z(0, σ) corresponds to the string configuration at τ = 0. The shift of the argument σ by ω₀τ does not change the string shape. Therefore, eq. (2.58) describes a string rotating around the origin with the frequency ω. The sign of ω defines the direction of the rotation. Thus, the initial shape is preserved in dynamics. Using again (2.52), the shape of the string configuration at τ = 0 can be written as (2.60). This curve, given as a composition of two 'rotations', is known as an epicycloid if n and n̄ have the same sign, and a hypocycloid if their signs are opposite. The rotation parameter is σ.
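Since the explicit normalization in (2.60) is not reproduced above, the following sketch assumes the generic two-rotation form Z(σ) = Λ(e^{inσ}/n − e^{in̄σ}/n̄); this is an assumption, chosen because it reproduces the stated qualitative features (|n − n̄| spikes, epicycloid for nn̄ > 0, hypocycloid for nn̄ < 0, and degeneration to a straight segment at n + n̄ = 0):

```python
import numpy as np
import matplotlib.pyplot as plt

def spiky_shape(n, nbar, Lam=1.0, num=2000):
    """Assumed two-rotation form of the curve (2.60): frequencies n, nbar
    and radii Lam/|n|, Lam/|nbar|; spikes occur where Z'(sigma) = 0."""
    sigma = np.linspace(0.0, 2.0 * np.pi, num)
    return Lam * (np.exp(1j * n * sigma) / n - np.exp(1j * nbar * sigma) / nbar)

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, (n, nbar) in zip(axes, [(5, 1), (5, -1)]):   # nnbar > 0 vs nnbar < 0
    Z = spiky_shape(n, nbar)
    ax.plot(Z.real, Z.imag)
    ax.set_title(f"n={n}, nbar={nbar}: {abs(n - nbar)} spikes")
    ax.set_aspect("equal")
plt.show()
```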
The integers n and n̄ define the rotation frequencies, and the rotation radii are given by their inverse numbers, up to the scale factor Λ. In Appendix D we give several plots of Z(σ) for different values of n and n̄. Below we list some properties of the curves (2.60), which help to understand the structure of these plots and also to visualize the general case.

2. The following inequalities (2.62) hold for σ ∈ (σ_m, σ_{m+1}).

3. The spike corresponding to σ_m has the direction of the radius vector at Z(σ_m) if nn̄ < 0; the direction of the spike is opposite to the radius vector if nn̄ > 0.

5. When σ changes from σ_m to σ_{m+1}, the polar angle of Z(σ) rotates by

Δφ = 2π min(|n|, |n̄|) / |n − n̄| . (2.64)

According to (2.62) and (2.63), the spikes are the farthest points from the origin if nn̄ < 0, and they are the nearest ones if nn̄ > 0. These two properties easily follow from eq. (2.60). The proof of the other properties and some additional information about the spiky strings is given in Appendix E. The derivation of eq. (2.64) there does not apply to the case n + n̄ = 0 (see eq. (E.9)); we consider this case here separately. The properties 1-5 help to visualize these curves and draw them qualitatively. Properties 1 and 3 provide the positions of the spikes and their directions, respectively. The other properties (2, 4 and 5) define how the spikes are connected by smooth curves. The form of these curves and the direction of the spikes essentially depend on the sign of nn̄, as one can see in Appendix D. Since these curves correspond to a composition of two rotations, they arise in the description of simple mechanical systems. Their properties were investigated a long time ago and can be found in the literature; here we present them for completeness. The string solutions given as rotating hypocycloids and epicycloids were first obtained in [23] as a model of hadrons. Later these solutions were rediscovered in cosmic strings and in the AdS/CFT correspondence by different authors. A list of references and interesting comments can be found in a recent letter [24].

To integrate these vectors to a periodic X(τ, σ), their zero modes have to be equal. This equality relates ζ and ζ̄ by three (one for each vector component) conditions. The induced metric now is degenerate at nζ(z) + n̄ζ̄(z̄) = (2m + 1)π (m ∈ Z), and the tangent vector ζ′(z)∂X − ζ̄′(z̄)∂̄X vanishes there. Let us consider an infinitesimal transformation of this type. It is easy to check that the corresponding B(z) in (2.67) is a Lorentz transformed vacuum vector from (2.51). In particular, ε_0 becomes an infinitesimal rotation angle in the (X_1, X_2) plane, whereas ε_1 and ε_2 are infinitesimal boost parameters in the X_1 and X_2 directions, respectively. Thus, the Virasoro transformations of the parameterizing Liouville field contain the Lorentz transformations of the target space as a subgroup.

As was mentioned in the previous subsection, the lightcone gauge does not cover the string solutions of the Liouville gauge. Therefore, the complete quantum picture of R^{1,2} strings requires quantization of singular Liouville fields, which is an open and challenging problem in its own right.

Closed strings in SL(2, R)

We start this section with a standard approach to string dynamics in AdS spaces. Then we pass from AdS_3 to SL(2, R) group valued variables and introduce the chiral structure of WZW theory there. This enables us to formulate the Pohlmeyer type scheme in the same manner as for R^{1,2}.
String dynamics in AdS_3

AdS_3 is realized as the hyperbola (3.1). The choice of conformal coordinates on a timelike string worldsheet Y(τ, σ) assumes the conditions (3.2), and the nonzero element of the induced metric tensor is parameterized as in (2.3) (see (3.3)). String dynamics in the conformal gauge is described by a Lagrangian containing a Lagrange multiplier Λ. Its elimination from the equation of motion yields (3.5). The Pohlmeyer scheme for this system leads to the sinh-Gordon equation for the α field [5,6]. Though the system is formally integrable, one can write in explicit form only the string solutions corresponding to the sinh-Gordon solitons [8] (see also [25]). To improve the integrability of the system by the structure of WZW theory [26], we use the isometry between AdS_3 and the SL(2, R) group manifold.

Map to SL(2, R) and WZW theory

First we describe the isometry between the sl(2, R) algebra and R^{1,2}. Let us introduce the basis (3.6) in sl(2, R). These three matrices (t_µ, µ = 0, 1, 2) satisfy the relations (3.7), where η_µν = diag(−1, 1, 1) is the metric tensor of 3d Minkowski space, I denotes the unit matrix and ǫ_µνρ is the Levi-Civita tensor with ǫ_012 = 1 (see (2.16)). The expansion of a ∈ sl(2, R) in the basis (3.6), a = a^µ t_µ, provides a map a → a^µ from sl(2, R) to R^{1,2}. The inner product in sl(2, R), introduced by the normalized trace ⟨a b⟩ = ½ tr(a b), leads to ⟨t_µ t_ν⟩ = η_µν and makes this map isometric. A helpful remark is in order here. The transformations of the adjoint representation a → g a g^{−1} (g ∈ SL(2, R)) preserve the inner product in sl(2, R). Therefore, the matrices Λ^µ_ν = ⟨t^µ g t_ν g^{−1}⟩ define Lorentz transformations of R^{1,2}. Since SL(2, R) is connected, ⟨t_µ g t_ν g^{−1}⟩ ∈ SO↑(1, 2) and ⟨t_0 g t_0 g^{−1}⟩ ≥ 1.

Now we consider the map (3.8) from Y ∈ AdS_3 to g ∈ SL(2, R), which provides the equivalence between eq. (3.1) and the condition det g = 1. Eq. (3.8) can be written in the form g = Y_0 I + Y^µ t_µ; therefore, the inverse map is easily found. Note that g^{−1} = Y_0 I − Y^µ t_µ. Using these compact forms of g and g^{−1} in terms of the t_µ matrices, and the algebraic relations (3.7), one easily checks (3.10). Due to this isometry, the conformal gauge conditions (3.2) are equivalent to

⟨g^{−1}∂g g^{−1}∂g⟩ = 0 = ⟨g^{−1}∂̄g g^{−1}∂̄g⟩ , (3.11)

and the parameterization of the nonzero component of the worldsheet metric by (3.3) can be written as

⟨g^{−1}∂g g^{−1}∂̄g⟩ = −e^α . (3.12)

The AdS_3 string dynamics in the conformal gauge is described by the action (3.13). Its variation leads to an equation of motion which corresponds to (3.5), together with (3.1) and (3.3). To get the equations of WZW theory [26],

∂̄(∂g g^{−1}) = 0 = ∂(g^{−1}∂̄g) , (3.15)

one has to add to the action (3.13) the WZ-term, which is the volume integral of the 3-form H = (1/3)⟨g^{−1}dg ∧ g^{−1}dg ∧ g^{−1}dg⟩. This form on SL(2, R) is exact, H = dF, with F given by (3.16). Note that this 2-form is globally well defined due to the remark above. Then, with Stokes' theorem, the action of the SL(2, R) WZW theory is given by a surface integral of the Lagrangian (3.17) [27]. The Euler-Lagrange equations obtained from (3.17) reproduce the equations of WZW theory (3.15). Adding to these equations the conformal gauge conditions (3.11), one gets a system called the SL(2, R) string [17]. In the next subsection we investigate this system by the Pohlmeyer scheme.
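The inner-product relations above can be checked numerically with a 2×2 realization of the basis; the explicit matrices below are one conventional choice and are an assumption, since (3.6) is only referenced here:

```python
import numpy as np

# One conventional 2x2 realization of the sl(2, R) basis (an assumption;
# it reproduces the inner products <t_mu t_nu> = eta_mu_nu stated above).
t0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
t1 = np.array([[0.0, 1.0], [1.0, 0.0]])
t2 = np.array([[1.0, 0.0], [0.0, -1.0]])
basis = [t0, t1, t2]

# <a b> = (1/2) tr(a b) should reproduce eta = diag(-1, 1, 1).
eta = np.array([[0.5 * np.trace(a @ b) for b in basis] for a in basis])
print(eta)   # [[-1. 0. 0.], [ 0. 1. 0.], [ 0. 0. 1.]]

# With this choice, det(Y_0*I + Y^mu t_mu) reproduces the quadratic form
# whose unit level set is the AdS_3 hyperbola (3.1).
```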
Pohlmeyer scheme for the SL(2, R) string

Before starting the Pohlmeyer scheme, note that a reparameterization invariant description of the system, yielding both the equation of motion (3.15) and the constraints (3.11), is given by a generally covariant action. Here ξ^a (a = 0, 1) are worldsheet coordinates, h is the determinant of the worldsheet metric tensor h_ab, h^{ab} is its inverse and ǫ^{ab} is the 2d Levi-Civita tensor with ǫ^{01} = 1. In the conformal gauge h_ab ∼ diag(−1, 1), we indeed obtain eqs. (3.11) and (3.15), with ξ^0 = τ and ξ^1 = σ.

Let us consider the Kac-Moody currents, which, according to (3.15), satisfy the chirality conditions. The parameterization of the induced metric (3.12) in terms of these currents reads as in (3.21). Due to the isometry between sl(2, R) and R^{1,2}, J(z) and J̄(z̄) are associated with lightlike vectors, as B(z) = ∂X and B̄(z̄) = ∂̄X in R^{1,2}. Similarly to the R^{1,2} case, we consider the inner product of the Kac-Moody currents ⟨J J̄⟩. Taking into account that J and J̄ are lightlike and that g J̄ g^{−1} corresponds to a proper Lorentz transformation of J̄, one finds that ⟨J J̄⟩ and ⟨J g J̄ g^{−1}⟩ have the same sign. Therefore, we can use the parameterization (3.23). Thus, the data for the Kac-Moody currents (J, J̄) and the tangent vectors (B, B̄) are similar. Using then the isometry between sl(2, R) and R^{1,2}, one gets the same linear system (3.24) for a moving basis as (2.5). Here K is an sl(2, R) valued unit vector, orthogonal to J and J̄, and v, v̄ are defined similarly to the coefficients of the second fundamental form (2.6). The consistency conditions for the linear system (3.24) coincide with (2.7).

Due to the equivalence with the R^{1,2} case, we use the same gauge fixing conditions. The first corresponds to the parameterization (3.28), and the second to the Liouville gauge with

v(z) = λ , v̄(z̄) = λ̄ , ∂∂̄β + λλ̄ e^{−β} = 0 . (3.29)

As we will see in the next subsection, the conditions (3.28) describe the nilpotently gauged WZW theory. On the level of solutions of the linear system, the pair (J, J̄) is completely equivalent to (B, B̄), and we can use the solutions (2.12) and (2.43). The next step is the construction of string worldsheets g(z, z̄). The WZW field splits into the product of chiral (left) and antichiral (right) fields

g(z, z̄) = g_l(z) g_r(z̄) , (3.30)

and to find the string worldsheet one has to integrate the equations (3.31). Then the induced metric (3.21) can be calculated via (3.32). We realize this programme in the following two subsections. Some helpful formulas for SL(2, R) calculations are presented in Appendix F.

Nilpotent gauge

Let us consider the gauge (3.28). The isometry between sl(2, R) and R^{1,2} relates the standard orthonormal bases of these spaces, t_µ ↔ e_µ, µ = (0, 1, 2). The basis (2.15) then has an sl(2, R) counterpart, and similarly to (2.12) we obtain the Kac-Moody currents (3.34). The matrices t_± = ½(t_0 ± t_1) are nilpotent elements (t_±² = 0) of the sl(2, R) algebra. The currents (3.34) have constant components in the t_+ direction, equal to 1 (J_+ = 1 = J̄_+). Note that the transformation to the basis (P e_+, P^{−1} e_−, e) used in (2.23) is equivalent to the rescaling of the t_+ and t_− components in (3.34) by P and P^{−1}, respectively.

It is well known that the nilpotent gauging of the SL(2, R) WZW model leads to Liouville theory [28]. This gauging corresponds to the Hamiltonian reduction with constant J_+(z) and J̄_+(z̄), similarly to (3.34). The Kac-Moody currents (3.34) satisfy the Virasoro constraints (3.22) as well. Constant J_+(z), J̄_+(z̄) and the Virasoro constraints together form second class constraints.
Thus, these two sets of constraints are complementary to each other and they provide a coset construction. Applying the reduction scheme used in coset WZW models, we write the chiral and antichiral parts of the WZW-field in matrix form and from (3.31) find the corresponding relations and the Hill equations (3.37), which are satisfied by the components of the first row of g_l(z) and the second column of g_r(z̄). These components are invariant under the gauge transformations generated by the nilpotent currents. The unimodularity conditions det g_l(z) = 1 = det g_r(z̄) provide the unit Wronskians (3.38) as the normalization conditions for the solutions of (3.37). Thus, we get the WZW-field parameterized by the gauge invariant chiral (ψ, χ) and antichiral (ψ̄, χ̄) fields, which are related by the unit Wronskians (3.38). The g_12 matrix element of the WZW-field, g_12(z, z̄) = ψ(z)ψ̄(z̄) + χ(z)χ̄(z̄), is also gauge invariant and it is identified with the Liouville field exponent V(z, z̄) of conformal weight −1/2 (see Appendix C). The potentials in the Hill equations form the stress tensor (3.40) of Liouville theory. On the other hand, the WZW-field (3.39) describes a string surface in SL(2, R), with the worldsheet induced metric (3.32) obtained from (3.39). In this way we parameterize the SL(2, R) string surfaces by the chiral and antichiral functions of Liouville theory.

However, the fact that the components of the stress tensor (3.40) are given as derivatives of periodic functions imposes certain restrictions on the allowed Liouville fields. Namely, the chiral and antichiral energy functionals of Liouville theory, given by the integral of the stress tensor over the period, have to vanish. These functions correspond to the parabolic monodromy with M = I + 2π t_+. Inserting them in (3.39) with f(z) = c and f̄(z̄) = c̄, one gets a WZW-field which depends only on τ and describes a particle trajectory, like constant f(z) and f̄(z̄) in R^{1,2}. In general, the Hill equation cannot be integrated explicitly for an arbitrary potential. Therefore, in contrast to R^{1,2}, the functions f and f̄ are not convenient parameterizing variables for string surfaces in SL(2, R). Usually, it is more helpful to parameterize the functions (ψ, χ) and (ψ̄, χ̄) directly and express f and f̄ through them.
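Before turning to the Liouville gauge, the vacuum parameterization can be checked symbolically. The sketch below assumes the representatives (2.45) take the form ψ_n(z) = √(2/n) sin(nz/2), χ_n(z) = √(2/n) cos(nz/2); this explicit form is an assumption, chosen to be consistent with the Wronskian normalization and the constant vacuum stress tensor mentioned in the text:

```python
import sympy as sp

z = sp.symbols("z", real=True)
n = sp.Symbol("n", positive=True, integer=True)

# Assumed form of the vacuum representatives (2.45), normalized so that
# the Wronskian psi'*chi - psi*chi' equals 1.
psi = sp.sqrt(2 / n) * sp.sin(n * z / 2)
chi = sp.sqrt(2 / n) * sp.cos(n * z / 2)

wronskian = sp.simplify(sp.diff(psi, z) * chi - psi * sp.diff(chi, z))
stress = sp.simplify(sp.diff(psi, z, 2) / psi)   # T(z), with psi'' = T psi

print(wronskian)   # 1
print(stress)      # -n**2/4, the constant vacuum stress tensor
```

One can also note that ψ(z + 2π) = (−1)^n ψ(z) for this choice, so the monodromy matrix is ±I, as required for the class E_±.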
(3.31) reduces to (3.56). The integration is then straightforward and leads to the WZW-field (3.57), where $g_0$ is an SL(2, R) valued integration constant and

$a = (2\Lambda - n)\, t_0 + 2\Lambda\, t_1$, $\bar a = (2\bar\Lambda - \bar n)\, t_0 - 2\bar\Lambda\, t_1$.   (3.58)

The periodicity of (3.57) imposes the condition (3.59). Equations (3.57)-(3.59) define the WZW-fields $g(z, \bar z)$ and, thereby, the string surfaces in SL(2, R). However, to describe the structure of these surfaces in detail, similarly to the flat case, additional labour is needed. If $\epsilon = -1$, $n \pm \bar n$ and $\theta \pm s\bar\theta$ are odd. Therefore, instead of (3.75) we use the notations

$\theta - s\bar\theta = 2\nu + 1$, $\theta + s\bar\theta = 2\mu + 1$, $n - \bar n = 2k + 1$, $n + \bar n = 2l + 1$,   (3.77)

again with integer $\nu$, $k$ and $l$. The solution (3.69) in this case becomes (3.78). One has to remember that the parameters of the solutions are restricted by (3.79), where the last two inequalities follow from (3.72). If these conditions are not fulfilled, the coefficients $A_\pm$ and $B_\pm$ are either singular or vanishing. The vanishing of the coefficients corresponds to the degenerate case $\Lambda = 0 = \bar\Lambda$. The formulation of the conditions (3.79) in terms of the new parameters $(\mu, \nu, k, l)$ is more complicated. Therefore, sometimes it is more convenient to keep the old parameters $(n, \bar n, \theta, \bar\theta)$. The spatial part of $AdS_3$ is given by the complex plane $Z$. Due to the similarity with the flat space, the function $Z$ in (3.69) can be described in the same manner as (2.54) for $R^{1,2}$ strings. The case $\theta - s\bar\theta = 0$ is special, like $n - \bar n = 0$ in $R^{1,2}$, and we consider it here separately. The condition $\theta - s\bar\theta = 0$ implies $s = 1$ and $\theta = \bar\theta$. The corresponding solutions of eq. (3.74) are $\sigma$-independent discrete values of $\tau$, as for the circular oscillating strings in $R^{1,2}$. For simplicity, let us assume $n = \bar n$ and $2\phi = -\pi$. The solution (3.69)-(3.70) then reduces to

$Z = \dfrac{n^2 - \theta^2}{2 i n \theta}\, \sin(\theta\tau)\, e^{i n \sigma}$, $Z_0 = \left[\dfrac{n^2 + \theta^2}{2 n \theta}\, \sin(\theta\tau) + i \cos(\theta\tau)\right] e^{i n \tau}$,   (3.80)

and the conformal factor (3.71) becomes (3.81). The time variable in $AdS_3$ is given by the phase of $Z_0$, which in (3.80) is only $\tau$ dependent. For a fixed $\tau$, the function $Z$ in (3.80) provides a circle with radius proportional to $\sin(\theta\tau)$. Therefore, eq. (3.81) describes a circular oscillating string like (2.57) in $R^{1,2}$. This type of string solution was obtained earlier in [33], where the authors used a Pohlmeyer type scheme for the embedding space $R^{2,2}$, and provided some non-periodic solutions as well. In the limit $\theta \to 0$, eqs. (3.80)-(3.81) reduce to (3.82). This case corresponds to a nilpotent $a$ in (3.62), and the related WZW-field belongs to the parabolic monodromy. Eqs. (3.80)-(3.81) allow a continuation to imaginary $\theta$, which leads to the hyperbolic solutions (3.83) with the conformal factor

$e^{\alpha} = \dfrac{(\theta^2 + n^2)^2}{8\theta^2}\, \sinh^2(\theta\tau)$.

Let us now assume that $\theta - s\bar\theta \neq 0$. The function $Z(\tau, \sigma)$ in (3.69) then fulfills the relation (2.58) with (3.86). As in the flat case, the dynamics in $\tau$ preserves the shape of this closed curve; effectively, it only rotates. The curve defined by (3.86) is represented again as a combination of two rotations. Therefore, the properties of the spiky strings, discussed in Subsection 2.3 and Appendix E, can be generalized to this case. Finally, we briefly describe the general case, with the Kac-Moody current (3.87) and $h_l(z) = e^{-\frac{1}{2} n \zeta(z)\, t_0}\, g_l(z)$. The change of variable $z \to \zeta$ in (3.88) yields (3.89), where $\rho(\zeta) = \left(\dfrac{dz}{d\zeta}\right)^2$. The chiral current which stands in this equation has vanishing $t_2$ and constant $t_-$ components. Therefore, it can be interpreted as a current of nilpotently gauged WZW theory in the gauge $J_2 = 0$.
This reduction is described again by Liouville theory and one can use the scheme of the previous subsection.

Summary

Here we give a summary of the paper and describe some unsolved problems. In summary, we list the following items:

1. On the basis of the isometry between $R^{1,2}$ and sl(2, R), the Pohlmeyer scheme for string dynamics in $R^{1,2}$ and SL(2, R) is formulated in equivalent forms. This equivalence maps the tangent vectors of the $R^{1,2}$ string surfaces to the SL(2, R) Kac-Moody currents, which obey the Virasoro constraints.

2. The closed string dynamics in $R^{1,2}$ is integrated within the Pohlmeyer scheme, using the parameterization (2.11) for the components of the fundamental quadratic forms. This parameterization fixes the conformal gauge freedom up to translations and a one parameter subgroup. The factorization of the set of solutions by the remaining conformal freedom provides the string surfaces in the lightcone gauge. These surfaces have a degenerate induced metric, and the chiral components of the second fundamental form $u(z)$, $\bar u(\bar z)$ do not have fixed signs in the interval of periodicity.

3. The second class of closed string surfaces in $R^{1,2}$ corresponds to the case with non-vanishing $u(z)$ and $\bar u(\bar z)$. In the gauge where $u(z)$ and $\bar u(\bar z)$ are constant, the Gauß equation reduces to the Liouville equation and, on the basis of its general solution, the linear system of the Pohlmeyer scheme is integrated in the form (2.40) and (2.43). The periodicity condition is satisfied by the class of singular Liouville fields with the monodromy matrix $\pm I$. This class is parameterized by the Virasoro coadjoint orbits, which are labeled by two nonzero integers $(n, \bar n)$. The signs of $n$ and $\bar n$ coincide with the signs of $u$ and $\bar u$, respectively. The vacuum Liouville fields with constant stress tensor $T(z) = -\frac{n^2}{4}$, $\bar T(\bar z) = -\frac{\bar n^2}{4}$ describe oscillating circular (if $n = \bar n$) and rotating spiky (if $n \neq \bar n$) strings. The number of spikes is equal to $|n - \bar n|$, and the shape of the string configurations at a fixed time essentially depends on the sign of $n\bar n$.

4. The tangent vectors of $R^{1,2}$ string surfaces in the lightcone gauge correspond to Kac-Moody currents of a nilpotently gauged SL(2, R) WZW model. The related Liouville fields are singular and have vanishing chiral energy functional. The corresponding string surfaces are described by a pair of monotonic functions $\zeta(z)$, $\bar\zeta(\bar z)$, used in the parameterization of the 2d conformal group. These SL(2, R) string surfaces are always singular, like the surfaces in $R^{1,2}$.

5. In the Liouville gauge, the vacuum field configurations provide the coset Kac-Moody currents of the SL(2, R)/U(1) model. They are labeled by two nonzero integers $n$ and $\bar n$. The integration of the currents in the elliptic sector of WZW-fields leads to circular and spiky strings. These string surfaces split into four classes, depending on the sign of $n\bar n$ and the parity of $n - \bar n$. The parabolic and hyperbolic solutions are obtained by the analytical continuation of the elliptic solutions with $n\bar n > 0$ and even $n - \bar n$.

The main result of the paper is the description of the new classes of closed string solutions in $R^{1,2}$ and SL(2, R). They are given by (2.67) in $R^{1,2}$, and by (3.39), (3.46) and (3.69)-(3.70) in SL(2, R). The construction of a quantum theory of these solutions requires quantization of singular Liouville fields. The quantized Liouville theory on a cylindrical spacetime [34] and its Euclidean counterpart [35] are among the most remarkable results in 2d QFT.
However, this theory corresponds to the quantization of regular Liouville fields, which allow a free-field parameterization. This parameterization is a basis for canonical quantization in the Minkowskian spacetime, where the parameterizing field can be chosen as the in (or out) field of the theory. It also helps to construct the vertex operators [36] and calculate their causal algebra [37]. The singular Liouville fields we are interested in have a regular stress tensor. A natural way to generalize the canonical scheme to the singular case is to find a free-field parameterization of these singular fields. Note that such Liouville fields on a plane allow a free-field parameterization. Namely, a Liouville field with $N$ lines of singularities can be canonically parameterized by one regular free field and the asymptotic data of $N$ relativistic particles [38]. Unfortunately, a generalization of this result to the periodic case is still unknown (see however [22]). The vacuum Liouville field configurations used in the spiky strings, for $n = 1$, correspond to the Möbius invariant ground state, which arises in boundary Liouville theory on a strip with Dirichlet conditions. The Hamiltonian description of such field configurations was considered in [39], as an attempt to understand Liouville theory with Dirichlet boundary conditions as a limit of the theory with Neumann conditions [40]. The latter correspond to the elliptic monodromy [41], and their Euclidean version is given by FZZT branes [42]. This programme also needs further investigation. A possible candidate for a quantum theory of the spiky strings in $R^{1,2}$ could be the 'Wick rotated' ZZ branes [43]. Another possible approach to the quantum theory of singular Liouville fields is the geometric quantization on the Virasoro coadjoint orbits [44]. These orbits parameterize nilpotently gauged strings in SL(2, R) and spiky strings both in $R^{1,2}$ and SL(2, R). There is renewed interest in Liouville theory caused by [45], and it is a challenge to understand whether the singular Liouville fields and spiky strings described in the present paper have any relation to it.

Appendix A

In this appendix we integrate the linear system (2.5) with $u$, $\bar u$ and $e^\alpha$ given by (2.11). We then briefly consider the case of the Liouville gauge, discussed in subsection 2.3. Starting with the last equations in (2.5) and, finally, using the notations $c_1 \equiv e$, $c \equiv 2e_-$ and $c_2 \equiv e_+$, one obtains $B$, $\bar B$ and $N$ in the form (2.12)-(2.13). In the Liouville gauge, where $u$ and $\bar u$ are constants and $e^\alpha$ is given by (2.39), we start again with the last equations in (2.5). The integration steps here are as before. At the final stage one gets the following relations for the integration constants: $\bar c_1 = -c_1$, $\bar\epsilon\, \bar c_2 = \epsilon\, c_2$; and with the notations $c_1 \equiv e$, $c \equiv 2e_-$ and $c_2 \equiv \epsilon\, e_+$ one arrives at (2.40)-(2.41).

Appendix B

Then, due to (B.8), $\rho$ and $\bar\rho$ have equal zero modes too. This proves the invariance of the first relation in (2.19). To prove the second relation, one has to compare the zero modes of $f\rho$ and $\bar f\bar\rho$ (the first order terms in $\varepsilon$). Here, it is convenient to use (B.6), (B.8) and express $f\rho$ through $\phi$ and its derivatives. Then, a part of the zero mode of $f\rho$ vanishes due to the relation $\phi'^3 + 2\phi''\phi'\phi = (\phi'^2\phi)'$. The same is valid for the zero mode of $\bar f\bar\rho$. The remaining parts of the integrals $\int_0^{2\pi} dz\, f(z)\rho(z)$ and $\int_0^{2\pi} d\bar z\, \bar f(\bar z)\bar\rho(\bar z)$ coincide trivially.

Appendix C

The exponential $V = e$

Appendix D

In this appendix we present six pictures of spiky string configurations at a fixed time. They are constructed by eq.
(2.60) for different values of $n$ and $\bar n$, with $\Lambda = |n\bar n|$. This value of the scale factor $\Lambda$ is chosen just for convenience. The values of $n$ and $\bar n$ are indicated below the pictures. The left pictures correspond to $n\bar n < 0$ and the right ones to $n\bar n > 0$. These pictures demonstrate the properties 1-5 discussed in subsection 2.3.

Appendix E

The curvature with respect to the origin is positive if the scalar product $\partial^2_{\sigma\sigma} X\big|_N \cdot X$ is negative, and vice versa. Calculating the vector $\partial^2_{\sigma\sigma} X$ from (E.5) and using (E.7), we obtain an equation which proves property 4 for an arbitrary $\tau$. Finally, considering property 5, we calculate the differential of the polar angle for the curve (2.60) and integrate it over the interval $[\sigma_m, \sigma_{m+1}]$ defined by (2.61). This provides the rotation angle

$\Delta\phi = \dfrac{|n\bar n\,(n + \bar n)|}{|n - \bar n|} \displaystyle\int_0^{2\pi} d\theta\, \dfrac{1 + \cos\theta}{n^2 + \bar n^2 + 2 n \bar n \cos\theta}$,   (E.9)

where $\theta = |n - \bar n|\,\sigma$. Writing (E.9) as a contour integral over the unit circle with $\zeta = e^{i\theta}$, and calculating the residues of the integrand at the poles inside the unit disk, we obtain (2.64). Note that the integrand has three poles, at $\zeta_1 = 0$, $\zeta_2 = -n/\bar n$ and $\zeta_3 = -\bar n/n$, and only two of them are inside the unit disk.

Appendix F

Here we present some useful formulas for the sl(2, R) algebra and the SL(2, R) group. From eq. (3.7) it follows that $a^2 = (a\, a)\, I$ for any $a \in sl(2, R)$. In particular, if $a\, a = 0$, then $a$ is nilpotent, $a^2 = 0$. The equation $a^2 = (a\, a)\, I$ helps to find a compact form of $e^a$:

$e^a = \cosh\theta\, I + \sinh\theta\, \hat a$, with $\theta = \sqrt{a\, a}$, $\hat a = \dfrac{a}{\theta}$, if $a\, a > 0$;   (F.1)

$e^a = \cos\theta\, I + \sin\theta\, \hat a$, with $\theta = \sqrt{-a\, a}$, $\hat a = \dfrac{a}{\theta}$, if $a\, a < 0$;

$e^a = I + a$, if $a\, a = 0$.

From these equations it follows that
2009-09-02T08:45:38.000Z
2009-09-02T00:00:00.000
{ "year": 2009, "sha1": "7cf6a3ccbe59a9e1060eb83128d25446ec1da23f", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0909.0350", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7cf6a3ccbe59a9e1060eb83128d25446ec1da23f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
56159799
pes2o/s2orc
v3-fos-license
BIBECHANA

A Frequency Modulated (FM) signal transmitter is a small device that can transmit a frequency modulated signal over a short range [1]. This document presents a simple and economical technique for building an FM transmitter using basic electronic components such as resistors, capacitors and inductors. The FM transmitter receives human voice signals through a microphone, amplifies them, modulates them onto a carrier and finally transmits the result. Assuming favorable conditions, the output of the transmitter can be received by anyone who tunes in to its frequency. Here, I describe the circuit diagram, its working, the components required, the uses of the various components in the circuit, and its practical applicability. The design is simulated using NI Multisim and is further implemented on a breadboard. This design is capable of transmitting a signal over a radius of 20 m when tuned to 97.1 MHz; one could clearly hear the sound produced at the microphone of the transmitter.

Introduction

An FM transmitter is an electronic device which produces frequency-modulated waves with the help of an antenna. A transmitter generates FM waves for various purposes, such as communication and broadcasting a message. Furthermore, FM signals are less prone to interference than AM signals due to their higher bandwidth, and they are less susceptible to noise [1,2]. The transmitted signal has a limited reception range: as the distance from the source increases, the received signal is merged with noise; eventually the noise component dominates the transmitted signal, and the message cannot be received successfully beyond a certain distance, also because of obstacles. The power source is a 9 V DC battery, which starts discharging after supplying constant power for around 6 hours [3]. The information provided to the transmitter is in the form of an electronic signal, such as audio from a microphone. The transmitter combines the information signal to be carried with the RF signal (the carrier); this is called modulation. In an FM transmitter, the information is added to the radio signal by slightly varying the radio signal's frequency.

Materials & Methods

The components used in our design are easy to find and easy to implement. The circuit can also be designed and implemented on a breadboard, although a breadboard is not well suited for high-frequency circuits. The components are listed in Table 1.

Description of Circuit

The circuit operates in 4 steps: first, a condenser microphone takes the input; then the amplifier performs amplification; the amplified signal is modulated onto the frequency generated by the LC oscillator; and finally, the antenna transmits the signal. The inductor L1 and capacitor C3 form an oscillating tank circuit together with the transistor 2N3904. As long as current exists across the inductor coil L1 and the capacitor C3, the tank circuit oscillates at the resonant carrier frequency for FM modulation, whereas capacitor C1 provides negative feedback to the oscillating tank circuit. The modulated signal from the antenna is radiated as radio waves in the FM frequency band. The antenna is simply a copper wire 20 cm or more in length.

Amplification Stage: The sound signal converted by the microphone into an alternating signal has very low power; hence, if we modulated it directly onto the carrier and transmitted it, it would not be possible to demodulate it and retrieve the original signal. The resistors R1 and R2 are used to bias the BJT, as illustrated in Figure 3.
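The carrier produced by the L1-C3 tank circuit sits at the resonant frequency f0 = 1/(2*pi*sqrt(L*C)). A minimal sketch of this calculation in Python, with illustrative component values only (the actual values belong to Table 1, which is not reproduced in this excerpt):

```python
import math

def resonant_frequency(L_henry, C_farad):
    """Resonant frequency of an ideal LC tank: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

# Illustrative values only, chosen so the tank lands near the FM band.
L1 = 0.1e-6   # 0.1 uH
C3 = 27e-12   # 27 pF
print(f"f0 = {resonant_frequency(L1, C3) / 1e6:.1f} MHz")  # ~96.9 MHz
```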
Final Transmission of FM signal

Now, the signal after amplification by the BJT is modulated with the carrier signal generated by the tank circuit. After successful modulation, the wave is transmitted using the antenna. This is displayed in Figure 5.

Simulation using NI Multisim 10

The FM transmitter can be simulated using tools from Multisim; this circuit was simulated using NI Multisim 10. First, the circuit is created in Multisim using the tools and parts from the library dedicated to circuit design. The main purpose of using Multisim to simulate the design is to confirm that the values of the inductor, capacitors, etc. are within bounds. After the circuit is designed, it is simulated and analyzed using the Spectrum Analyzer and the virtual oscilloscope present in Multisim. Also, applying probes at the antenna shows the values of various parameters at the output of the circuit. The results of the simulation are illustrated in Figure 6.

Interpretation by Oscilloscope

As can be seen in Figure 6, the oscilloscope displays the output waveform of the FM signal that is transmitted through the antenna. The output wave can be clearly seen on the oscilloscope. As our design is very basic, the effect of noise can be seen in the output as well. It can be noted that the output waveform has a time period of 10.4 ns, which corresponds to approximately 97 MHz. Slight variations are observed due to the effect of noise.

Interpretation by Spectrum Analyzer

The spectrum of the output waveform can be analyzed using the Spectrum Analyzer. It can be clearly observed from the spectrum analyzer graph in Figure 6 that the peak of the spectrum is at 96.663 MHz, approximately 97 MHz. The effect of noise can also be observed, as the spectrum is not ideal, as would be expected of a sinusoidal wave.

Interpretation by Probe Marker

Using the probe marker placed at the output during simulation, the frequency of the output signal can be clearly read from Figure 6 as 97.0 MHz, along with various other parameters such as current and voltage.

Implementation

After all the desired simulation results are achieved, we first implement the designed circuit on a breadboard. All the components listed in Table 1 are arranged and assembled in the circuit as per the design requirements of our FM transmitter; the implemented circuit on the breadboard is shown in Figure 7, along with the battery source.

Testing

The implemented circuit is then tested. To test the transmitter, the tank circuit is tuned properly so that the generated frequency can be easily modulated with the message signal. Both the FM transmitter and a receiver are switched on. The receiver is tuned to 97.1 MHz and a voice signal is transmitted; instantly, the voice signal is heard clearly at the receiver. Furthermore, to check the range of the transmitter, a constant voice signal is applied at the input and the receiver is slowly moved away from the transmitter. As the distance increased, the impact of noise started dominating our message signal. Also, as our transmitter produces a low-power signal, the output can be obtained clearly within a radius of 20 m around the transmitter. Figure 8 shows the output of our design being verified using a Digital Storage Oscilloscope (DSO).
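The oscilloscope reading above can be cross-checked directly: a period of 10.4 ns corresponds to roughly 96 MHz, which is consistent, within the observed noise, with the 97.1 MHz tuning. A minimal check:

```python
period_s = 10.4e-9           # time period read from the oscilloscope
freq_hz = 1.0 / period_s     # f = 1 / T
print(f"{freq_hz / 1e6:.2f} MHz")  # -> 96.15 MHz, i.e. ~97 MHz within noise
```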
Conclusion

The transmitted human voice was received at the output on the 97.1 MHz frequency, provided conditions were favorable for wireless transmission. To extend the range and power of the FM transmitter, one can apply another level of amplification after the second stage; this would further amplify the signal before transmission, and the higher the power of the transmitted signal, the greater its range and the more noise immune it becomes. Also, to improve efficiency, one may check the voltage of the applied source and ensure it is 9 V for the above circuit. Furthermore, one should implement the design on a PCB, as a breadboard is not preferred for high-frequency circuits.

Transistor (2N3904): Transistor 2N3904 is a medium-gain general-purpose transistor. It is widely used for low-power amplification. It has a transition frequency of around 300 MHz with a minimum current gain of 100. The circuit diagram is shown in Figure 1, which was simulated in NI Multisim 10.

Table 1: Components used in the design.

Microphone [4]: A microphone is a transducer which converts sound energy into electrical energy. The microphone used in our design is an 'Electret Condenser microphone' [4]. It consists of a parallel plate capacitor which has one fixed plate and one movable plate; the movable plate is called the diaphragm. When sound strikes the diaphragm it starts moving, thus changing the capacitance of the capacitor, which in turn results in the flow of a variable current. Inside the microphone, a capacitive sensor diaphragm is present; it vibrates according to the air pressure changes and generates AC signals. The capacitors C4 and C5 can be thought of as frequency-dependent resistors (reactances). The capacitor C5 separates the microphone from the transistor. Speech consists of different frequencies and the capacitor impedes them; the net effect is that C4 modulates the current going into the transistor. Using a large value for C4 reinforces bass (low frequencies) while smaller values boost treble (high frequencies). The input stage of the Multisim circuit is shown in Figure 2; here we used an A.C. voltage source instead of the microphone for simulation purposes.
2018-12-07T23:18:45.802Z
2017-12-19T00:00:00.000
{ "year": 2017, "sha1": "43bc5c46a83ccedb8f51e6744413ad2ccb823184", "oa_license": "CCBYNC", "oa_url": "https://www.nepjol.info/index.php/BIBECHANA/article/download/18279/15205", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "43bc5c46a83ccedb8f51e6744413ad2ccb823184", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
237542628
pes2o/s2orc
v3-fos-license
Diagnostic Value of Presepsin in Elderly Patients with COPD Exacerbation

Abstract Background: Chronic obstructive pulmonary disease (COPD) is associated with systemic low-grade inflammation which increases further in exacerbation states. CD14 is a glycoprotein expressed on monocytes and macrophages, and its soluble fraction, presepsin, is present in blood and produced in association with inflammation and infections.

Owing to their heterogeneity and the lack of available diagnostic laboratory tests, COPD exacerbations are often diagnosed based on clinical assessment, which is subjective and variable within and across physicians. 12 Instead, biomarkers may better reflect disease activity and fluctuate in accordance with disease state. Therefore, a "biomarker" is needed to achieve an objective verification of exacerbations. 13 It is now well known that a major factor in the pathogenesis of COPD is chronic inflammation of the small airways, caused by inhalation of particles and gases, which is further increased during acute exacerbations, 14 and this inflammatory process spills over into the systemic circulation, which may result in an "inflammatory signature" in blood related to COPD. 15,16 Accordingly, there has been great interest in developing biomarkers that are specific to the inflammatory process in COPD and can be related to important clinical health outcomes such as exacerbations and mortality. While levels of these biomarkers may be altered when comparing stable COPD patients to normal controls, further disturbances may be observed in the acute setting of an exacerbation. 17

Presepsin (sCD14-ST) is the soluble N-terminal fragment of the cluster of differentiation (CD) marker protein CD14. CD14 is a multifunctional glycoprotein expressed mainly on the membrane surface of monocytes/macrophages, which serves as a specific receptor for complexes of lipopolysaccharides (LPSs) and LPS-binding proteins (LBPs). 18 When the proinflammatory signaling cascade against infectious agents is activated, the soluble form of CD14 (presepsin) is produced and released into the circulation, either by secretion following phagocytosis or through proteolytic cleavage on activated monocytes. 19,20 Elevation of serum presepsin levels has been recognized as a specific, early-phase biomarker for sepsis, especially in the field of critical care medicine. 18,21,22 However, to our knowledge, the effectiveness of presepsin as a biomarker in predicting COPD exacerbation has not been investigated so far. The aim of the current study was to assess the diagnostic accuracy of presepsin in elderly patients with COPD exacerbation.

METHODS: In this case control study, 90 elderly (60 years and older) participants were recruited from September 2017 to September 2019 from the community and from the inpatient wards and outpatient clinics of Ain Shams University Hospitals, Cairo, Egypt.
The subjects were divided into three groups. Group A: 30 patients with COPD during exacerbation; Group B: 30 patients with stable COPD who had not been hospitalized for exacerbation of the disease for at least 2 months before; and Group C: 30 subjects as a control group who were lifelong nonsmokers, free from any lung disease, with normal spirometry and with no history of atopy. Diagnosis of COPD (for groups A and B) was made according to the GOLD criteria, as follows: presence of symptoms of dyspnea, chronic cough and sputum production, and a history of exposure to risk factors for the disease. Diagnosis is confirmed by spirometry showing post-bronchodilator FEV1/FVC < 0.70. 23 COPD exacerbation (for group A) was defined according to the GOLD criteria as acute worsening of respiratory symptoms (worsening of dyspnea sensation, coughing, or sputum production that can become purulent) that results in additional therapy. 23 Participants who refused to participate, had any lung disease other than COPD, or had conditions that affect presepsin levels, such as recent major trauma or surgical intervention, infection at any other site, autoimmune inflammatory disease, malignant cancer of any type, acquired immunodeficiency syndrome, end-stage liver disease or end-stage renal disease, were excluded. Approval of the ethical committee of the Faculty of Medicine, Ain Shams University, and informed oral consent from the study subjects were obtained. All participants were subjected to history taking and examination. Regarding COPD subjects, exacerbation history including previous hospital admissions was taken, and symptom assessment was done through the Modified British Medical Research Council (mMRC) Questionnaire and the COPD Assessment Test (CAT). Assessment of the current exacerbation presentation and short-term outcomes was done for group A (COPD exacerbation). Assessment of functional status for all subjects was done through the Activities of Daily Living Scale (ADL). Pulmonary function tests using a portable spirometer (FCC ID: TUK-MIR009) were performed for all subjects for the diagnosis of COPD and for staging the severity of airflow limitation according to the GOLD guidelines: GOLD stage I [FEV1 ≥ 80%], stage II [50% ≤ FEV1 < 80%], stage III [30% ≤ FEV1 < 50%], and stage IV [FEV1 < 30%]. 23 Presepsin was detected in the plasma of all subjects using an enzyme-linked immunosorbent assay (ELISA) with the commercially available Human Presepsin (sCD14-ST) ELISA Kit, according to the manufacturer's instructions (Wuhan Fine Biotech Co., Ltd). The assay detection range is 0.156-10 ng/ml. The technicians who measured the samples were blinded to the identity of the patient samples. Arterial blood gases (PaCO2 and PaO2), complete blood count (CBC), serum C-reactive protein levels and erythrocyte sedimentation rate (ESR) were also measured.

Statistical Methods

Descriptive statistics were reported for quantitative data as minimum and maximum of the range as well as mean ± SD (standard deviation) for normally distributed data, and for qualitative data as number and percentage. Inferential analyses were done for quantitative variables using the Shapiro-Wilk test for normality testing, the independent t-test in the case of two independent groups with normally distributed data, and the ANOVA test. For qualitative data, inferential analyses for independent variables were done using the Chi-square test for differences between proportions and Fisher's Exact test for variables with small expected numbers.
The post hoc Bonferroni test was used to find homogeneous groups in multiple significant comparisons. ROC curves were used to evaluate the performance of different tests in differentiating between certain groups. A linear regression model was used to find independent factors affecting certain conditions. A P value < 0.050 was considered significant; otherwise, results were considered non-significant. The collected data were coded, tabulated, and statistically analyzed using IBM SPSS Statistics (Statistical Package for Social Sciences) software version 18.0, IBM Corp., Chicago, USA, 2009.

Ethical considerations

The study was performed in adherence to the principles established by the Declaration of Helsinki, and the study methodology was reviewed and approved by the Research Review Board of the Geriatrics and Gerontology Department, Faculty of Medicine, Ain Shams University. Informed verbal consent was obtained from all the participants because some of the participants were illiterate and could not provide signed consent. The verbal consent was documented in the presence of a next of kin and a nurse. The ethics committee approved the use of verbal consent.

Results

The current study is a case control study. The study population was 90 elderly (60 years and older) subjects who were divided into three groups. Group A: 30 patients with COPD exacerbation; Group B: 30 patients with stable COPD; and Group C: 30 control subjects. The 3 study groups were matched for age and gender; male subjects constituted 93.3%, 83.3% and 80% of the exacerbation, stable and control groups respectively, while female subjects constituted 6.7%, 16.7% and 20%. The three studied groups were also matched regarding their functional status (ADL scale) and co-morbidities (DM, HTN, ISHD, OA, BPH and gastritis). The characteristics and comorbidities of participants in the 3 groups are outlined in Table 1. Also, in COPD subjects, there was no significant difference between the exacerbation and stable groups regarding COPD duration, risk factors, smoking index, COPD severity, treatment, treatment duration, exacerbation frequency and previous hospitalizations. Similarly, there was no significant difference between the two groups regarding the modified MRC scale and CAT score (P value > 0.05) (data not shown in tables).

Regarding levels of inflammatory markers among the 3 study groups: TLC, CRP and presepsin were significantly higher in the exacerbation group (p < 0.001), with no significant difference between the stable and control groups, while ESR was significantly different among the 3 study groups, highest in exacerbation, followed by stable and lowest in control (p < 0.001) (data shown in Table 2). After regression analysis of all studied confounding factors, it was found that presepsin, CRP and ESR were increased in stable COPD and exacerbation, while TLC was increased in COPD exacerbation (data not shown in tables). The cutoff values of presepsin ≥ 0.5 ng/ml and CRP ≥ 10.0 mg/L were found to have the highest diagnostic characteristics in differentiating exacerbation from stable COPD, with sensitivity of 100% and 96.7% respectively and specificity of 93.3% and 100% respectively (Figure 1). The presepsin levels were found to be significantly higher in COPD exacerbation subjects with pneumonia (p = 0.004), in subjects admitted to the ICU (p = 0.016) and in those with respiratory failure (p = 0.024) (Table 3), which indicates that the increase in presepsin levels in the serum of exacerbated COPD patients is correlated with the severity of exacerbation.
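The sensitivity and specificity quoted above for each cutoff follow from simple 2x2 counts of cases (exacerbation) and non-cases (stable COPD) above and below the cutoff. A minimal sketch in Python, using hypothetical presepsin values, since the raw per-patient measurements are not reported in the paper:

```python
import numpy as np

def diagnostic_metrics(cases, controls, cutoff):
    """Sensitivity/specificity of the rule 'positive if value >= cutoff'."""
    cases, controls = np.asarray(cases), np.asarray(controls)
    tp = np.sum(cases >= cutoff)       # true positives
    fn = cases.size - tp               # false negatives
    tn = np.sum(controls < cutoff)     # true negatives
    fp = controls.size - tn            # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical presepsin values (ng/ml), n = 30 per group as in the study:
rng = np.random.default_rng(0)
exacerbation = rng.normal(1.0, 0.3, 30).clip(0.5)  # fabricated, illustration only
stable = rng.normal(0.3, 0.1, 30)                  # fabricated, illustration only
sens, spec = diagnostic_metrics(exacerbation, stable, cutoff=0.5)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```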
Discussion

The aim of our study was to assess the effectiveness of presepsin in predicting COPD exacerbations, which, to our knowledge, had not been investigated before. However, other markers have been assessed in AECOPD, such as CRP, TLC, ESR, fibrinogen, PCT (procalcitonin) and suPAR (soluble urokinase-type plasminogen activator receptor). 24,25,26,27,28 In this case control study, we recruited 90 elderly subjects who were divided into 3 groups. Group A: 30 patients with COPD exacerbation; Group B: 30 patients with stable COPD; and Group C: 30 control subjects. The current study showed that, after regression analysis of all studied confounding factors, presepsin, CRP and ESR were increased in stable COPD and exacerbation, while TLC was increased in COPD exacerbation. Also, the cutoff values of presepsin ≥ 0.5 ng/ml and CRP ≥ 10.0 mg/L were found to have the highest diagnostic characteristics in differentiating AECOPD from stable COPD, with sensitivity of 100% and 96.7% respectively and specificity of 93.3% and 100% respectively. COPD is independently associated with systemic low-grade inflammation, and this inflammatory activity increases further in exacerbation states. An acute COPD exacerbation can be viewed as an acute inflammatory event superimposed on the chronic inflammation associated with COPD, with bacterial infection being the cause in approximately half of the exacerbations. 29 Our findings are in line with a previous report, 27 in which PCT levels were found to be significantly higher in AECOPD patients than in patients with stable COPD. Also, Gumus et al. (2015) 28 found that fibrinogen, CRP, and suPAR levels were significantly higher in patients with COPD exacerbation than in healthy controls. The finding that inflammatory markers were higher in stable COPD patients than in subjects without COPD is in accordance with other studies; for example, Mannino et al. 36 found that serum suPAR levels were significantly higher in stable COPD patients than in control subjects (P < 0.001). This study also showed that presepsin levels had significant positive correlations with length of hospital stay and respiratory rate, and were significantly higher in patients with respiratory failure and in cases admitted to the ICU. This indicates that presepsin levels increase with the severity of COPD exacerbation. This could be attributed to the positive relation between disease severity and the extent of inflammation in COPD, so it is expected that inflammatory markers show elevated levels with increased exacerbation severity (Heidari B, 2012). 37 Also, in the current study, a statistically significant difference was seen for presepsin, it being twice as high in COPD exacerbation patients with pneumonia (1.4 ± 0.7 ng/ml) compared to those without (0.7 ± 0.1 ng/ml). This raises the possibility that the higher the presepsin level, which is also correlated with higher CRP levels, the higher the likelihood of bacterial infection being the cause of the exacerbation. Both exacerbation and superinfection by pneumonia in COPD patients can cause serum elevation of inflammatory markers at different levels, so these markers can be used to differentiate COPD exacerbation from pneumonia. 38 The study of Bafadhel et al. (2011) 38 showed that the biomarkers procalcitonin and CRP were elevated in patients with pneumonia compared with patients with exacerbations of asthma and COPD, suggesting that they can usefully guide antibiotic usage. Another two studies, by Lacoma et al. (2011) 39 and Çolak et al.
(2017), 40 also concluded that AECOPD patients with pneumonia had significantly higher PCT values than those without pneumonic involvement. The current study found that elevated serum presepsin levels in patients with symptoms of COPD exacerbation are correlated with the severity of exacerbation, and that high presepsin levels may also alert physicians to the possibility of pneumonia. Antibiotics are used in the treatment of exacerbations of COPD, but not all patients benefit equally from antibiotics; COPD exacerbation patients selected on the basis of evidence of bacterial infection or by the severity of exacerbation are more likely to benefit. Using presepsin as a biomarker for guiding treatment in COPD patients could have important implications in clinical practice, as the overuse of antibiotics could be substantially decreased, thereby reducing antibiotic resistance and its related side effects and lowering medical costs for hospital systems and the patient. Hence, we recommend further studies to evaluate the role of presepsin as a biomarker for targeting the management and therapy of COPD patients.

Conclusion: The current study shows that elevated presepsin levels have potential value as a robust and independent biomarker of exacerbation in COPD. Further studies are needed to determine its potential in guiding exacerbation therapy.

Conflict of interest: The authors report no conflicts of interest in this work.
2021-09-16T20:01:15.836Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "b5703c30271f1a100cd9734f4522cc2500062e0f", "oa_license": null, "oa_url": "https://ejgg.journals.ekb.eg/article_139248_e51cb931b0e08dc1c004634b72312220.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "82df6f0905a2f1864fca8b421e46809061335c9f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
123605324
pes2o/s2orc
v3-fos-license
SINGULAR MINIMIZERS FOR REGULAR ONE-DIMENSIONAL PROBLEMS IN THE CALCULUS OF VARIATIONS

$I(u) = \int_a^b f(x, u(x), u'(x))\, dx$,

where $[a,b]$ is a finite interval, $'$ denotes $d/dx$, $f = f(x,u,p)$ is $C^\infty$, $f \geq 0$ and $f_{pp} > 0$ (regularity). We consider the problem of minimizing $I$ in the set $\mathcal{A}$ of absolutely continuous functions $u : [a,b] \to \mathbb{R}$ satisfying $u(a) = \alpha$, $u(b) = \beta$, where $\alpha$ and $\beta$ are given constants. As is well known, if a minimizer $u$ of $I$ in $\mathcal{A}$ is Lipschitz continuous then $u$ is smooth and satisfies the Euler-Lagrange equation, and (ii) for any $x \in [a,b]$ the limit

$u'(x) = \lim_{\delta \to 0,\ x + \delta \in [a,b]} \dfrac{u(x + \delta) - u(x)}{\delta}$

exists as an element of the extended real line $\overline{\mathbb{R}}$. In particular $u$ satisfies (EL) and (DBR) on $[a,b] \setminus E$. As far as we are aware, our examples are the first showing that the Tonelli set $E$ may be nonempty. (For a recent version of Tonelli's theorem for nonsmooth integrands see Clarke and Vinter [1983].) Details of the proofs and further results will be published elsewhere.

EXAMPLE 1. Minimize

$I(u) = \int_0^1 \left[ (x^2 - u^3)^2 (u')^{14} + \varepsilon (u')^2 \right] dx$, subject to $u(0) = 0$, $u(1) = k$.

It is easily verified that if

$0 < \varepsilon < \varepsilon_0 = \max_{t^3 \in [7/13,\, 1)} \left(\tfrac{2t}{3}\right)^{12} (1 - t^3)(13 t^3 - 7) = .002474\ldots,$

the corresponding Euler-Lagrange equation has an exact solution $u = k x^{2/3}$ on $(0,1]$ provided $k$ has either of two values $k_1, k_2$ with $\tfrac{7}{13} < k_1^3 < k_2^3 < 1$. The underlying reason for the existence of these exact solutions is the scale invariance property

(*) $f(\lambda x,\, \lambda^{2/3} u,\, \lambda^{-1/3} p) = \lambda^{-2/3} f(x, u, p)$, $\lambda > 0$,

of the integrand. This invariance is also responsible for the existence of transformations, namely $v = u^{3/2}$, $z = v/x$, $q = v'$ and $x = e^t$, converting (EL) into an autonomous first order system of ordinary differential equations in the first quadrant of the $(q, z)$ plane. The critical points in $q > 0$, $z > 0$ of this system are precisely the points $q = z = k_1^{3/2}$ (a sink) and $q = z = k_2^{3/2}$ (a saddle). Furthermore, every smooth solution $u$ of (EL) on $[0,1]$ with $u(0) = 0$, $u'(0) > 0$ corresponds to a single orbit $q(t), z(t)$ leaving the origin with slope $3/2$. Provided $\varepsilon > 0$ is sufficiently small, it can be shown that this orbit is attracted as $t \to \infty$ to $q = z = k_1^{3/2}$. It follows that for $\varepsilon > 0$ sufficiently small there exists $\delta = \delta(\varepsilon) > 0$ such that if $k > k_2 - \delta$ there is no smooth solution of (EL) on $[0,1]$ satisfying the end conditions, and hence that $I$ does not attain a minimum among Lipschitz functions. By applying the direct method of the calculus of variations and further analysis of the $(q, z)$ phase portrait [in conjunction with a device due to Manià (cf. Example 2 below)], one concludes that for each $k > k_1 - \delta_1(\varepsilon)$ there is a unique absolutely continuous minimizer $u$, such that $u \in C^\infty((0,1])$ and $u(x) \sim k_2 x^{2/3}$ as $x \to 0^+$. (If $k = k_2$ then $u(x) = k_2 x^{2/3}$.) Hence the Tonelli set $E$ consists of the single point $x = 0$. Since $u'(0) = \infty$ it follows easily that $f_u, f_x \notin L^1(0,1)$, so that $u$ does not satisfy the integrated versions of (EL) and (DBR). If $k > 0$ is sufficiently small the minimizer is smooth and unique. If $k = k_1$ then, for all $0 < \varepsilon < \varepsilon_0$, $u(x) = k_1 x^{2/3}$ is not the minimizer.

EXAMPLE 2. Minimize. Here $m$ is a positive integer. Note first that when $m = 13$ the integrand has the invariance property (*), and that if $\varepsilon > 0$ is sufficiently small there are two solutions, $u = k_1 |x|^{2/3} \operatorname{sign} x$ and $u = k_2 |x|^{2/3} \operatorname{sign} x$, of (EL) for $x \neq 0$. For minimization problems such as in Example 1 the same phase portrait techniques are applicable. However, we now consider the case $m > 13$. We fix $k \in (0,1]$ and let $\varepsilon > 0$ be sufficiently small. Then $I$ attains a minimum, and any minimizer $u$ satisfies $u(0) = 0$, $u'(0) = +\infty$.
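The 'device due to Manià' invoked in Example 1, and adapted below, is the classical 1934 example of the Lavrentiev phenomenon; for orientation, its standard statement (well known from the literature, not part of this announcement) is:

```latex
\[
J(u) \;=\; \int_0^1 \bigl(u^3 - x\bigr)^2 \,(u')^6 \, dx,
\qquad u(0) = 0,\quad u(1) = 1 .
\]
% The minimizer over absolutely continuous functions is u(x) = x^{1/3},
% with J(u) = 0, while the infimum of J over Lipschitz functions with the
% same boundary values is strictly positive:
\[
\inf_{u \in \mathrm{Lip}} J(u) \;>\; \min_{u \in AC} J(u) \;=\; 0 .
\]
```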
Furthermore,

$\inf\{\, I(u) : u \in \mathcal{A},\ u \text{ Lipschitz} \,\} \;>\; \min_{u \in \mathcal{A}} I(u)$.

Here the Tonelli set $E$ contains at least one interior point, namely $x = 0$, and (IEL) and (IDBR) are not satisfied for any choice of lower limit in the integrals of $f_u$, $f_x$. These results are proved by adaptation of the argument of Manià [1934] (see also Cesari [1983, p. 514], who is responsible for the resurrection of the remarkable Lavrentiev phenomenon from the literature). The Lavrentiev phenomenon can be viewed as a kind of 'uncertainty principle': one cannot simultaneously approximate the minimizer $u$ and the minimum value $m$ of $I$ arbitrarily closely by a Lipschitz function.

EXAMPLE 3. We state this example as a proposition, since the explicit formula for the integrand is somewhat complicated. The proof proceeds by choosing a suitable function $g$ satisfying $g'(u) > 0$ for $u \neq 0$ and $g'(0) = 0$, by examining the values of $\bar u$, $\bar u'$, and finally by constructing an appropriate integrand $f(u,p) \geq (g'(u)p)^2$ satisfying $f_{pp} > 0$ and such that $f(\bar u, \bar u') = (g'(\bar u)\bar u')^2$. Note that for integrands independent of $x$, (IDBR) always holds for a minimizer (cf. Tonelli [1934], Cesari [1983]). If $f_{pp} > 0$ and $f(u,p)/p \to \infty$ for all $u$, then an argument based on (IDBR) shows that any minimizer is smooth and satisfies (EL) on $[-1,1]$, so that Example 3 is, in a sense, optimal.

$I(u) = \int f(u, u')\, dx$ has a unique minimizer $u$ in the set $\mathcal{A}$ of absolutely continuous functions on $[-1,1]$.

It would be interesting to determine if analogues of Examples 1-3 hold for multiple integrals with integrands independent of $u$, such as those occurring in nonlinear elasticity, under growth hypotheses ensuring that any minimizer is continuous. If so, then the appearance of singularities in the gradient of $u$ could be related to the onset of fracture. Finally, we remark that because of the Lavrentiev phenomenon care must be taken in the interpretation of minimizers obtained numerically via finite element schemes. Most of the results concerning Examples 1 and 3 were obtained when Mizel was a U.K. Science and Engineering Research Council Visiting Fellow at Heriot-Watt University in 1981 and 1982. Further results were developed during a brief joint visit at the Institute for Mathematics and its Applications (University of Minnesota) and at the Mathematics Research Center (University of Wisconsin).
2017-07-30T00:59:27.445Z
1984-07-01T00:00:00.000
{ "year": 1984, "sha1": "a2e80aaae73e05a7c9566970f666608db2eb0045", "oa_license": null, "oa_url": "https://www.ams.org/bull/1984-11-01/S0273-0979-1984-15241-8/S0273-0979-1984-15241-8.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6d52a5eafdf892eb8e56d7f75603165357f5c75e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
17441018
pes2o/s2orc
v3-fos-license
Control of reactive oxygen species (ROS) production through histidine kinases in Aspergillus nidulans under different growth conditions

Sensor histidine kinases (HKs) are important factors that control cellular growth in response to environmental conditions. The expression of 15 HKs from Aspergillus nidulans was analyzed by quantitative real-time PCR under vegetative, asexual, and sexual growth conditions. Most HKs were highly expressed during asexual growth. All HK gene-disrupted strains produced reactive oxygen species (ROS). Three HKs are involved in the control of ROS: HysA was the most abundant under the restricted oxygen condition, NikA is involved in fungicide sensing, and FphA inhibits sexual development in response to red light. Phosphotransfer signal transduction via HysA is essential for ROS production control.

Introduction

His-Asp phosphorelay signal transduction helps cells adapt to environmental changes and is common among bacterial and some eukaryotic cells. Sensor histidine kinases (HKs) recognize external signals, autophosphorylate on their own histidine residues, transfer the phosphoryl groups to their own aspartic acid residues, and subsequently transfer the phosphate signals to a histidine-containing phosphotransmitter (HPt). Finally, response regulators (RRs) receive the phosphate signals on their aspartic acid residues and regulate gene expression directly or by controlling downstream signal transduction pathways. Aspergillus nidulans is a model filamentous fungus that contains 15 HKs, 1 HPt, and 4 RRs [1]. Several HKs have been studied in attempts to characterize the roles of His-Asp phosphorelay systems in A. nidulans. TcsA is important for the formation of conidia during asexual development [2]. Meanwhile, TcsB is an ortholog of Sln1, which is an osmosensor in Saccharomyces cerevisiae [3]. NikA is involved in the sensing of fungicides.

Table S1 lists the primers used for PCR. Error bars represent the standard deviations of at least 3 independent experiments. The ABPU1 strain, which includes a ligD gene deletion to induce efficient homologous recombination, was used [10]. ΔnapA was a kind gift from Dr. T. Mizuno [11]. A set of 14 HK deletion strains was purchased from the Fungal Genetics Stock Center [12].

Construction of a hysA deletion and alcA promoter control strain

A hysA-deletion plasmid, pKK001, was constructed and introduced into ABPU1 by protoplast transformation. The wild-type hysA gene was connected to an alcA promoter to create the plasmid pTM003, which was introduced into the hysA-deletion strain. The details of plasmid construction are described in the Supplementary methods.

Total RNA preparation

Total RNA was prepared, and relative transcription levels were determined by quantitative real-time PCR as described in the Supplementary methods.

Nitro-blue tetrazolium staining

A. nidulans cultivation and nitro-blue tetrazolium (NBT) staining were performed as described previously [9] with some modifications. To observe germling growth, conidia were inoculated in liquid culture on cover glasses and cultivated for 12 h at 37 °C. To observe sporulating hyphae, each strain was grown beneath a cover glass crossed with square holes on agar plates for 2 days at 37 °C. A. nidulans strains growing along the cover glass were drifted and sunk into NBT solution (0.1% Nitro Blue Tetrazolium [Wako], 100 mM sodium phosphate buffer [pH 7.0]). After 4 h in the dark, the blue-colored precipitate, which is the reduction product of NBT by superoxide anion, was observed under a microscope (OLYMPUS BX51).
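The relative transcription levels mentioned in the 'Total RNA preparation' subsection (and plotted in Fig. 1) are conventionally obtained from qPCR threshold cycles with the 2^(-ddCt) method of Livak and Schmittgen; a minimal sketch with hypothetical Ct values, since the actual quantification procedure is given only in the Supplementary methods:

```python
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
    """Fold change by the 2^(-ddCt) method (Livak & Schmittgen)."""
    d_ct_sample = ct_target - ct_reference              # normalize to reference gene
    d_ct_calibrator = ct_target_cal - ct_reference_cal  # e.g. vegetative-growth sample
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for an HK gene vs. a housekeeping gene:
fold = relative_expression(ct_target=24.0, ct_reference=18.0,
                           ct_target_cal=28.0, ct_reference_cal=18.5)
print(f"{fold:.1f}-fold")  # -> 11.3-fold
```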
HysA protein purification

The construction of expression plasmids, cultivation for protein induction, and purification of recombinant HysA HR and HysA H are described in the Supplementary methods.

In vitro autophosphorylation experiment

The in vitro autophosphorylation reaction of purified HysA proteins was carried out according to the method of Azuma et al. [13], and the samples (in loading buffer containing SDS, 6% glycerol, and 0.02% bromophenol blue) were subjected to SDS-PAGE. The gel was dried and analyzed with an imaging scanner (BAS-2500, Fuji Film, Tokyo, Japan).

Expression levels of HK proteins in A. nidulans

To determine the roles of unknown signaling proteins in the cell development of A. nidulans, we analyzed the expression levels of all HK proteins at different stages of cell growth (Fig. 1) [14]. Only a few HKs were expressed during vegetative growth (Fig. 1A), whereas most HKs were expressed with the progression of asexual development, even though their overall expression levels were low (Fig. 1B). These results indicate that His-Asp phosphorelay signal transduction in A. nidulans mainly occurs during asexual development. However, under the restricted oxygen condition, achieved by sealing the plates with tape (the induction of sexual development) (Fig. 1C), only 1 HK, originally called HK8-2, was markedly expressed; this implies a functionally important role of HK8-2 in response to low oxygen. Herein, we refer to HK8-2 as HysA (hypoxia expressed sensor protein A) and continued to analyze its functions along with other characteristic HKs.

Construction of the hysA-deletion strain (ΔhysA)

To determine the function of hysA in A. nidulans, the entire hysA region in the A. nidulans genome was replaced with the auxotrophic marker argB (see Supplementary methods). The deletion was performed in wild-type ABPU1 and confirmed by PCR and Southern blotting (data not shown). Functional growth defects of A. nidulans due to the deletion were investigated by comparison with BPU1, which was constructed by introducing wild-type argB into the ABPU1 strain. However, there were no differences between the BPU1 and ΔhysA strains with respect to growth, even under the restricted oxygen condition (data not shown).

ROS production in HK gene-deletion strains

Since ROS are important signals for cell growth in A. nidulans, as in all organisms [9], we determined whether differences in NBT staining could be detected by microscopy in ΔhysA and in the other HK gene disruptants. After 12 h of incubation at 37 °C in minimal medium liquid culture, cells were stained with NBT, which reacts with superoxide anion, one of the ROS. Wild-type BPU1 produced ROS at the tips of growing hyphae (Fig. 2A), as reported previously [9]. PhkA and PhkB are thought to play roles in oxidative stress responses in A. nidulans because they have functional domains similar to those of Phk1-3, which are histidine kinases that function in response to oxidative stresses in Schizosaccharomyces pombe [1]. However, both ΔphkA and ΔphkB exhibited the same sensitivity to oxidative stress as the wild-type (Fig. 2D) and almost the same level of NBT staining as the wild-type (Fig. 2A). On the other hand, ΔnikA [4,5] and ΔhysA exhibited abnormal NBT staining at various parts of the growing hyphae, indicating that both NikA and HysA are involved in the control of ROS production (Fig. 2A). Approximately 50 growing cells were observed for each strain and categorized into 8 different NBT staining patterns (Fig. 2B, left 1-8, black representing NBT-stained areas).
The resultant distribution of ROS production is summarized in Fig. 2B (right graphs). It indicates that ΔnikA and ΔhysA produce ROS at any part of the cell during germling growth. A. nidulans also developed conidiophores after hyphal growth on minimal medium agar plates. Wild-type BPU1 produced ROS only minimally, in some conidia and aerial hyphae (Fig. 2C). The ΔnikA strain exhibited poor formation of conidia, as described previously [4,5], but still produced ROS at several parts of the hyphae. ΔhysA strongly produced ROS at both developed conidiophores and aerial hyphae (Fig. 2C). The oxidative stress sensitivity of cells is thought to be linked to cellular ROS production. However, in the present study, the ΔhysA strain exhibited the same sensitivity as the wild-type but growth patterns different from those of ΔnikA on the oxidative stress plates. To investigate the effects of other HKs on ROS production, we obtained the 14 HK gene-deletion strains listed in the Fungal Genetics Stock Center (FGSC), except ΔphkA, which was not included in the list [12]. In addition, we observed specific ROS production in the ΔfphA strain (Fig. S1). These are the first findings indicating that HKs are involved in ROS production, as directly observed by NBT staining.

ROS production in the napA gene-disrupted strain

As NapA is a transcription factor that controls several genes for the oxidative stress response, the deletion strain (ΔnapA) exhibited sensitivity to oxidizing reagents [11]. Curiously, the ΔnapA strain exhibited ROS production similar to that of ΔhysA during germling growth (Fig. 2A and B) but not during conidiophore development (Fig. 2C). These results indicate that HysA and NapA play different roles in ROS production during the development of A. nidulans. To determine whether the involvement of HysA in ROS production is dependent on the presence of oxygen in growing cells, we cultivated ΔhysA under the restricted oxygen condition and observed the resultant ROS production. In ΔhysA, ROS levels were reduced mostly in the hyphae (Fig. 2E) but not at the tips. It must be mentioned that the other HK gene-disrupted strains and the ΔnapA strain, as well as the wild-type, did not exhibit different ROS production when oxygen was restricted, indicating that they maintain their characteristic ROS production. Since ROS are mainly generated through respiratory reactions in mitochondria in the presence of oxygen, HysA might be involved in the regulation of mitochondria. These results of NBT staining are summarized in Table 1.

Functional importance of the His-Asp phosphorelay system in the control of ROS production

To test whether His-Asp phosphorelay signal transduction via HysA is essential for controlling ROS production in A. nidulans, we constructed a strain in which hysA expression was controlled under the alcA promoter [15] by introducing the alcA(p)::hysA fusion at the pyroA locus in the ΔhysA strain (OPHysA). The histidine residue at position 566 (His566) of HysA is supposed to be an autophosphorylation site based on homology among HKs, and the aspartic acid residue at position 1134 (Asp1134) is a phosphate-receiving site. HysA HQ and HysA DN mutant strains were also constructed by means of 2 alcA(p)::hysA fusions: alcA(p)::hysAHQ, which included 1 point mutation (His566 → Gln), and alcA(p)::hysADN, which included another mutation (Asp1134 → Asn). These 3 strains were cultivated in minimal medium liquid culture and stained with NBT, and ROS production was observed by microscopy (Fig. 3).
OPHysA produced ROS only at the tips of hyphae, like the wild-type. However, HysA HQ and HysA DN exhibited the same abnormal ROS production as ΔhysA. In this experiment, we cultivated all strains under the non-inducing condition of the alcA promoter (with glucose), because the original expression from the hysA promoter was similar to that from the alcA promoter under this condition (data not shown). The inducing condition of the alcA promoter also resulted in similar NBT staining.

Autophosphorylation of purified HysA protein in vitro

Since it was unclear how HKs transmit phosphate signals in A. nidulans, we first examined the autophosphorylation activity of purified HysA protein in vitro. Plasmids were constructed in order to induce HysA HR and HysA H, i.e., the entire wild-type HysA protein and the HysA protein lacking the RR domain (Fig. 4A). Other plasmids were also prepared in order to introduce an amino acid change at the autophosphorylation site (i.e., His566) in HysA HR and H. The mutant proteins were purified in the same manner as the wild-type proteins (Fig. 4B). The phosphorylation reaction mixture included 2 mM DTT (Fig. 4C, lanes 1-4) or 10 mM GSH (Fig. 4C, lanes 5-8). 32P-labeled protein bands were detected for HysA HR and HysA H (Fig. 4C, lanes 1, 3, 5, and 7) but not for the HysA HR (HQ) or H (HQ) mutant proteins (Fig. 4C, lanes 2, 4, 6, and 8). These in vitro results indicate that HysA autophosphorylates its histidine residue at position 566. The autophosphorylation activities of HysA HR and HysA H were not detected without any reducing reagent (Fig. 4C, lanes 9 and 10). Because HysA controls ROS production, it is possible that redox conditions affect HysA activity.

Evidence of phosphotransfer from HysA to YpdA in vitro

YpdA is a histidine-containing phosphotransmitter in A. nidulans. YpdA (HQ) includes an amino acid change at His85 and does not receive phosphate signals from any histidine kinases or upstream sensor proteins [13]. We examined the in vitro phosphate signal transmission from HysA to YpdA in detail (Fig. 5). When purified HysA HR was mixed with YpdA (Fig. 5A, HysA HR + YpdA), a radiolabeled protein band of YpdA was observed, and this was confirmed by the lack of a band for YpdA (HQ) at the same position as YpdA (Fig. 5A, HysA HR + YpdA(HQ)). HysA H does not include the C-terminal RR domain of HysA HR; therefore, HysA H exhibits autophosphorylation activity (Fig. 4C, lanes 1 and 5) but does not transfer its phosphate signal to YpdA (Fig. 5B, HysA H + YpdA). On the other hand, HysA HR (HQ) includes an amino acid change at the histidine residue for autophosphorylation but still has the ability to receive the phosphate signal at the aspartic acid residue in the C-terminal RR domain. As expected, HysA HR (HQ) did not phosphorylate YpdA (data not shown). However, a radiolabeled protein band of YpdA was detected when it was mixed with both HysA H and HysA HR (HQ) (Fig. 5B, HysA H + HR (HQ) + YpdA), indicating that HysA H can transfer a phosphate signal to YpdA via the RR domain of HysA HR (HQ). These in vitro results collectively indicate that phosphotransfer involving wild-type HysA occurs from His (i.e., the HK domain of HysA) to Asp (i.e., the RR domain of HysA) and then to His (i.e., YpdA).

Conclusions

ROS are generated during cell growth in the presence of oxygen in all organisms. It therefore makes sense that His-Asp signal transduction is involved in ROS production, because it should be an important system that controls cell growth and development.
The lack of a direct link between ROS production and oxidative stress sensitivity was evidenced by the HK deletion mutant strains. HysA might therefore be a key HK controlling ROS production in response to redox conditions. Further studies are required to elucidate signal transduction downstream of HysA-YpdA as well as the genes under the control of HysA.
Hybrid Mesoporous TiO2/ZnO Electron Transport Layer for Efficient Perovskite Solar Cell

In recent years, perovskite solar cells (PSCs) have gained major attention as a potentially useful photovoltaic technology due to their ever-increasing power-conversion efficiency (PCE). The efficiency of PSCs depends strongly on the type of material selected as the electron transport layer (ETL). TiO2 is the most widely used electron transport material for the n-i-p structure of PSCs. Nevertheless, ZnO is a promising candidate owing to its high transparency, suitable energy band structure, and high electron mobility. In this investigation, a hybrid mesoporous TiO2/ZnO ETL was fabricated for a perovskite solar cell composed of FTO-coated glass/compact TiO2/mesoporous ETL/FAPbI3/2D perovskite/Spiro-OMeTAD/Au. The influence of ZnO nanostructures with different percentage weight contents on the photovoltaic performance was investigated. It was found that the addition of ZnO had no significant effect on the surface topography, structure, and optical properties of the hybrid mesoporous electron-transport layer but strongly affected the electrical properties of the PSCs. The best efficiency of 18.24% was obtained for PSCs with 2 wt.% ZnO.

Introduction

Rapid advances in materials technology are creating many novel solutions for energy-efficient applications in solar energy [1-5]. The selection of materials is the foundation of all engineering applications and design. This selection process can be defined by application requirements, possible materials, physical principles, and selection [6-9]. The intended result of the material selection process is the identification of one or more materials with properties that meet the functional requirements of a product. The material selection process is one of the basics of design and engineering [10,11]. The second half of the 20th century brought significant progress in the fields of materials science, nanotechnology, and materials processing. These developments have led to the production of materials targeted at providing solutions in various key areas, including photovoltaics. Progress in this area is additionally driven by the increasing awareness of potential ecological collapse, energy insecurity, and the rising cost of living. Emerging solar-cell technologies that use advanced materials include perovskite solar cells, which are typically built in an n-i-p structure, where:

• n refers to the n-type electron transport layer (ETL),
• i refers to the perovskite optical absorption layer,
• p refers to the p-type hole transport layer (HTL).

The electron transport layer (ETL) in PSCs plays an indispensable role in collecting and transporting photogenerated electron carriers and serves as a hole-blocking layer, thus realizing effective charge separation and suppressing charge-carrier recombination. The ETL in PSCs should possess the following properties [25-31] (a short screening sketch follows this list):

• a proper energy-level alignment with the perovskite layer; the ETL should have a lowest unoccupied molecular orbital (LUMO) and a highest occupied molecular orbital (HOMO) lower than those of the perovskite active layer. A cascading energy structure in the ETL can improve electron transport to the cathode, suppress back recombination, and enhance device effectiveness;
• a wide bandgap to ensure good transmittance in the visible light range;
• high electron mobility (>2.5 × 10⁻⁵ cm² V⁻¹ s⁻¹) for efficient electron transport within the ETL;
• good photochemical stability.
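As a compact illustration of how these criteria translate into a screening check, the sketch below encodes them in Python. The candidate property values are rough literature-style numbers chosen for demonstration, not measurements from this work, and the 3.0 eV cutoff is an assumed proxy for "wide bandgap".

```python
# Illustrative sketch: screening candidate ETL materials against the
# two numeric criteria listed above. All values are approximate
# literature-style numbers used for demonstration only.

CANDIDATES = {
    # name: (band gap [eV], electron mobility [cm^2 V^-1 s^-1])
    "TiO2": (3.2, 1e-4),
    "ZnO":  (3.37, 2.0e2),   # single-crystal value; thin films are lower
    "SnO2": (3.6, 2.4e2),
}

MIN_BANDGAP_EV = 3.0    # assumed proxy for "transparent in the visible range"
MIN_MOBILITY = 2.5e-5   # threshold quoted in the text

def suitable_etl(band_gap_ev: float, mobility: float) -> bool:
    """Return True if a material passes both numeric ETL criteria."""
    return band_gap_ev >= MIN_BANDGAP_EV and mobility >= MIN_MOBILITY

for name, (eg, mu) in CANDIDATES.items():
    print(f"{name}: suitable={suitable_etl(eg, mu)}")
```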
Moreover, the ETL in the n-i-p geometry serves as a nucleation site for the perovskite, which affects the crystal growth and hence the PSC efficiency [32,33]. Nowadays, typical PSCs are generally fabricated with a mesoscopic or planar architecture. In a planar architecture, each layer is deposited as a dense, thin film, while in a mesoscopic architecture, the perovskite is adsorbed on a mesoporous scaffold. The perovskite grain growth is limited by the pore size of the mesoporous layer, but the thicker perovskite layer provides better light harvesting. If the scaffold layer is involved in the electron transfer, it is referred to as active (e.g., TiO2, ZnO, and SnO2) and named a mesoporous electron transport layer; otherwise, it is passive (e.g., SiO2, Al2O3, ZrO2) [43-46]. TiO2 is the most widely used electron-transport material for the n-i-p structure of PSCs. Titanium dioxide nanostructures play a crucial role in the extraction of photoinduced electrons from the perovskite and in their subsequent transport to the electrode, both as a compact and as a mesoporous layer. This semiconductive material has a good band alignment with the perovskite, which enables faster electron injection from the active layer. Nevertheless, TiO2 suffers from low electron mobility and a high defect-state density, which limits the overall device performance [47,48]. A potential n-type semiconducting material is zinc oxide (ZnO) from group II-VI, with a band gap energy of 3.37 eV and an exciton binding energy of 60 meV at room temperature [49,50]. Moreover, ZnO has unique chemical and physical properties, such as good thermal and chemical stability, a high electrochemical coupling coefficient, piezoelectricity, a broad scope of radiation absorption, and high photostability, which provide a wide range of applications in various fields and make it one of the crucial technological materials among all metal oxides. ZnO crystallizes in two main forms, cubic zinc-blende and hexagonal wurtzite (B4). The latter is the most thermodynamically stable crystal structure and the most common under ambient conditions. The zinc-blende form can be stabilized by growing ZnO on substrates with a cubic lattice structure [49,51,52]. Recent studies have shown a variety of ZnO nanostructures, such as nanotetrapods, nanomultipods, nanobelts, nanotubes, nanoparticles, nanoflowers, nanowires, nanorods, nanoribbons, nanorings, nanoneedles, nanosheets, and shuttle- and comb-like structures. Over the last few years, scientists have focused on the fabrication and application of one-dimensional (1D) nanostructured materials, such as nanowires and nanorods, because of their fundamental importance and the wide range of potential applications, e.g., in nanodevices [53-55]. It is essential to fabricate an electron-transport layer with a suitable composition to improve charge-carrier extraction and transportation for achieving a higher efficiency of solar cells. In this study, we introduce a ZnO nanopowder, consisting of nanostructures of various shapes, into a TiO2 solution for the fabrication of the mesoporous electron transport layer of PSCs. The regular (n-i-p) mesoscopic architectures of perovskite solar cells were studied. In order to produce the mesoporous hybrid TiO2/ZnO layer, a precursor was prepared by dissolving TiO2 paste in ethanol. Then, different weight contents of ZnO nanostructures (0, 1, 2, 3, 4 and 8 wt.%) were added to the TiO2 solution.
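Before the experimental details, a minimal sketch of the weight-percent arithmetic behind the ZnO additions; the paste mass below is an arbitrary example value, not a quantity reported in this work.

```python
# Minimal sketch of the weight-percent arithmetic used when adding ZnO
# nanopowder to the TiO2 precursor. The paste mass is an assumed
# example value, not taken from the paper.

def zno_mass_to_add(tio2_mass_g: float, wt_percent: float) -> float:
    """ZnO mass (g) for a given wt.% relative to the TiO2 weight."""
    return tio2_mass_g * wt_percent / 100.0

tio2_mass = 1.0  # g of TiO2 paste solids (assumed example value)
for x in (0, 1, 2, 3, 4, 8):
    print(f"{x} wt.% ZnO -> add {zno_mass_to_add(tio2_mass, x):.3f} g")
```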
The prepared mesoporous ETLs were characterized by scanning electron microscopy, atomic force microscopy, X-ray diffraction, and UV-Vis spectroscopy. The effects of using ZnO nanostructures with various shapes and dimensions on the electrical properties of PSCs were also investigated.

Technology of Perovskite Solar Cells

In the present study, we investigated the perovskite solar cell in a structure of FTO/compact TiO2 (c-TiO2)/mesoporous TiO2 (mp-TiO2) with the addition of ZnO nanostructures/FAPbI3/2D perovskite/Spiro-OMeTAD/Au. The perovskite devices were prepared according to the procedure developed at the Institute of Metallurgy and Materials Science of the Polish Academy of Sciences. Fluorine-doped tin-oxide-coated glass was ultrasonically cleaned sequentially in an aqueous solution of 2% Hellmanex, deionized water, and isopropanol for 5 min in each solvent, and then dried. The dense TiO2 layer was deposited from tetraethyl orthotitanate, dissolved in a mixture of ethanol and hydrochloric acid, using the spin-coating method. Then, the TiO2 thin film was dried at 200 °C and heated at 500 °C. In order to produce the mesoporous layer, a TiO2 precursor solution was prepared by dissolving 30 NR-D paste in ethanol. Different weight contents of ZnO nanostructures (x = 0, 1, 2, 3, 4 and 8 wt.% relative to the weight of TiO2) were added to the TiO2 solution. The spin-coating method was used to create the mesoporous TiO2/ZnO layer. The samples were dried at 200 °C and heated at 500 °C.

The FAPbI3 perovskite precursor was produced in a glovebox filled with nitrogen. The precursor solution was prepared by mixing lead iodide (PbI2), formamidinium iodide (FAI) and methylammonium chloride (MACl) in a co-solvent of DMF/DMSO (4:1 v/v). The FAPbI3 perovskite layer was deposited using the anti-solvent method with ethyl acetate. The prepared perovskite precursor solution was deposited on the meso-TiO2/ZnO-coated substrate at 8000 rpm (with an acceleration of 2000 rpm). Diethyl ether was dropped onto the substrate during the 10th second of the spin-coating. The perovskite film was annealed at 150 °C for 10 min to allow the formation of black-phase FAPbI3. To fabricate the 2D perovskite, a 0.04 M octylammonium iodide (OAI) solution was prepared by dissolving OAI in IPA. The OAI solution was deposited on the perovskite substrate by spin-coating at 3000 rpm for 15 s, and then the substrate was heated at 100 °C.

Spiro-OMeTAD was used as the hole transport layer (HTL) material. Spiro-OMeTAD was dissolved in chlorobenzene and mixed with LiTFSI solution (prepared by dissolving LiTFSI in acetonitrile) and 4-tert-butylpyridine (tBP). The Spiro-OMeTAD solution was spin-coated on the perovskite layer at 2000 rpm for 30 s. After that, the samples were taken out of the glove box, masked, and coated with gold by thermal evaporation. The Au electrodes had a surface of 0.25 cm² and a thickness of approximately 80 nm.

Results and Discussion

Figure 1 presents the morphology of the ZnO nanopowder used to fabricate the hybrid TiO2/ZnO mesoporous electron-transport layer. The ZnO nanopowder consists of nanostructures of various shapes, including nanoparticles and nanorods/nanowires (Figure 1b-e). The EDS analysis of the chemical composition certified the purity of the ZnO nanopowder and showed the presence of the two elements zinc and oxygen. The applied ZnO material is described in detail in refs. [50,54-57].
The XRD analysis carried out for the nanopowder showed sharp crystalline peaks at 2θ angles corresponding to reflections of hexagonal ZnO, the last of them indexed as (021). These peaks indicate the presence of the hexagonal ZnO phase characterized by the P63mc space group (ICDD PDF4+ 98-018-5827) [54]. In addition, selected-area electron diffraction (SAED) was carried out using a transmission electron microscope, which confirmed the XRD results on the structure of the ZnO nanopowder and showed that the examined nanostructures were single crystals [54]. The analysis of the morphology of the tested ZnO semiconductor nanopowder, based on the recorded TEM and SEM images, showed the spherical and oblong shapes of the nanostructures, whose diameters ranged from about 50 nm to 350 nm and whose lengths reached about 500 nm [50,54,55]. Studies of the optical properties of the ZnO nanostructures were made using UV-Vis spectroscopy. Analysis of the absorption spectrum as a function of wavelength showed a sharp absorption edge at a wavelength of 360 nm, while the absorption maximum fell at 340 nm, which is consistent with the results obtained for pure one-dimensional ZnO nanostructures in [56]. The energy gap (Eg) analysis based on the obtained UV-Vis spectrum showed that the investigated ZnO nanostructures were characterized by an Eg value of about 3.2 eV [57].

The surface topography of the deposited hybrid TiO2/ZnO layers was also studied using an atomic-force microscope in non-contact mode (Figure 3). A quantitative representation of the surface topography of the pure mesoporous TiO2 layer and of the layers with the addition of ZnO nanostructures is given by the roughness coefficients: the root mean square (RMS) and the average arithmetic deviation of the profile from the average line (Ra) (Table 1).
The mesoporous TiO2 layer is reflected in the relatively high RMS and Ra values. The AFM images clearly indicate a structure composed of nanoparticles and their agglomerates with sizes above 100 nm, forming three-dimensional complex structures with a high specific surface area. The large specific surface area, in turn, plays an important role in the penetration of the perovskite solar-radiation absorber, and thus improves the efficiency of the final solar cell. It can be seen that the share of nanostructural ZnO nanoadditives does not significantly affect the surface area of the layers obtained in the case of a 1-4% share. It can be noticed that the layer with the highest content of ZnO nanoaddition shows significantly higher values of the Ra and RMS coefficients, which may improve the charge transport in the perovskite device.

It was found that the hybrid mesoporous TiO2/ZnO electron-transport layers with extreme ZnO contents (1% and 8% ZnO) and the one with an intermediate content (3% ZnO) selected for testing are characterized by identical diffraction patterns. This proves the lack of structural differences in the tested layers. Example diffraction patterns obtained for the three samples with different compositions at different GIXD angles are shown in Figure 4 (panel (a): Bragg-Brentano geometry). The top layer (α = 0.1°) does not differ from the layers tested at larger incidence angles, which indicates the lack of additional oxide layers that could have formed on the material's surface due to exposure to the atmosphere.
Figure 5 presents the transmittance plots of the tested mesoporous titanium oxide layers with the addition of zinc oxide nanostructures at 0%, 1%, 2%, 3%, 4% and 8%, deposited on FTO glass with a 70 nm-thick blocking TiO2 layer. It was found that the incorporation of ZnO nanostructures into the mesoporous TiO2 layer does not significantly affect light transmission. All produced layers show a transmittance above 60% for wavelengths in the range of 366-900 nm. It was observed that for higher contents of ZnO (2%, 4% and 8%), there is a peak shift for wavelengths below 500 nm. This may indicate an increase in the thickness of the spin-coated layers. The Tauc plot method was used to determine the band-gap energy of the manufactured films. The absorption coefficient (α) was calculated from the formula:

α = −ln(T/(1 − R))/d

where d is the thickness of the tested layer, T the transmission at a given wavelength, and R the reflection coefficient. In the Tauc plot, the direct optical band gap can be determined from the intercept of the leading-edge linear extrapolation with the hν axis, as displayed by the lines in Figure 6. The calculated Eg is around 3.78 eV for all tested samples. This response is dominated by the FTO or glass substrate, but no important differences were visible for any of the ZnO-addition layers. The analysis of the transmittance spectra and the extracted optical band gaps suggests that the addition of ZnO nanostructures to the mesoporous TiO2 does not worsen the optical properties but might even be slightly beneficial. It confirms that a hybrid TiO2/ZnO layer is a promising candidate as the mesoporous ETL, owing to its high transparency and an appropriate band gap that is well fitted to the energy structure of the perovskite solar cell.
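A sketch of the Tauc analysis described above is given below in Python. The transmittance and reflectance spectra, the film thickness, and the edge-selection thresholds are synthetic assumptions used only to demonstrate the α = −ln(T/(1 − R))/d formula and the leading-edge extrapolation; they are not the measured data of this study.

```python
import numpy as np

# Sketch: band-gap extraction via the Tauc method described above.
# T, R, and the thickness d are synthetic placeholders.
h_eV_s = 4.135667e-15        # Planck constant [eV*s]
c = 2.998e8                  # speed of light [m/s]

wavelength_nm = np.linspace(300, 900, 601)
E = h_eV_s * c / (wavelength_nm * 1e-9)      # photon energy [eV]

# Synthetic direct-gap absorption: (alpha*E)^2 = A*(E - Eg) above the edge.
A, Eg_true, d = 1e14, 3.78, 500e-9           # d = assumed film thickness [m]
alpha_true = np.sqrt(np.clip(A * (E - Eg_true), 0.0, None)) / E
R = np.full_like(E, 0.10)                    # assumed constant reflectance
T = (1.0 - R) * np.exp(-alpha_true * d)

# Formula from the text: alpha = -ln(T / (1 - R)) / d
alpha = -np.log(T / (1.0 - R)) / d

# Tauc plot for a direct gap: (alpha*h*nu)^2 vs h*nu; fit the leading
# edge linearly and extrapolate to the energy axis.
y = (alpha * E) ** 2
edge = (y > 0.1 * y.max()) & (y < 0.6 * y.max())
slope, intercept = np.polyfit(E[edge], y[edge], 1)
print(f"extracted Eg = {-intercept / slope:.2f} eV")   # ~3.78 eV
```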
The measurement of the current-voltage (J-V) characteristics is the most important step for quality control and optimization of the fabrication process in the research and industrial production of solar cells. A comparison of the J-V characteristics of the PSCs with a hybrid mesoporous TiO2/ZnO layer is shown in Figure 7.
The electrical properties, such as the power conversion efficiency (PCE), fill factor (FF), short-circuit current density (Jsc) and open-circuit voltage (Voc), of the fabricated PSCs are summarized in Table 2. In this study, four series of perovskite solar cells were fabricated. After the first series, PSCs with an ETL layer containing 8% ZnO were abandoned due to a significant decrease in efficiency.

It was found that the incorporation of ZnO nanostructures into the mesoporous TiO2 layer has a major impact on the efficiency of PSCs. The highest efficiency of 18.24% was demonstrated by solar cells with the addition of 2% ZnO, an increase of 1.13% compared to devices with a pure TiO2 mesoporous layer. The obtained results indicate that the efficiency of the PSCs is mainly determined by the open-circuit voltage. The addition of ZnO can result in better band alignment with the perovskite, which enables faster electron injection from the active layer. The highest fill factor of 0.67 was obtained for PSCs with pure TiO2 and with the addition of 2% ZnO nanostructures. The short-circuit current density rises slightly (by 0.3 mA/cm²) with an increasing amount of ZnO addition up to 2%, from 25.09 mA/cm² (without ZnO) to 25.39 mA/cm² (2% ZnO). This may indicate that the formation of local TiO2-ZnO heterostructures promotes faster electron transport and increases the number of carriers. Despite the significant improvement in efficiency by the addition of ZnO, it was noted that an excessive concentration deteriorated the photovoltaic performance of the PSCs. This may be because a higher ZnO concentration increases the resistance to electron transport and charge transition, which promotes the charge-recombination process.
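To make the relationship between these quantities concrete, the sketch below computes Jsc, Voc, FF, and PCE from a J-V curve. The curve itself is an idealized single-diode placeholder: only the photocurrent level is loosely based on the ~25.4 mA/cm² quoted above, and the saturation current is an assumed value, so the resulting numbers are illustrative rather than the device data of this study.

```python
import numpy as np

# Sketch: extracting Jsc, Voc, FF and PCE from a J-V characteristic.
# The curve is a synthetic ideal-diode shape; only the photocurrent is
# loosely based on the values quoted above.

q_over_kT = 1 / 0.02585          # 1/V at room temperature
J0 = 1e-12                        # saturation current [mA/cm^2], assumed
Jph = 25.4                        # photocurrent [mA/cm^2], from the text

V = np.linspace(0, 1.2, 1000)                # voltage [V]
J = Jph - J0 * (np.exp(q_over_kT * V) - 1)   # ideal single-diode model

Jsc = J[0]                                    # current at V = 0
Voc = np.interp(0.0, J[::-1], V[::-1])        # voltage where J crosses zero
P = V * J                                     # power density [mW/cm^2]
Jmp, Vmp = J[np.argmax(P)], V[np.argmax(P)]   # maximum power point
FF = (Jmp * Vmp) / (Jsc * Voc)

P_in = 100.0                                  # AM1.5 input [mW/cm^2]
PCE = Jsc * Voc * FF / P_in * 100             # [%]
print(f"Jsc={Jsc:.2f} mA/cm^2, Voc={Voc:.3f} V, FF={FF:.2f}, PCE={PCE:.2f} %")
```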
As mentioned earlier, four series of perovskite solar cells were prepared. Figure 8 presents the repeatability and reproducibility analysis of the manufactured perovskite solar cells by means of box-and-whisker plots. It was observed that the highest repeatability of FF and efficiency was found for the PSCs with 1% and 3% ZnO, respectively. Solar cells with the addition of 3% and 4% ZnO show a noticeable dispersion in the obtained short-circuit current density compared to the other devices. However, the standard deviation for them does not exceed 1.99%. Moreover, the close distribution of the Jsc and Voc values (<2.42%) may also indirectly indicate the homogeneity of the deposited layers. The values of the electrical properties of the devices showed only a slight deviation from the average, which proves the high repeatability of the produced PSCs.

X-ray measurements of selected samples containing 1%, 3% and 8% (by weight) of ZnO were performed on a PANalytical Empyrean diffractometer (Malvern, UK), using copper radiation (Cu Kα = 1.5418 Å) and a PIXcel counter. The diffraction patterns of each sample were measured in the Bragg-Brentano geometry (with a substrate measurement as a reference) and in the Grazing Incidence X-ray Diffraction (GIXD) geometry at different angles of incidence of the X-ray beam, for the analysis of the individual layers. The effective penetration depth z was estimated from the standard expression for asymmetric diffraction,

z = sin α · sin(2θ − α) / [µ (sin α + sin(2θ − α))]

where z is the effective penetration depth, µ the sum of the linear X-ray absorption coefficients of the chemical compounds present in the tested material, α the angle of incidence of the X-ray beam, and 2θ the angle at which the diffraction peak was recorded. The depths were estimated taking into account the copper radiation, the appropriate angle of incidence of the radiation, and the absorption coefficients of the components of the layers (Table 3). The estimated radiation-penetration depths may differ from the actual ones due to the porosity of the material, the heterogeneity of the layers, and local differences in material density. For this reason, measurements were made for a range of angles, not limited to the expected layer thicknesses.
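A small numeric sketch of this estimate follows. Because the printed equation was lost in extraction and is rendered above as the standard 1/e attenuation depth for asymmetric diffraction, and because the absorption coefficient and the 2θ value below are assumed example numbers rather than entries from Table 3, the output should be read as an order-of-magnitude illustration only.

```python
import math

# Sketch of the effective-penetration-depth estimate for GIXD geometry.
# The formula is the standard 1/e attenuation depth for asymmetric
# diffraction, reconstructed from the variables defined in the text;
# mu and two_theta are assumed example values.

def penetration_depth_um(mu_per_cm: float, alpha_deg: float,
                         two_theta_deg: float) -> float:
    """Effective depth z (um) at which the diffracted intensity falls to 1/e."""
    a = math.radians(alpha_deg)
    b = math.radians(two_theta_deg - alpha_deg)   # exit angle
    z_cm = math.sin(a) * math.sin(b) / (mu_per_cm * (math.sin(a) + math.sin(b)))
    return z_cm * 1e4

mu = 500.0         # cm^-1, assumed effective value for the stack (Cu K-alpha)
two_theta = 25.3   # degrees, e.g. near the anatase TiO2 (101) line (assumed)
for alpha in (0.1, 0.5, 1.0, 3.0):
    print(f"alpha={alpha:>4}°  z ≈ {penetration_depth_um(mu, alpha, two_theta):.3f} µm")
```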
A Lambda 950 S UV-Vis spectrophotometer from Perkin Elmer (Waltham, MA, USA) was used to determine the optical properties of the manufactured hybrid TiO2/ZnO electron-transport layers in the wavelength range of 300-900 nm. The electrical parameters of the manufactured perovskite solar cells with a hybrid mesoporous TiO2/ZnO electron transport layer were characterized by measurements of the current-voltage (I-V) characteristics using a PV Test Solutions Tadeusz Zdanowicz Solar Cell I-V Tracer System (Wroclaw, Poland), a Keithley 2400 source meter (Cleveland, OH, USA), and a Photo Emission Tech (Ventura, CA, USA) AAA-class solar simulator under standard AM 1.5 radiation and a light intensity of 1000 W/m².

Conclusions

Due to growing energy consumption, uncertainty about its supply, and the limited resources of fossil fuels, it is becoming necessary to convert energy from renewable sources. Solar power is an important resource because of its inexhaustibility and pollution-free character. Moreover, solar energy is the cleanest and most abundant renewable energy source available in the world. The answer to these challenges lies in solar cells that directly convert solar energy into electricity. Perovskite solar cells have attracted tremendous attention thanks to their good stability and rising power-conversion efficiency. In this paper, TiO2 nanoparticles with the addition of various ZnO nanostructures, including nanoparticles and one-dimensional nanoadditives such as nanorods and nanowires, were proposed as a hybrid mesoporous electron transport layer for PSCs. The introduction of ZnO nanostructures with various shapes and dimensions into the mesoporous TiO2 layer does not negatively affect its transparency. X-ray diffraction analysis does not reveal a change in the structure of the layers with the addition of ZnO nanopowder (1, 3, 8 wt.%). The highest efficiency of 18.24% was obtained for perovskite solar cells with the addition of 2 wt.% ZnO, which is 1.13% higher compared to devices without ZnO nanostructures. This is probably due to the higher electron mobility of ZnO compared to TiO2 and the smooth path for electron transport in one-dimensional structures. The addition of ZnO can result in better band alignment with the perovskite, which enables faster electron injection from the active layer. The statistical analysis showed good reproducibility and repeatability of the photovoltaic parameters of the manufactured devices. These results indicate that the ZnO/TiO2 composite is a promising candidate as a mesoporous electron transport layer for high-performance mesoscopic perovskite solar cells.
Rationale for treating oedema in Duchenne muscular dystrophy with eplerenone

Recently we reported that a cytoplasmic sodium overload causes a severe osmotic oedema in Duchenne muscular dystrophy (DMD). Our results suggested that this dual overload of sodium ions and water precedes the dystrophic process and persists until fatty muscle degeneration is complete. The present paper addresses the questions as to whether these overloads are important for the pathogenesis of the disease, and if so, whether they can be treated. As a first step, we investigated the effects of various diuretic drugs on a cell model of DMD, i.e. rat diaphragm strips previously exposed to amphotericin B. We found that both carbonic anhydrase inhibitors and aldosterone antagonists were able to repolarise depolarised muscle fibres. Since carbonic anhydrase inhibitors are known to have acidifying effects, which might be detrimental to the ventilation of DMD patients, we mainly concentrated on the modern spironolactone derivative eplerenone. This drug had a very high repolarizing power, the parameter we consider most relevant for a beneficial effect. In a pilot study we administered this drug to a 22-yr-old female DMD patient who was bound to an electric wheelchair and had had no corticosteroid therapy before. Eplerenone decreased both the cytoplasmic sodium and the water overload and increased muscle strength and mobility. We conclude that eplerenone has beneficial effects on DMD muscle. In our opinion the cytoplasmic oedema is cytotoxic and should be treated before fatty degeneration takes place.

Introduction

A very conspicuous pathognomonic sign of early stages of Duchenne muscular dystrophy (DMD) is the disproportion between the size and strength of the skeletal muscles, which is most prominent in the enlarged calves. Duchenne himself addressed this sign in all his descriptions of the disease in the decade of 1860-1870, and finally decided to name the disease "paralysie musculaire pseudo-hypertrophique" (1). Today, 150 years later, the reason for and origin of this enlargement are still a matter of debate. Previously, a muscle oedema that could contribute to the enlargement was reported (2). Etiologically it was widely attributed to an interstitial inflammation. Our own group became interested in this problem after we had identified an increased sodium and water content in chronically weak muscles of patients having hypokalemic periodic paralysis (HypoPP) (3). In our search for oedemas we used Magnetic Resonance Imaging (MRI) with a Short-Tau Inversion Recovery (STIR) 1H MR sequence. We also tackled the question as to whether these oedemas were of an interstitial or an intracellular kind by equipping our set-up with a 23Na MRI Inversion Recovery sequence (Na-IR), which partially suppresses the signal raised by free sodium in the extracellular fluid and thus mainly represents cytoplasmic sodium (4). In the HypoPP patients the nature of the oedema turned out to be cytotoxic. With our experience in paramyotonia and HypoPP patients, we applied the same MRI techniques to Duchenne patients in order to unravel the question of pseudo-hypertrophy in this disease. A pilot study on 11 DMD patients suggested that in this disease, too, the muscular sodium and water content is increased, thus causing an osmotic oedema (5). The main aim of the present study was to find a drug for the treatment of the oedema in DMD.
Again, we were guided by our findings with periodic paralysis patients: chronic weakness of HypoPP patients was improved by acetazolamide (3), while episodic weakness of HyperPP patients was relieved by both hydrochlorothiazide and acetazolamide (6). Acetazolamide repolarised electrically depolarised muscle fibres, and this in vitro effect was considered to be responsible for the in vivo effects on MRI and on muscle strength (3). Since acetazolamide is a carbonic anhydrase inhibitor, it exerts acidifying effects resulting in respiratory depression. Therefore carbonic anhydrase inhibitors might be contraindicated in DMD. Similarly inappropriate might be hydrochlorothiazide because of its K+-wasting effects, which would contribute to muscle weakness. Therefore we searched for another diuretic agent. Guided by the experience that spironolactone has favourable effects on episodic (7) and chronic weakness (3) in HypoPP, an aldosterone antagonist was taken into consideration. As eplerenone has a higher affinity to the mineralocorticoid receptor and a lower affinity to sexual hormone receptors than spironolactone, it was chosen for further testing. Before administering eplerenone to a patient, we first tested the repolarizing drug on a cellular DMD model. Since the results with the model were very promising, we treated the marked oedema of a female, wheelchair-bound DMD patient who had never had corticosteroid medication.

Patients

A 24-yr-old female patient with genetically proven DMD gave written informed consent to treatment with eplerenone. The study was approved by the local review board and conducted according to the Declaration of Helsinki in its present form. To determine the time course of the ion and water imbalance until dystrophy, the results published by Weber et al. (5) on 10 DMD boys were revisited.

MR imaging protocol

The imaging protocol of the lower legs comprised axial T1-weighted turbo spin-echo sequences for the detection of fatty muscle degeneration and axial short-tau inversion recovery (STIR) 1H MR sequences for the identification of the oedema. The muscle oedema signal was normalized to the background signal. A 23Na pulse inversion recovery sequence weighted the sodium signal towards intracellular 23Na by partially suppressing the signal received from the extracellular space (4). Two reference phantoms were additionally investigated for control reasons. One was filled with 51.3 mM NaCl solution to mimic Na+ with unrestricted mobility (e.g. within extracellular fluid), the other was filled with 51.3 mM NaCl in 5% agarose to mimic Na+ with restricted mobility as in the myoplasm. For normalization of the 23Na signals, the values of the soleus muscles were divided by the signal intensity of the agarose in which the NaCl was trapped. The cross-sectional area of the calves was measured on T1-weighted MR images using a predefined tool which calculates the area when the boundaries are outlined (Picture Archiving and Communication System, PACS). The area contained not only muscle tissue but also the oedema as well as the tibial and fibular bones, and excluded subcutaneous fat tissue (8).
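A minimal sketch of these two normalizations follows, assuming ROI pixel arrays have already been extracted from the 23Na-IR and STIR images; all signal values below are simulated placeholders, not patient data.

```python
import numpy as np

# Sketch of the signal normalization described above. The arrays stand in
# for ROI pixel values from the MR images; all numbers are placeholders.

rng = np.random.default_rng(0)
soleus_na   = rng.normal(180, 15, 200)   # 23Na-IR signal, soleus ROI
agarose_na  = rng.normal(120, 10, 200)   # reference: 51.3 mM NaCl in 5% agarose
stir_muscle = rng.normal(900, 60, 200)   # STIR signal, muscle ROI
stir_bg     = rng.normal(150, 20, 200)   # STIR background ROI

# 23Na signal normalized to the agarose phantom (restricted-mobility Na+)
na_ratio = soleus_na.mean() / agarose_na.mean()

# Oedema (water) signal normalized to the background signal
oedema_ratio = stir_muscle.mean() / stir_bg.mean()

print(f"normalized 23Na signal: {na_ratio:.2f}")
print(f"normalized STIR signal: {oedema_ratio:.2f}")
```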
Measurement of resting membrane potentials on excised rat muscle specimens

Female Wistar rats were sacrificed by CO2 asphyxiation and their diaphragms removed and divided into several strips, each intact from tendon to tendon. The strips were prepared and stored in a solution containing 108 mM NaCl, 4.5 mM KCl, 1.5 mM CaCl2, 0.7 mM MgSO4, 26.2 mM NaHCO3, 1.7 mM NaH2PO4, 9.6 mM Na-gluconate, 5.5 mM glucose, and 7.6 mM sucrose. At 37°C the pH was adjusted to 7.4 by gassing with 95% O2 and 5% CO2; the osmolality was adjusted to 290 mosmol/l by NaCl variation. Before eplerenone testing was started, the specimen to be tested next was incubated for 30 min in a solution that differed from the dissecting solution by containing 6 instead of 4.5 mM KCl. This high K+ concentration allowed the fibres to take up potassium. The specimen was then incubated in the experimental solution for another 30 min. In accordance with previous measurements (3), the experimental solution contained amphotericin B (10 µM) as a cation ionophore to cause a bimodal distribution of resting membrane potentials, i.e. polarised and depolarised, and to mimic dystrophic muscle. As the amphotericin B-induced sodium leak causes a subsarcolemmal ATP depletion, KATP channels become activated, thereby causing a repolarization of an unpredictably large fraction of the specimen. To avoid this KATP channel activation, the specific KATP channel blocker glibenclamide (4 µM) was added to the experimental solution. The K+ concentration was 2.0 mM to facilitate the paradoxical depolarization. To test the effect of eplerenone on the resting membrane potential Em and to determine its EC50, various eplerenone concentrations were used. Independent of the given eplerenone concentration, the solvent DMSO was always present in the experimental solution at a constant concentration of 2 ml/l.

Em was recorded at room temperature, using microelectrodes (7-11 MΩ) and a voltage amplifier. Histograms of the potentials were smoothed by density estimation. The potentials exhibited a two-peak distribution of polarised fibres (defined as -70 mV and more negative) and depolarised fibres (-69 mV and less negative), displayed as probability density. Concentration-response curves were fitted to the measured potentials according to the equation

frf = frf_min + (frf_max − frf_min) / (1 + 10^((logEC50 − log[drug]) · n))

with frf as the fraction of repolarised fibres, frf_max and frf_min as the maximum and minimum effects, and n as the Hill coefficient. In all statistical tests, an effect was considered statistically significant if the p-value was 0.05 or less. Results were expressed as mean ± standard deviation (SD) for normally distributed data.
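The sketch below illustrates this analysis pipeline under stated assumptions: fibres are classified as polarised by the −70 mV criterion, and the equation above is fitted with SciPy, with [drug] entering as the base-10 logarithm of the concentration. The membrane potentials are simulated, so the fitted value only demonstrates the procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the concentration-response analysis described above.
# Membrane potentials are simulated placeholders.

def frf_model(log_conc, frf_min, frf_max, log_ec50, n):
    """Fraction of repolarised fibres vs log10 drug concentration."""
    return frf_min + (frf_max - frf_min) / (1 + 10 ** ((log_ec50 - log_conc) * n))

rng = np.random.default_rng(1)
concs_mg_l = np.array([1, 3, 10, 15, 30, 60, 100], dtype=float)

frf_obs = []
for c in concs_mg_l:
    p_polarised = frf_model(np.log10(c), 0.3, 0.7, np.log10(15), 1.0)
    # ~35 fibres per strip: polarised if Em <= -70 mV (the text's criterion)
    em = np.where(rng.random(35) < p_polarised,
                  rng.normal(-80, 4, 35),    # polarised population
                  rng.normal(-55, 5, 35))    # depolarised population
    frf_obs.append(np.mean(em <= -70.0))

popt, _ = curve_fit(frf_model, np.log10(concs_mg_l), frf_obs,
                    p0=[0.2, 0.8, 1.0, 1.0])
print(f"fitted EC50 ≈ {10 ** popt[2]:.1f} mg/L")
```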
Results

The history and all medical reports of the female patient are given in the Appendix. A severe clinical course requiring spondylodesis (Fig. 1) and assisted ventilation at night, a chromosome X-to-17 translocation (breakpoint at Xp21), and immunohistochemistry showing a mosaicism had led to the diagnosis of DMD. As no medication had yet been administered, the therapy options (glucocorticoids such as prednisolone or deflazacort versus the steroid eplerenone) were thoroughly discussed. The patient preferred the off-label use of eplerenone, which was administered in daily alternating doses of 25 and 50 mg. The usual dose of 50-100 mg/d was avoided because of her low body weight of 35 kg. Originally we had planned a 4-week trial; however, the absence of adverse effects and the general well-being under medication made the patient continue the off-label use. At the time of submitting this paper she is still taking eplerenone, i.e. for 13 months now. All involved experts agreed to this extension, since clinical re-assessments showed stable or improving cardiac, respiratory and motoric functions.

Clinical and MRI assessment of the female DMD patient before treatment (at age 22)

Before medication was started, the female adult was examined clinically and by conventional and specific MRI. The DMD patient presented with a very low body mass index of 11, a severe generalized muscle atrophy including the facial muscles, scapulae alatae, swallowing difficulties, and a borderline ability to stand or walk. The circumferences of the arms and forearms were smaller on the right (15 and 14 cm) than on the left side (21 and 19 cm). The motoric abilities were just sufficient for a safe transfer from wheelchair to bed and vice versa. All limb muscles were weaker on the right side (proximally MRC 2, distally MRC 3) than on the left (proximally MRC 3, distally MRC 4). The deep tendon reflexes could be elicited only in the triceps surae muscles. Details on the clinical status are given in the Appendix. In keeping with the asymmetric clinical impairment, the MR images of the lower legs displayed major side differences regarding degeneration and oedema (Fig. 2). Both the sodium signals and the water content were strikingly increased (Fig. 2, Table 1).

Clinical features and MRI following treatment with eplerenone for 5 and 11 months

Beyond the effects of eplerenone on muscle sodium and oedema, the following clinical alterations were documented at the assessment following treatment with eplerenone for 11 months: the patient reported no adverse effects; speaking was improved; the circumferences of both forearms were increased by 1 cm, while those of both the upper and lower legs were smaller by 1 cm; of the 19 muscle groups whose strength was evaluated manually, 10 were slightly stronger (by 1-2 MRC grades) and 4 were slightly weaker (by 1 MRC grade), while the remaining 5 were unaltered. A handgrip dynamometer revealed a constant maximum grip force of 20 kg at the three MRI appointments. The serum potassium was always in the normal range. The total protein and quantitative albumin were at the upper normal limit; however, protein was not determined at the first assessment. In the MRI of the lower legs, both the sodium and the water content displayed some improvement after 5 and 11 months of the treatment with eplerenone (Fig. 2, Table 1). In concordance with the MRI changes, muscle strength was slightly increased (Table 1).

Re-visiting the results of the DMD boys reported by Weber et al. (2011)

Of the 10 boys published by Weber et al. (5), all were older than 5 years. They all presented with a severe cytoplasmic sodium and water overload. The youngest boy presenting with obvious fatty muscle degeneration of the triceps surae muscles in the T1-weighted 1H-MR images was aged 9, while all boys without degeneration were younger than 8 years. Despite the absence of dystrophy, they exhibited a reduced extensibility of the hip, knee, and ankle joints and reduced leg abduction due to flexor muscle weakness and muscle contractures. The lower legs of one of them, aged 7, are shown by T1-weighted 1H-MRI, T2-weighted STIR 1H-MRI, and 23Na-MRI (Fig. 3).

Figure 2. MRI of the lower legs of the female DMD patient without and with eplerenone. In the T1-weighted MRI sequences (1H T1-w), the patient's muscles showed a moderate fatty degeneration with a markedly pronounced (pseudo)hypertrophy before treatment with eplerenone (left).
The fat-suppressed MRI sequences (1H STIR) displayed an oedema that was reduced after treatment for 11 months (right). The 23Na inversion recovery sequence exhibited higher signal intensities before treatment (left). The quantitative values for the oedema and Na signals are given in Table 1. Note that the right side of the patient is more affected than the left (smaller circumference, pronounced oedema). The bright circles in the 23Na-IR images reflect the 23Na signal of 51.3 mM NaCl trapped in 5% agarose, while the signal of the contralateral tube containing an equimolar NaCl solution is suppressed in the 23Na-IR sequence but bright in the 1H-STIR sequence.

Effects of eplerenone and carbonic anhydrase inhibitors on resting membrane potentials Em

In the presence of subtherapeutical concentrations of eplerenone, the rat diaphragm muscle fibres kept in a solution for the dystrophic muscle model (see Methods) revealed a broad distribution of stable membrane potentials (Fig. 4A). Most fibres (70%) were depolarised (Em less negative than -70 mV) and paralyzed as a consequence of the depolarization (membrane inexcitability due to inactivation of the voltage-gated sodium channels) (3). The remaining fibres had resting potentials of -70 mV or more negative (polarised fraction). The addition of eplerenone in therapeutic concentrations shifted many fibres from the depolarised to the polarised state and thereby increased the fraction of polarised fibres (Fig. 4B). The concentration-response curve showed half-maximal effects (EC50) at approximately 15 mg/L eplerenone (36 µM, Fig. 4C).

Since acetazolamide had been found to repolarise depolarised fibres and to increase twitch force (3), we also tested dichlorophenamide, a carbonic anhydrase inhibitor considered to have a higher potency. A comparison of the repolarisation power of the three drugs showed that eplerenone (70%) was superior to dichlorophenamide (60%) and acetazolamide (50%). The EC50 values were 40 µM for acetazolamide and about 25 µM for dichlorophenamide (Fig. 4D). As the first membrane potential measurements of the strips did not differ from values measured much later, the effects of all three drugs must have occurred within the preincubation period of 30 minutes.

Figure 3. MRI of the lower legs of a DMD boy and a control, both aged 7. As in other DMD boys ≤ 7 years, the muscles revealed no fatty degeneration in the T1-weighted 1H-MRI sequence (1H T1-w). Compared to the control, his muscles are (pseudo)hypertrophic. As in all other DMD boys of any age, the fat-suppressed STIR 1H-MRI sequence displayed a marked oedema (1H STIR). The 23Na inversion recovery sequence exhibited markedly higher signal intensities in the muscles of the DMD boy than in the control (23Na-IR). The reference tubes show the same behaviour as in Fig. 2.

Figure 4 (caption, in part). Note that the addition of eplerenone in therapeutic concentrations shifted many fibres from the depolarised to the polarised state and thereby increased the fraction of polarised fibres. The eplerenone exposure started 30 min prior to the first potential measurement and remained for about one hour, the time required for the measurement of a strip. C: Histograms of the fibres in the polarised state for various eplerenone concentrations, yielding an EC50 of about 15 mg/L. D: Concentration-response curves for acetazolamide and dichlorophenamide (3-8 strips of 35 fibres each for each concentration). Note that the fraction of polarised fibres is lower than for eplerenone.

Discussion

The role of the oedema in DMD and its reduction by eplerenone

The marked oedema that is already detectable in boys before fatty degeneration takes place may at least partially explain the typical pseudohypertrophy of DMD calves. This "paralysie musculaire pseudohypertrophique" describes the disproportion between the size and strength of the muscles (1). The oedema is mainly caused by the elevated cytoplasmic Na+ concentration and is therefore osmotic (5) and cytotoxic, and not primarily interstitial-inflammatory as usually assumed. The term 'transient oedema' should be avoided, since the oedema is regularly observed, even in older DMD boys, as long as the muscle tissue has not been completely replaced by fat and fibrosis (5).

DMD is characterized by a gonosomal mode of transmission, and therefore female DMD patients have rarely been reported (9). While most heterozygous female carriers of DMD mutations are asymptomatic, a few initially present with mild thigh weakness, myalgia or muscle cramps, whereby a later onset of symptoms suggests a less severe disease (9). In the present patient, the early onset was in agreement with a severe DMD phenotype, although a constant asymmetry in muscle atrophy and weakness is unusual for this diagnosis. The pronounced asymmetry in this female patient may be related to the mosaic pattern of muscle dystrophin.

The treatment of our female patient with eplerenone resulted in a reduction of the strikingly increased cytoplasmic sodium and water signals and in increased strength and mobility. The reduction of the circumferences of the legs could reflect a decreased muscle mass or a decreased oedema. As the leg muscles became rather stronger than weaker and showed much less water content and no progression of the dystrophy, we interpreted the reduced circumferences as the result of washing out the oedema.
Rapid effects of eplerenone on muscle in vitro

The endogenous and exogenous ligands of the mineralocorticoid receptor have long been known to regulate sodium-potassium homeostasis in the kidney, colon and salivary glands by transcriptional and translational effects on the genes encoding the Na+/K+-ATPase (10) and the epithelial sodium channel, ENaC. Recently, additional non-genomic effects of the ligands of the mineralocorticoid receptor in skeletal muscle via kinases have been reported (11). The in vitro effects of eplerenone described here are, to our knowledge, the first examples of both a direct and rapid (within a few minutes) effect of a mineralocorticoid receptor antagonist on a tissue, and moreover on skeletal muscle, which has never before been discussed as a potential target of aldosterone or its antagonists. Eplerenone at the EC50 repolarised more fibres than acetazolamide or dichlorophenamide. Acetazolamide has been found to improve muscle strength, cytoplasmic sodium overload and oedema in HypoPP patients (3), and dichlorophenamide is currently being tested in a phase III trial on periodic paralysis patients.

An increased sodium conductance is primarily responsible for the depolarised membrane in both our cell model and DMD muscle (3,12); the consequence is a cytoplasmic sodium overload that, if the sodium accumulation is osmotically relevant, causes an oedema. A membrane repolarization should recover the ion and water homoeostasis and reconstitute membrane excitability and force.
A possible endogenous mechanism to reduce the sodium overload in muscle may be the sodium-proton exchanger (NHE). In the heart muscle, NHE was identified as a mineralocorticoid-regulated inducer of inflammation and fibrosis (13). This regulation contributes to the beneficial effects of the steroid receptor blocker spironolactone, which preserved cardiac and skeletal muscle function in mdx mice (14). The effects of eplerenone on the resting potential of cells mimicking DMD and on our patient might be pathogenetically related to the effects of spironolactone.

Rationale for the preference of eplerenone over spironolactone

Although both drugs are aldosterone antagonists, there are striking differences. Eplerenone, a 9α,11α-epoxy derivative of spironolactone, is an effective and selective mineralocorticoid receptor antagonist. It is approved by the FDA for left-ventricular dysfunction following myocardial infarction. As an add-on to optimal medical therapy for patients with acute myocardial infarction complicated by left ventricular dysfunction, it reduced morbidity and mortality (15). Other potential indications are antifibrotic effects in cardiac and smooth muscle, sarcopenia, cardiomyopathies, arterial hypertension, atherosclerosis, hepatic fibrosis, panic attacks, and cognitive impairment (16-21). Although eplerenone has not yet been tested for antifibrotic effects on skeletal muscle, the positive results on cardiac and smooth muscle support a putative beneficial effect. While spironolactone and its active metabolites (e.g. canrenone) have a high affinity to progesterone and androgen receptors, eplerenone has a very low affinity to these and other steroid receptors (22). This lesser affinity was achieved by replacing the 17-alpha-thioacetyl group of spironolactone with a carbomethoxy group (23). The reduced affinity to progesterone and androgen receptors makes eplerenone very appropriate as a drug for DMD. Eplerenone is further distinguished from spironolactone by its shorter half-life and the fact that it does not have any active metabolites. In the absence of liver dysfunction, and at doses not exceeding 100 mg/d, the risk of hyperkalemia is much lower than with spironolactone (15). We conclude that eplerenone is a promising treatment in DMD, either as an alternative or as an add-on to glucocorticoids.

History and medical reports of the female patient

The female DMD patient had achieved unassisted free walking as early as at the age of 11 months. However, she did not like to walk, as she had frequent falls and difficulties in getting up. At age 2 she frequently complained of muscle pains, particularly in the thighs, during walking and even more so during longer periods of sitting. At age 6 she had a fracture of the forearm with subsequently reduced muscular force and atrophy. She did not participate in gymnastics at school. The neuropaediatric examination at age 7 revealed a marked pseudo-hypertrophy of the calves, frequent falls when running and, subsequently, Gowers' manoeuvre. Heel gait was not possible; the muscle stretch reflexes were weak. Serum CK was 1,852 U/l, LDH 431 U/l, GOT 58 U/l, GPT 126 U/l. The EMG displayed fibrillations, markedly shortened motor unit potentials, and a reduced amplitude at maximal innervation. Histology, enzyme histochemistry and morphometry of the muscle identified muscle fibre atrophy, hypertrophic fibres, fibrosis, necrosis, phagocytosis, and central nuclei, in agreement with an active and chronic dystrophic process.
Immunohistochemistry revealed a mosaicism, with some fibres showing continuous membrane immunostaining, other fibres uniformly unstained, and some fibres with discontinuous or partial dystrophin staining. Molecular genetics identified a chromosome X-to-17 translocation (breakpoint at Xp21; for details see below). From age 12 on the patient developed a kyphoscoliosis with rapid progression. At age 14, when the lowermost right rib was touching the pelvis, a spondylodesis of the spinal column was performed. The stabilization of the spinal column was not complete and did not enable the patient to sit in an upright position. Since the patient complained of ischialgia and pain at the right sciatic bone, an MRI was performed, which excluded compression of the sciatic nerve. From age 15 on the patient used an electric wheelchair and, aged 16, assisted ventilation at night. Glucocorticoid therapy was discussed but refused by the mother. From age 20 on the patient used an orthosis for the right foot at night because of a developing talipes. Difficulties with drinking commenced, and arthrosis and limited function of the right hand appeared. Multimodal pain therapy was performed because of back pain and ischialgia. The seat of her wheelchair needed repeated adjustments. Shortly prior to medication, hospitalisation was required because of a bronchopulmonary infection with retention of mucus. Prophylactic administration of an ACE inhibitor, e.g. ramipril 1.25 mg/d, was discussed but not executed.

Extended molecular genetics

Cosmids containing exons 3, 5-7, 44, 45, 46-47 and 48 of the dystrophin gene hybridized to the translocated chromosome 17 and to the normal X-chromosome. Exon 1 hybridized to both translocated chromosomes and to the normal X-chromosome. Therefore, we assume that the translocation occurred somewhere in exon 1 of the dystrophin gene. Since an autosomal translocation usually stops the translocated X-chromosome from being inactivated, the normal X-chromosome was most likely inactivated. An inactivation test was performed on leucocyte DNA for the androgen receptor locus: allele 1, 0-5 % methylation; allele 2, 95-100 % methylation. The skewed inactivation supports the inactivation of the normal X-chromosome, which explains the presence of DMD in the female. The mother of the patient displayed normal chromosomes. The father was unavailable for testing, but we assume he would have been symptomatic if the translocation had been present. The translocation was concluded to be a de novo mutation.

Details on the clinical assessment of the female DMD patient before treatment

The female DMD patient presented with a very low body weight of 35 kg, a height of 169 cm, and foot swelling. She used a straw for drinking water because of swallowing difficulties; the daily maximum intake was approximately one litre. The serum potassium was in the normal range. Assisted ventilation at night: Legendair, UM NS, aPVCV, IPAP 14, EPAP 0, frequency 12/min, I/E 1/1.7, trigger 2, ramp 2, Vtmin 250 ml. The cardiac MRI results were: no indication of cardiac involvement; systolic function of the left ventricle at the lower normal limit; myocardial oedema excluded; no late enhancement after application of gadolinium as an indicator of a preceding myocardial infarction. Body plethysmography: VC 0.98 l (25 %), FEV1 0.96 l (28 %), Tiffeneau index 97 %, TLC 4.2 l (78 %).
Characterization of interferences to in situ observations of δ13CH4 and C2H6 when using a cavity ring-down spectrometer at industrial sites

Due to the increased demand for an understanding of CH4 emissions from industrial sites, the subject of cross sensitivities caused by absorption from multiple gases on δ13CH4 and C2H6 measured in the near-infrared spectral domain using CRDS has become increasingly important. Extensive laboratory tests are presented here which characterize these cross sensitivities and propose corrections for the biases they induce. We found methane isotopic measurements to be subject to interference from elevated C2H6 concentrations, resulting in heavier δ13CH4 by +23.5 ‰ per ppm C2H6 / ppm CH4. Measured C2H6 is subject to absorption interference from a number of other trace gases, predominantly H2O (with an average linear sensitivity of 0.9 ppm C2H6 per % H2O in ambient conditions). Yet this sensitivity was found to be discontinuous, with a strong hysteresis effect, and we suggest removing H2O from gas samples prior to analysis. The C2H6 calibration factor was calculated using a GC and measured as 0.5 (confirmed up to 5 ppm C2H6). Field tests at a natural gas compressor station demonstrated that the presence of C2H6 in gas emissions at an average level of 0.3 ppm shifted the isotopic signature by 2.5 ‰, whilst after calibration we find that the average C2H6 : CH4 ratio shifts by +0.06. These results indicate that, when using such a CRDS instrument in conditions of elevated C2H6 for CH4 source determination, it is imperative to account for the biases discussed within this study.

Introduction

With increasing efforts to mitigate anthropogenic greenhouse gas emissions, opportunities to reduce leaks of fossil-fuel-derived methane (ffCH4) are of particular importance, as such leaks currently account for approximately 30 % of all anthropogenic methane emissions (Kirschke et al., 2013). At present, technically feasible mitigation methods hold the potential to halve future global anthropogenic CH4 emissions by 2030. Of this mitigation potential, more than 60 % can be realized in the fossil fuel industry (Hoglund-Isaksson, 2012). However, for effective implementation, the sources, locations and magnitudes of emissions must be well known. The global increase in the production and utilization of natural gas, of which methane is the primary component, has raised questions with regard to its associated fugitive emissions, i.e. leaks.
Recent estimates of CH4 leaks vary widely (1-10 % of global production; Allen, 2014), and US inventories of natural gas CH4 emissions have uncertainties of up to 30 % (US EPA, 2016). In addressing this issue, the ability to distinguish between biogenic and different anthropogenic sources is of vital importance. For this reason methane isotopes (δ13CH4) are commonly used to better understand global and local emissions, as demonstrated in a number of studies (Lamb et al., 1995; Lowry et al., 2001; Hiller et al., 2014). The discrimination of sources with relatively close isotopic compositions, such as oil-associated gas and natural gas, which can have isotopic signatures separated by only ∼ 4 ‰ (Stevens and Engelkemeir, 1988), requires precise and reliable δ13CH4 measurements. Ethane (C2H6) is a secondary component of natural gas and can be used as a marker to distinguish between different CH4 sources. Use of the C2H6 : CH4 ratio provides a robust identifier for the gas of interest. Recent findings in the US found coal bed C2H6 : CH4 ratios ranging between 0 and 0.045, while dry and wet gas sources displayed differing ratios of < 0.06 and > 0.06 respectively (Yacovitch et al., 2014; Roscioli et al., 2015).

Laser spectrometers, especially those based on cavity ring-down spectroscopy (CRDS), are now commonly deployed for site-scale CH4 measurement campaigns (Yvon-Lewis et al., 2011; Phillips et al., 2013; Subramanian et al., 2015). However, with the advent of such novel technologies there is a risk of unknown interference from laser absorption, which can create biases in measurements. Some examples of this are discussed in Rella et al. (2015) and many others (e.g. Malowany et al., 2015; Vogel et al., 2013; Nara et al., 2012). Using a CRDS instrument we show that the presence of C2H6 causes significant interference to the measured 13CH4 spectral lines, resulting in shifted reported δ13CH4 values. We propose a method to correct these interferences and test it on measurements of natural gas samples performed at an industrial natural gas site.

The CRDS instruments used throughout this study are Picarro G2201-i analysers (Picarro Inc., Santa Clara, USA), which measure gases including CH4, CO2, H2O and, although not intended for use by standard users, C2H6. This model measures in three spectral ranges: lasers measuring spectral lines at roughly 6057, 6251 and 6029 cm−1 are used to quantify mole fractions of 12CH4, of 12CO2 and 13CO2, and of 13CH4, H2O and C2H6 respectively. The spectrograms are fitted with two non-linear models in order to determine concentrations; the primary fit excludes the model function of C2H6, while the second includes this function, thus adding the ability to measure C2H6 (Rella et al., 2015). Such a method for measuring C2H6 concentrations is crude; thus the uncalibrated C2H6 concentration data are stored in private archived files which until now have been used primarily for the detection of sample contamination. The measurements of δ13CH4 and δ13CO2 are calculated using the ratios of the concentrations of 12CH4 and 13CH4, and of 12CO2 and 13CO2, respectively.

An experimental procedure is presented here which corrects the interference caused by C2H6 on the retrieval of δ13CH4 using such a CRDS instrument, for application to in situ or continuous measurements of δ13CH4 strongly contaminated by C2H6, i.e. in the vicinity of ffCH4 sources.
The step-by-step procedure of the experimental methods developed to quantify the cross sensitivities and the proposed calibration for δ13CH4 and C2H6 is depicted in Fig. 1 and presented in detail in Sect. 2. Section 3 encompasses a discussion of the results, including an analysis of the instrumental responses for two spectrometers with an evaluation of the stability and repeatability of the suggested corrections. Finally, field measurements were performed at a natural gas compressor station where the aim was to identify emissions between two natural gas pipelines. In Sect. 5 the importance of the corrections for field measurements is demonstrated by applying our methods to data retrieved during this period, while also revealing the instruments' potential to measure C2H6.

Methods

The purpose of the laboratory tests was to characterize the instruments' response to concentration changes in gases found at fossil fuel sites (e.g. gas extraction or compressor stations), specifically the cross sensitivities of CO2, CH4 and H2O on C2H6 and of C2H6 on δ13CH4. Presumably there are additional gases with the potential for interference; this study focuses on those reported to have a significant effect on C2H6 and δ13CH4 measurements by Rella et al. (2015). We also define and describe a new procedure to calibrate both C2H6 and δ13CH4. In the following section the general set-up used for the majority of experiments is described, after which we enter a more detailed description of the processes involved in each step.

2.1 Experimental set-up

Method. Each cross sensitivity is measured by creating a gas dilution series designed to control the concentration of the gas responsible for the interference in steps, while keeping the concentrations of the other gas components constant (in particular the component subject to interference). The instrument response was evaluated for a large range of concentrations and different combinations of gas components. An example of such a measurement time series can be seen in Fig. S1 in the Supplement. The experimental set-up includes two CRDS instruments (Picarro G2201-i) running in parallel in a laboratory at ambient conditions (25 °C, 100 m above sea level; a.s.l.). The instruments were used in iCO2-iCH4 auto switching mode, of which we consider only the "high precision" mode of δ13CH4 throughout the study. For the dilution series, a working gas is diluted in steps using a set-up of two mass flow controllers (MFC; El-flow, Bronkhorst, Ruurlo, the Netherlands), as shown in Fig. 2. A T-junction splits the gas flow to both instruments; the total flow is greater than the flow drawn into the instruments. Hence, to maintain an inlet pressure close to ambient, the set-up includes an open split to vent additional gas. In order to assess variability and error, each experiment is repeated a minimum of three times consecutively. To detect instrumental drift between experiments, a target gas is measured before commencing each dilution sequence. An overview of each targeted cross interference, with information on the gases used and the ranges spanned in laboratory tests, can be found in Table 1.

Gases. Throughout the experiments, four categories of gas were used: a zero air gas with measured residual concentrations of the relevant trace gases close to zero, ambient air cylinders, multi-component working gases, and a C2H6 standard of 52 ppm in nitrogen.

To alter the water vapour content of a sample, the experimental set-up described in Fig. 2 was modified by incorporating a humidifier.
The humidifier consists of a liquid flow controller (Liquiflow, Bronkhorst, Ruurlo, the Netherlands) and a mass flow controller (El-flow, Bronkhorst, Ruurlo, the Netherlands) fed into a controlled evaporator mixer (CEM; Bronkhorst, Ruurlo, the Netherlands). The tube departing the CEM carries a gas flow of 2 L min−1 and is heated to 40 °C to prevent any condensation. A short description and diagram of the humidifying bench can be found in Laurent et al. (2015).

The H2O interference on C2H6 was measured by using the humidifier to vary the H2O content of zero air gas in the range of 0.25-2.5 % H2O, representing the range of real-world conditions. The humidifier set-up cannot reliably reach humidities below 0.2 % H2O, a range frequently reached when measuring gas cylinders or dried air. This low range was attained using a H2O scrubber (magnesium perchlorate, Fisher Scientific, Loughborough, UK) connected to the CRDS instrument inlet while measuring ambient air. As the efficiency of the scrubber decreases over time, a slow increase of H2O spanning low concentrations in the range of 0-0.5 % can be observed.

The CH4 interference on C2H6 was measured by creating a dilution series of variable CH4 content using zero air and a working gas of 6 ppm CH4, 360 ppm CO2, 310 ppb N2O and 50 ppb CO in natural air. Methane concentrations ranged from 0 to 6 ppm. To keep other causes of interference at a minimum, the gas mixture passed through two scrubbers: the first a CO2 scrubber (Ascarite II, Acros Organics, USA) and the second a H2O scrubber (magnesium perchlorate, Fisher Scientific, Loughborough, UK). As an independent check on the linearity of the response functions, each dilution sequence was repeated at two humidities (0 % H2O and 1 % H2O) and four C2H6 concentrations (between 0 and 1.5 ppm).

The CO2 interference on C2H6 was measured with a dilution series spanning 0-1500 ppm CO2, created by mixing zero air and a working gas of 2000 ppm CO2, 1.7 ppm CH4 and 50 ppb CO in natural air. Any interference due to CH4 was accounted for during data processing. This test was repeated at four water vapour levels (0, 0.5, 1 and 1.5 %) and five C2H6 concentrations (between 0 and 2.5 ppm).

C2H6 calibration set-up

In order to correctly use the C2H6 data from CRDS instruments, the data must be calibrated to an internationally recognized scale. To achieve this, the set-up described in Sect. 2.1 was modified to include the filling of removable samples (1 L glass flasks), the concentrations of which could be independently verified, as shown in Fig. 2. A gas mixture of the C2H6 standard and an ambient air cylinder was created via two MFCs before passing through the flask on its way to the instruments' inlets. Each step in the dilution series requires an individual flask, which was flushed for 20 min and then analysed for 10 min, with an average precision of 0.02 ppm C2H6 on the CRDS instrument. The flask is subsequently sealed and removed for analysis on a gas chromatograph (GC; Chrompack Varian 3400, Varian Inc., USA), which uses National Physical Laboratory (NPL) standards and has an uncertainty better than 5 %. The system is described in more detail in Bonsang and Kanakidou (2001).
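The dilution sequences described above are fully determined by the two MFC flows: the mole fraction of each species at the instrument inlet is the flow-weighted mean of the working gas and the diluent. The sketch below computes one such sequence; the flow values and function name are our own illustrative choices, not settings taken from the study.

```python
# Minimal sketch: mole fractions produced by blending a working gas with
# zero air using two mass flow controllers (MFCs). Flow values are
# illustrative only.
def diluted_fraction(c_working, f_working, f_zero):
    """Mole fraction after mixing (same units as c_working)."""
    return c_working * f_working / (f_working + f_zero)

c_ch4_wg = 6.0     # ppm CH4 in the working gas
total_flow = 2.0   # L/min kept constant at the open split
for f_wg in [0.0, 0.4, 0.8, 1.2, 1.6, 2.0]:   # L/min of working gas
    c = diluted_fraction(c_ch4_wg, f_wg, total_flow - f_wg)
    print(f"working-gas flow {f_wg:.1f} L/min -> {c:.2f} ppm CH4")
```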
In total 17 flasks were filled with gas mixtures spanning from 0 to 5 ppm C2H6, covering the range expected near a leak of ffCH4 (Gilman et al., 2013; Jackson et al., 2014). In order to test the linearity of the response at the very high concentrations which may be expected from pure natural gas samples, we conducted a measurement at 100 % of the C2H6 standard (52 ± 1 ppm).

Determining the correction for δ13CH4

Measured δ13CH4 is altered in the presence of C2H6. To understand the magnitude of this effect, experiments were conducted using the method described in Sect. 2.1. The dilution series uses the C2H6 standard and a cylinder filled with ambient air, i.e. with a negligible C2H6 mixing ratio (< 1 ppb), to create concentrations spanning from 0 to 4 ppm C2H6. As there is only one source of CH4 in the experiment, the addition of C2H6 should not affect the value of δ13CH4; hence any change seen is an apparent shift of δ13CH4 due to C2H6 interference. This concentration range was chosen as it encompasses a C2H6 : CH4 ratio of 0 to 1, well within the likely range to be measured from fossil fuel sources (Yacovitch et al., 2014).

Calibration of δ13CH4

The reported δ13CH4 was calibrated to the Royal Holloway University of London (RHUL) scale using four calibration gases spanning −25 to −65 ‰ that were created by different dilutions of pure CH4 and CO2 with ambient air. The aliquots were measured multiple times by isotope ratio mass spectrometry (IRMS) at RHUL. The precision for δ13CH4 obtainable with this IRMS is reported as 0.05 ‰; detailed information on the measurement system can be found in Fisher et al. (2006). The calibration factor is determined from a linear regression, and calibrations were performed once a day for 3 consecutive days before and after the laboratory experiments. A target gas was measured regularly to track any drift in δ13CH4, as an independent check on the calibration quality.

Results and discussion

This study focuses on determining a reliable correction and calibration scheme for a Picarro G2201-i when measuring methane sources with C2H6 interference. Findings from the experiments described in Sect. 2 are discussed in detail here. In order to calibrate δ13CH4 and C2H6 values, there is a series of corrections that must take place beforehand (see Fig. 1). The initial correction to be applied is on C2H6, owing to interference from CH4, CO2 and H2O. Particular emphasis is placed on this correction due to the discovery of significant non-linear behaviour in the presence of H2O, CH4 and CO2 in the sample gas. Once the C2H6 has been corrected, the calibration of C2H6 using independent GC measurements, the C2H6 interference correction on δ13CH4 and finally the calibration of δ13CH4 can be carried out.

For our results to be applicable to future studies, we examine the inter-instrument variability and stability over time, compare our results to the current literature and discuss the uncertainties attributed to our results. Throughout this study we refer to raw, uncorrected C2H6 and δ13CH4 values as "reported" to highlight that they may be influenced by interferences. Within this section negative C2H6 concentrations are often mentioned; we note that these are the C2H6 concentrations "reported" by the instrument. Unless otherwise stated, the standard deviation reported is calculated from 1 min averages and depicted as error bars within figures.
Correcting reported C2H6

H2O interference on C2H6. H2O content was found to be the dominating source of interference to reported C2H6; its presence decreases the reported concentration of C2H6 with increasing H2O concentration. Furthermore, the response function exhibits a hysteresis effect which, although small, can be non-negligible when changing from dry to undried air samples (e.g. between dry calibration gas and undried ambient air). There are two distinct instrumental responses, depending on whether dried or undried ambient air was measured during the night preceding the experiment; these are depicted in Fig. 3 by dark and light blue markers respectively. When the CRDS instrument measures dry air prior to the experiment, a discontinuity is observed at 0.16 % H2O. Figure 4 shows this effect in more detail; below 0.16 % H2O the response function exhibits a stable linear response. The correction within this low range was found to be the same for both instruments, 0.44 ± 0.03 ppm C2H6 / % H2O. After passing the 0.16 % H2O threshold, the response exhibits a discontinuity with a magnitude and subsequent slope that are also dependent on the air moisture beforehand. This is seen in Fig. 4, whereby the discontinuities of two repetitions (A and B, depicted by dark and light blue markers respectively) differ in magnitude by 0.1 ppm reported C2H6. The discontinuity occurs when the instrument passes the 0.16 % H2O threshold, both when moving from dry to wet air and vice versa (see Fig. S2). If measuring undried air before the experiment, the interference due to H2O can be described well by a linear response (light blue markers in Fig. 3) and potentially causes large biases from the true C2H6. For example, if measuring at 1 % H2O, both instruments display a change in reported C2H6 of approximately −0.9 ppm. The response functions calculated for instruments CFIDS 2072 and 2067 differed, showing −0.72 ± 0.03 ppm C2H6 / % H2O and −1.00 ± 0.01 ppm C2H6 / % H2O, with R2 values of 0.98 and 0.99 respectively. The hysteresis effect is evident when measuring with undried air; the slope was seen to shift after each repetition, in total by 0.1 ppm C2H6 / % H2O.

CO2 interference on C2H6. For both instruments an increase in the CO2 concentration results in lower reported values of C2H6, and it is furthermore apparent that the magnitude of this interference is dependent on air humidity. For a dry sample gas (H2O < 0.16 %, demonstrated in the left-hand column of Fig. 5), the interference for both instruments is found to be highly stable and well characterized by a linear slope of 1 × 10−4 ± 1 × 10−5 ppm C2H6 / ppm CO2, with an R2 value of 0.9. There was no measurable difference in slope at any of the C2H6 concentrations tested (see Fig. S3). In contrast, for water vapour levels ≥ 0.5 % H2O (see the right-hand column of Fig. 5), measurements exhibit a higher scatter between repetitions. This is mainly attributed to a drifting intercept; however, the experiments also show a smaller R2 of 0.8. We calculate characteristic linear slopes of 3.8 × 10−4 ± 1 × 10−5 ppm C2H6 / ppm CO2 and 3.9 × 10−4 ± 1 × 10−5 ppm C2H6 / ppm CO2 at ≥ 0.5 % water vapour for instruments CFIDS 2072 and 2067 respectively. Therefore, when measuring undried ambient air, the presence of CO2 at a level near 400 ppm will induce a shift in the reported C2H6 of approximately −0.15 ppm C2H6, whereas if the air is dried the reported shift is much smaller, at approximately −0.04 ppm C2H6.
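As a reading aid, the piecewise behaviour described above can be written as a small correction routine. This is a sketch under our own assumptions: we take the reported C2H6 to be biased low by each interferent (so the correction adds the magnitudes back), use the dried-air slopes and the undried-air slopes quoted for CFIDS 2072, and ignore the hysteresis around the 0.16 % H2O discontinuity, which the study itself recommends avoiding by drying samples.

```python
# Minimal sketch of the piecewise H2O/CO2 correction to reported C2H6.
# Coefficient magnitudes follow the values quoted in the text for
# CFIDS 2072; the sign convention (adding back a low bias) and the
# neglect of the hysteresis around 0.16 % H2O are our assumptions.
H2O_THRESHOLD = 0.16  # % H2O separating the two linear regimes

def correct_c2h6(c2h6_reported, h2o_pct, co2_ppm):
    if h2o_pct < H2O_THRESHOLD:          # dried-sample regime
        a_h2o, a_co2 = 0.44, 1.0e-4      # ppm C2H6 per % H2O / per ppm CO2
    else:                                # undried regime (CFIDS 2072)
        a_h2o, a_co2 = 0.72, 3.8e-4
    return c2h6_reported + a_h2o * h2o_pct + a_co2 * co2_ppm

# Example: dried ambient air (0.05 % H2O, 400 ppm CO2)
print(correct_c2h6(0.10, 0.05, 400.0))   # adds roughly +0.06 ppm
```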
CH4 interference on C2H6. The CH4 effect on C2H6, as shown in Fig. 6, is less prominent by at least an order of magnitude than both the H2O and CO2 interferences. At dried ambient CH4 concentrations a typical change in reported C2H6 of approximately −0.008 ppm is observed for both instruments. Dried-air experiments show a high scatter of points between repetitions, and R2 values of 0.4 and 0.6 are calculated for instruments CFIDS 2072 and 2067 respectively. Despite its large uncertainty, the data suggest that both instruments display a similar response with a statistically significant slope within the range of C2H6 concentrations tested (see Fig. S3). In light of this we use a weighted mean to calculate a linear response of 9 × 10−3 ± 2 × 10−3 ppm C2H6 / ppm CH4 for dry air measurements for CFIDS 2067, and ± 5 × 10−3 ppm C2H6 / ppm CH4 for CFIDS 2072. The results obtained at 1 % H2O show little correlation (as shown in the right-hand column of Fig. 6), with both instruments displaying an R2 value of 0.2. An ANOVA test suggests the slopes are not significantly different from zero; thus we omit a CH4 correction for this case.

Combining the CO2, CH4 and H2O corrections on C2H6. To fully take into account all (known) C2H6 cross-sensitivities, the corrections to reported C2H6 need to be combined. Due to the non-linearity of the discontinuity in reported C2H6 at 0.16 % H2O and of its subsequent slope, we choose to report correction coefficients for the two linear regimes found, i.e. for continuous measurements with sample humidities below 0.16 % and with sample humidities above 0.16 %. Within each range the proposed correction formula is given as

C2H6,COR = C2H6,REP + A · [H2O] + B · [CH4] + C · [CO2], (1)

where [H2O] is the water vapour content in % and [CH4] and [CO2] are in ppm. If the humidity is limited to less than 0.16 % before and during measurements, the dried-air coefficients reported above apply. Both instruments demonstrated good agreement for all the correction factors calculated at < 0.16 % H2O. Corrections for measurements undertaken at humidities higher than or equal to 0.16 % H2O are instrument dependent, with C = … ± 2 × 10−5 ppm C2H6 / ppm CO2 for CFIDS 2067.

C2H6 calibration

To make use of the corrected C2H6, it should be calibrated to match an internationally recognized scale. This is achieved by measuring whole-air samples by CRDS and independently on a calibrated gas chromatograph, as discussed within Sect. 2. The calibration factor is determined by comparing the corrected C2H6 resulting from the CRDS with the C2H6 as confirmed by the GC, plotted in Fig. 7a. The relationship was found to be linear throughout the range of 0-5 ppm C2H6, with slopes of 0.505 ± 0.007 and 0.52 ± 0.01 for instruments CFIDS 2072 and 2067 respectively. The results are reported in Table 2, from which we can see that the intercept of the calibration for instrument CFIDS 2072 shifts between the experiment in February and that in October, while the slope remains constant throughout the measured time period. The change in the intercept is attributed to a C2H6 baseline drift, which we have monitored over time using regular target gas measurements; an example is given in Fig. 7b. To account for this drift and any elevated baselines (such as that of CFIDS 2067; see Table 2), a regular measurement of a working gas is necessary, from which the instrument offset can be calculated. For the full calibration, we thus suggest using Eq.
(2):

C2H6,CAL = D · (C2H6,COR − WGS),

where D is the calibration factor (slope) for the instrument, i.e. D = 0.505 ± 0.007 for CFIDS 2072, and WGS is the baseline drift determined using the working gas.

Figure 8. During a dilution sequence of ambient gas with C2H6, the CH4 concentration decreases from its nominal concentration of 1948.7 ± 0.32 ppb as the contribution from C2H6 is increased. Thus both 12CH4 and 13CH4 undergo a similar decrease as the gas is diluted. However, what is observed is an increase in the reported value of 13CH4, suggesting C2H6 interference. The 12CH4 axis is plotted to the left in light green, whereas the 13CH4 axis is plotted to the right in dark green at a different scale. Error bars represent the standard deviation; the 12CH4 markers are larger than their associated error bars.

δ13CH4 correction

By measuring the shift of the reported δ13CH4 in C2H6-contaminated samples, we have observed that the instrument reports heavier values of δ13CH4 in the presence of C2H6. The shift is a result of increased reported 13CH4 in samples containing C2H6 (see Fig. 8). This is most likely caused by the overlapping of spectral lines within the 6029 cm−1 wave number region (Rella et al., 2015). We calculate the δ13CH4 correction by taking the slope of Δδ13CH4 (the difference between the reported δ13CH4 and that initially reported for the C2H6-free gas) versus the corrected C2H6 : CH4 ratio. The ratio is used to permit the calculation of the δ13CH4 response function per ppm CH4, as the magnitude of the interference is dependent on the CH4 concentration (Rella et al., 2015). The significance of the interference on δ13CH4 is illustrated in Fig. 9; as the C2H6 : CH4 ratio increases, the change in the reported δ13CH4 increases linearly. Results obtained from tests carried out throughout the year for both instruments are noted in Table 3 and plotted in Fig. 9. The correction equation can be expressed as

(δ13CH4)COR = (δ13CH4)REP − E · (C2H6,COR / CH4) + F, (3)

where E is the slope of the response function (+23.6 ± 0.4 ‰ ppm CH4 / ppm C2H6; see Table 3) and F is the intercept. These corrections contain the inherent δ13CH4 offset of the instrument. When calibrating the δ13CH4 to a known scale (as described in Sect. 2.5), any instrumental offset will be incorporated within the calibration. Therefore, the correction equation can be simplified to

(δ13CH4)COR = (δ13CH4)REP − E · (C2H6,COR / CH4). (4)

Also highlighted in Fig. 9 is the typical measurement range for the majority of ffCH4 sources related to dry and wet natural gas, relative to the calibrated C2H6 / CH4 ratios given on the upper abscissa, whereby dry gas refers to natural gas that occurs in the absence of condensate/liquid hydrocarbons (C2H6 : CH4 = 1-6 %), while wet gas typically contains higher concentrations of complex hydrocarbons (C2H6 : CH4 > 6 %; Yacovitch et al., 2014). It is clear that within this range the bias on methane isotopic signatures is significant; dry gas will alter the reported δ13CH4 by 0.8-4 ‰, while wet gas can cause a shift of up to 13 ‰ depending on its C2H6 : CH4 ratio.
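Putting Eqs. (2)-(4) together, the processing chain from corrected C2H6 to calibrated C2H6 and corrected δ13CH4 fits in a few lines. The sketch below uses the slope D = 0.505 and E = 23.5 ‰ per (ppm C2H6 / ppm CH4) quoted in this study; the exact form of Eq. (2), i.e. whether the working-gas offset is subtracted before the slope is applied, follows our reading of the text rather than a verbatim reproduction of the published equation.

```python
# Minimal sketch of the calibration/correction chain (Eqs. 2-4).
# D, E and the order of operations in Eq. (2) follow our reading of
# the text; WGS is the baseline offset from regular working-gas runs.
D = 0.505     # C2H6 calibration slope (CFIDS 2072)
E = 23.5      # per-mil shift per (ppm C2H6 / ppm CH4)

def calibrate_c2h6(c2h6_corrected, wgs_offset):
    """Eq. (2): scale the drift-corrected C2H6 to the GC scale."""
    return D * (c2h6_corrected - wgs_offset)

def correct_d13ch4(d13ch4_reported, c2h6_corrected, ch4_ppm):
    """Eq. (4): remove the C2H6-induced heavy bias from delta 13CH4."""
    return d13ch4_reported - E * (c2h6_corrected / ch4_ppm)

# Example: 0.3 ppm corrected C2H6 on a 5 ppm CH4 plume
print(correct_d13ch4(-37.8, 0.3, 5.0))   # about -39.2 per mil
```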
δ13CH4 calibration

Full instrument calibrations as described in Sect. 2.4 were performed once in 2014 and once in 2015. The δ13CH4 values obtained for the calibration gases by RHUL are measured by IRMS and are therefore not subject to these interferences. The calibration gas aliquots were measured with an average standard deviation of 0.03 ‰. To calibrate δ13CH4,COR, it was calculated for each calibration gas and used within the linear regression. The calibrations were linear with R2 > 0.99 on both occasions, and no change (within our uncertainties) was observed between the two tests. By measuring an ambient air target regularly, we later detected a shift in the δ13CH4 baseline. Two further calibrations were performed in 2016 to assess this incident, which confirmed that the offsets of the linear regressions were significantly shifted while the slopes agreed well with previous calibrations. Therefore, to account for a baseline drift, it is important to measure a target gas regularly and amend the offset of the calibration equation accordingly.

Typical instrumental performance and uncertainties

In order to characterize the repeatability of the C2H6 measured by the CRDS instrument, we have measured several targets and monitored the changes of the reported C2H6 signal over time. The raw signal is a measurement every 3 s, which displays on average a standard deviation of 90 ppb. By aggregating the data to 1 or 30 min intervals, the precision can be improved, and a standard deviation of 20 or 8 ppb is reached. Furthermore, the 1 min standard deviation at 52 ppm C2H6 is 180 ppb. Thus, assuming a linear relationship, the typical performance for 1 min averages is 20 ppb ± 0.3 % of the reading.

Of course, there are some substantial uncertainties attributed to the C2H6 correction and calibration, which need to be accounted for when discussing the uncertainty of the calibrated C2H6 concentrations. With regard to the C2H6 correction for 1 min averages, if measuring dried ambient air the propagation of uncertainties is negligible with respect to the raw instrumental precision (20 ppb). However, if using 30 min averages the uncertainty increases from 8 to 10 ppb. Elevated CH4, CO2 and H2O signals (> 5 ppm, > 1000 ppm and > 0.2 % respectively) will induce increased C2H6 uncertainty regardless of aggregation time. After calibration, this increases to about 2½ times the uncertainty of the corrected C2H6, so that at ambient air concentrations the calibrated C2H6 has an uncertainty of 30 ppb.

The repeatability of δ13CH4 for 1 min averages on our instrument is a standard deviation of 0.66 ‰. The standard deviation is reduced to 0.29 and 0.09 ‰ by aggregating the raw data over 5 and 30 min respectively. For the correction of δ13CH4 due to C2H6, error propagation of the factors applied in Eq. (4) must be taken into account. Therefore, at ambient concentrations, the uncertainty of a 1 min average will increase to 0.9 ‰.

Generalizability of corrections and calibrations

The experiments in this study were repeated multiple times and performed on two instruments to better understand how the instrument responses change over time and how they vary between instruments. The C2H6 correction and calibration and the δ13CH4 correction experiments were repeated on CFIDS 2072 over the course of a year to determine any temporal drifts.
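The quoted precision figures are consistent with white-noise averaging: a 1 min mean contains twenty 3 s readings, so the standard deviation should shrink by roughly sqrt(20). The snippet below reproduces that scaling with synthetic noise; it is a plausibility check, not a reanalysis of instrument data.

```python
# Plausibility check: 90 ppb noise on 3-s readings should average down to
# about 90/sqrt(20) ~ 20 ppb for 1-min means, matching the quoted figures.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(0.0, 90.0, size=20 * 10_000)   # ppb, 3-s readings
one_min = raw.reshape(-1, 20).mean(axis=1)      # 20 readings per minute
print(f"1-min std ~ {one_min.std():.1f} ppb (expected ~ {90/np.sqrt(20):.1f})")
```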
The coefficients of the C2H6 correction were examined over a 4-month period. The methane, carbon dioxide and water vapour coefficients for dried gas displayed no noticeable variation over this time frame. Both the CH4 and CO2 coefficients for undried gas also showed good stability throughout this period; however, the undried H2O coefficient is seen to vary significantly (± 0.1 ppm C2H6 / % H2O). As discussed previously, the H2O correction is subject to a hysteresis effect, which makes analysis of its long-term variation difficult. As we did not find a clear temporal pattern in the variations, we suggest that this coefficient is not likely to be time dependent.

The calibration of C2H6 was calculated twice within a 9-month period (see Table 2). No variation of the slope of the response function is observed within this time frame. The intercept is prone to drift in time, as discussed previously.

The δ13CH4 correction has been examined three times throughout a 6-month period (see Table 3). The variability of the slope observed over 6 months is 1 ‰ ppm CH4 / ppm C2H6. Given that the error attributed to each experiment is approximately ± 1 ‰ ppm CH4 / ppm C2H6, this variability is not statistically significant. The intercepts show good agreement, with no variation outside the expected uncertainties.

The comparison of both CRDS instruments showed good agreement for all calculated C2H6 correction coefficients, with the exception of the undried H2O coefficient at > 0.16 % H2O. For this coefficient we calculate a difference of 0.3 ppm C2H6 / % H2O between that of CFIDS 2072 and that of CFIDS 2067. The variance may be the consequence of spectrometer differences, a long-term hysteresis effect or differences in their past use (mostly dried samples on CFIDS 2072 and mostly undried samples on CFIDS 2067).

The slopes derived for the C2H6 calibration of both instruments correspond well, with no significant difference seen between the two. The intercepts differ by approximately 0.6 ppm, suggesting a distinct difference between the instruments' C2H6 baselines.

The slopes of the δ13CH4 correction were found to be in good agreement between the two instruments. Where the instruments differ is with regard to their δ13CH4 baselines, causing the observed disparity in intercept (seen in Table 3) of approximately 3 ‰.

To the best of our knowledge, at this time there is only one published study reporting on a correction due to C2H6 interference on an isotopic Picarro analyser. Rella et al. (2015) studied the interference using a Picarro G2132-i, a high-precision CH4 isotope-only CRDS analyser which uses similar analysis algorithms and spectral regions to those of the Picarro G2201-i. Rella et al. (2015) obtained C2H6 correction parameters of A = 0.658 ppm C2H6 / % H2O, B = 5.5 ± 0.1 × 10−3 ppm C2H6 / ppm CH4 and C = 1.44 ± 0.02 × 10−4 ppm C2H6 / ppm CO2. Factors B and C, for CH4 and CO2 respectively, agree well with the dried-air coefficients attained within this study. The H2O coefficient suggested by Rella et al. (2015) differs from both that of CFIDS 2072 and that of CFIDS 2067 but confirms the variability of this factor between instruments when measuring undried air samples. Lastly, Rella et al. (2015) report a correction factor for δ13CH4 of 35 ‰ ppm CH4 / ppm C2H6, which indicates a different response to C2H6 contamination of the different instrument series.
4 Source identification at a natural gas compressor station

In order to quantify the effect of C2H6 contamination in a real-world situation, we applied the corrections and calibrations discussed in this paper to measurements taken at a natural gas site, with the aim of distinguishing emissions between two natural gas pipelines. In the following section we demonstrate the effect of C2H6 interference on δ13CH4 at a fossil fuel site and discuss the alternative approach of using calibrated C2H6 : CH4 ratios to distinguish source signatures, a method which has not previously been tested on a Picarro G2201-i.

Site description

The campaign took place in summer 2014 at a natural gas compressor station located in an industrial park in northern Europe. Such stations serve the distribution of natural gas; their key purpose is to keep an ideal pressure throughout the transmission pipelines to allow continuous transport from the production and processing of natural gas to its use. The visited compressor site comprises two major pipelines with their corresponding compressors. The two pipelines carry gas of different origins to the site, where, after pressurization, the flows are combined for further transmission. The site topography is flat and open, with the surrounding area being predominantly farmland in close proximity to a major road. ffCH4 emissions were expected to emanate from various sources on site, such as the compressors, methane slip from turbines and fugitive emissions due to the high pressure of the gas (Roscioli et al., 2015). Other possible methane sources in the nearby region were identified as traffic and agriculture, including a livestock holding situated less than 500 m south-west of the site.

Continuous measurements of CH4, δ13CH4 and C2H6

Two instruments were utilized for continuous measurements throughout the 2-week field campaign: a CRDS instrument (CFIDS 2072, characterized in detail in previous sections) and an automatic gas chromatograph with a flame ionization detector (GC-FID; Chromatotec, Saint-Antoine, France) measuring VOCs (the light fraction of C2-C6 hydrocarbons), described in detail in Gros et al. (2011). They were located at a distance of approximately 200-400 m from the pipelines and compressors.

The air measured by the CRDS instrument was dried consistently to < 0.16 % H2O using a Nafion dryer (Perma Pure LLC, Lakewood, USA). The δ13CH4 was calibrated using the method described previously in Sect. 2. Every two days, 20 min measurements of two calibration gases were made to calibrate the CH4 and CO2 data and to track any drift in the isotopes. A C2H6-free working gas was measured every 12 h and used simultaneously as a target gas for the calibration of CH4 and CO2 and to track any drift in the C2H6 baseline for the calibration of C2H6.
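The 12-hourly working-gas runs described above lend themselves to a simple drift correction: interpolate the offsets measured on the C2H6-free gas onto the ambient time series and subtract them. A minimal sketch follows, assuming the drift is smooth between runs; the arrays and timestamps are invented for illustration.

```python
# Minimal sketch: removing C2H6 baseline drift by interpolating the
# offsets seen on a C2H6-free working gas (measured every 12 h).
# Timestamps and values are invented for illustration.
import numpy as np

wg_times = np.array([0.0, 12.0, 24.0, 36.0])      # hours
wg_offsets = np.array([0.61, 0.63, 0.60, 0.64])   # ppm reported on C2H6-free gas

amb_times = np.linspace(0.0, 36.0, 10)            # ambient sample times (h)
amb_c2h6 = np.full_like(amb_times, 0.75)          # reported ambient C2H6 (ppm)

baseline = np.interp(amb_times, wg_times, wg_offsets)
drift_corrected = amb_c2h6 - baseline
print(np.round(drift_corrected, 3))
```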
The GC-FID was calibrated at the beginning and end of the campaign using a certified standard gas mixture (NPL, National Physical Laboratory, Teddington, UK). The sampling time is a 10 min average every half hour; 10 min of ambient air is measured, after which the following 20 min are used to analyse the input.

4.1.3 Grab sample measurements of CH4, δ13CH4 and C2H6 in pure natural gas samples

Grab samples of pure natural gas were taken from both pipelines, with the aim of characterizing the two differing gas supplies. The 0.8 L stainless steel flasks were evacuated prior to sampling to a pressure of the order of 10−6 mbar, after which they were filled to ambient pressure when sampling. The flasks were measured independently in the laboratory with a manual GC (described in Sect. 2.4) and, after dilution with zero air, by the CRDS instrument.

4.2 Impact of C2H6 on δ13CH4 observations at the field site

To quantify the effect of C2H6 interference on δ13CH4, a total of 16 events were selected from the 2-week field campaign, with the criteria defined as a peak exhibiting both increasing CH4 concentrations and a change in δ13CH4 signature for a minimum of 1 h. Two such events are plotted in Fig. 10. Event 1 represents the majority of events measured during the field campaign, in which CH4 and C2H6 are well correlated. This particular event has maximum concentrations of 11 ppm CH4 and 0.6 ppm C2H6. On average, the selected events have peak concentrations of 5 ppm CH4 and 0.3 ppm C2H6. The methane isotopic signature was characterized using the Miller-Tans method (Miller and Tans, 2003), in which δ13CH4 · CH4 values are plotted against CH4 to calculate the isotopic signature of the methane source in situations where the background is not constant. In order to avoid bias stemming from using ordinary least squares (OLS) regression, the York least squares fitting method was implemented, thus taking into account both the X and Y errors (York, 1968). All events excluding one were found to have δ13CH4 signatures characteristic of natural gas, corresponding on average to −40 ‰. A single event (Event 2, plotted in Fig. 10) was detected with a δ13CH4 signature of −59 ± 1.5 ‰. Such a signature suggests a biogenic source and, given the south-westerly wind direction throughout the event (the direction in which the livestock holding is located), the source is likely to originate from livestock, either as ruminant or manure emissions.

If the data are left uncorrected, sources containing C2H6 substantially bias the calculated isotopic signature of CH4 events. This is demonstrated in Fig. 10c where, for Event 1, the slope of the points after C2H6 correction (in blue) is shifted in comparison to the slope derived from the points left uncorrected (in red), signifying a modification of the δ13CH4 signature. The corrected δ13CH4 suggests a signature of −40.0 ± 0.1 ‰, while the uncorrected values imply −37.8 ± 0.08 ‰. When no C2H6 is present, i.e. in Event 2, there is no disparity between the raw and corrected δ13CH4 slopes, resulting in a δ13CH4 signature of −59 ± 1 ‰ for both methods. For the 15 natural-gas-related events, the average shift induced by using uncorrected data is 2 ‰. Consequently, the bias in isotopic signatures due to C2H6 means that uncorrected data will always overestimate (i.e. report too heavy) the source signature when a simple two-end-member mixing model is applied.
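The Miller-Tans construction used above reduces to a linear fit: plotting the product δ13CH4 × CH4 against CH4 gives a line whose slope is the source signature, independent of a varying background. The sketch below uses an ordinary least-squares fit for brevity where the study rightly uses York regression with errors on both axes; the synthetic data mimic a −40 ‰ source on a −47.5 ‰, 2 ppm background.

```python
# Minimal sketch of a Miller-Tans plot: the slope of (delta * CH4) vs CH4
# is the source signature. OLS is used here for brevity; the study uses
# York regression to account for errors on both axes. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
c_bg, d_bg, d_src = 2.0, -47.5, -40.0          # ppm, per mil, per mil
c_src = np.linspace(0.5, 9.0, 30)              # source enhancement (ppm)
c_obs = c_bg + c_src
d_obs = (d_bg * c_bg + d_src * c_src) / c_obs + rng.normal(0, 0.1, c_src.size)

slope, intercept = np.polyfit(c_obs, d_obs * c_obs, 1)
print(f"recovered source signature: {slope:.1f} per mil")   # ~ -40.0
```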
Continuous field measurements of ethane

As an independent verification of the CRDS performance, we compared two time series of C2H6 which were measured simultaneously by the CRDS and the GC-FID during the natural gas field campaign using a co-located air inlet. The CRDS data were averaged to time stamps identical to those of the GC-FID, i.e. a 10 min average every 30 min, from which we calculated a root mean square error (RMSE) of 13 ppb. Given that the precision of C2H6 measured by the CRDS instrument is 10 ppb for 10 min averages and that the uncertainty of the GC-FID is 15 %, we conclude that this is very good agreement.

Furthermore, the flask samples taken on 4 July 2014 were measured by the CRDS to have C2H6 : CH4 ratios of 0.074 ± 0.001 ppm C2H6 / ppm CH4 and 0.046 ± 0.003 ppm C2H6 / ppm CH4 for the gas within Pipeline 1 and Pipeline 2 respectively. On the same day, gas quality data from the on-site GC recorded C2H6 : CH4 ratios of 0.075 ppm C2H6 / ppm CH4 and 0.048 ppm C2H6 / ppm CH4 respectively. Although the error associated with the latter figures is unknown, the strong agreement between the two verifies our correction and calibration strategy for C2H6.

Use of continuous observations of C2H6 : CH4 by CRDS

The instruments' capability to now measure interference-corrected and calibrated C2H6 opens the door to using another proxy for source apportionment, namely the C2H6 : CH4 ratio (Yacovitch et al., 2014; Roscioli et al., 2015; Smith et al., 2015). The C2H6 : CH4 ratio that characterizes each source is determined by the slope of the C2H6 to CH4 relationship. This method was applied to the 16 events identified within the natural gas field campaign, again using the York linear regression method, taking into account both the X and Y errors. Two examples of this method are displayed in the bottom panels of Fig. 10. If the data are left uncorrected and uncalibrated, the average C2H6 : CH4 ratio calculated is significantly shifted, by approximately +0.06. The average raw C2H6 : CH4 ratio for the 15 natural gas events is 0.132 ± 0.007 ppm C2H6 / ppm CH4, while the C2H6 : CH4 ratio calculated for the biogenic event is negative and thus impossible.

Combined method for CH4 source apportionment

To distinguish which pipeline the emissions originate from, we compare both the δ13CH4 signature and the C2H6 : CH4 ratio source apportionment methods. The two pipelines were characterized from the whole-air samples taken on 4 July 2014; although the gas within the pipelines is subject to change as the incoming gas varies, we assume here that this did not occur throughout the short duration of the campaign (24 June to 4 July 2014). The data collected from the aforementioned 16 events are compiled within Fig. 11, which illustrates the distribution of δ13CH4 signatures vs. C2H6 : CH4 ratios. The results from the flask measurements, i.e. the characteristics of Pipelines 1 and 2, are plotted as dashed purple and red lines. Both methods clearly identify the biogenic source, seen as an outlier in the bottom-left corner of the plot. Furthermore, both methods are able to distinguish between the two pipelines. The isotopic signatures of the natural gas events (on average −40.2 ± 0.5 ‰) are clustered near the isotopic signature of Pipeline 1, which has a δ13CH4 signature of −40.7 ± 0.2 ‰, thus suggesting that the majority of the measured methane is emitted from this pipeline. When considering the C2H6 : CH4 ratio, a similar conclusion may be drawn, as the mean C2H6 : CH4 ratio is 0.069 ± 0.002 ppm C2H6 / ppm CH4, much like that of Pipeline 1 at 0.074 ± 0.003. A future study will address the shift of the measured events to the left of Pipeline 1 in Fig. 11 by using additional VOC data from the GC-FID to aid source identification. The uncorrected 16 events are also plotted in Fig. 11 as circular markers.
These are found in the top right-hand corner of Fig. 11 and do not correspond well with either of the pipelines, thus reconfirming the importance of the corrections.

Concluding remarks

This study focuses on measurements of C2H6-contaminated methane sources by a CRDS instrument (Picarro G2201-i), with emphasis on correcting δ13CH4 and (although not intended for use by standard users) C2H6 for cross-interferences before calibration. Our extensive laboratory tests suggest that CRDS instruments of this model are all subject to similar interferences (as expected, since they scan the same spectral lines) and that these interferences can have a significant impact on reported concentrations and isotopic signatures if not accounted for properly when measuring industrial natural gas sources. For now, we suggest using constant, instrument-specific correction factors if possible, or the ones found in this study (summarized in Fig. 12). As our study period only encompasses 1 year, it is clear that the stability of the corrections over the full lifetime of an instrument needs to be monitored further. To fully exploit the reported C2H6 data, we suggest drying gas samples to < 0.16 % H2O, calibrating the instrument, and taking frequent measurements of a working gas (or set of working gases) to monitor and correct for the instrumental baseline drift.

The results of our field campaign demonstrate the extent of the interferences of C2H6 on δ13CH4 in a real-world application and also support the validity of our C2H6 correction and calibration through the comparison with an independently calibrated GC-FID. In our case, when measuring wet gas emissions we detected an average shift in isotopic signature of 2.5 ‰ due to C2H6 interference; however, the extent of this bias will vary according to the contribution of C2H6, affecting each ffCH4 source to a different degree, which can cause problems for source determination. The results reported here are important for all future work with CRDS in fossil fuel regions (where sources have C2H6 : CH4 ratios between 0 and 1 ppm C2H6 / ppm CH4) to create awareness of such interferences and correct for them accordingly. Our CRDS instrument is sufficient for measurements of strongly variable C2H6 sources, where, if using calibrated 1 min C2H6 data, concentration variations above 150 ppb are required to achieve a signal-to-noise ratio of 5. Thus, for industrial natural gas sites it offers a new opportunity to use continuous C2H6 : CH4 observations as a means of source determination that is independent of δ13CH4 methods. The recently released G2210-i analyser is dedicated to C2H6 : CH4 ratio measurements and as such achieves a higher precision, making it suitable for a wider variety of ethane sources.

Finally, we successfully combined both the δ13CH4 and C2H6 : CH4 ratio source apportionment methods. At the natural gas compressor site both methods clearly distinguish biogenic sources from natural-gas-based sources. Combining these two independent methods yields a better fingerprint of the source, and spurious C2H6 or δ13CH4 values can be more easily identified. Lastly, by characterizing both the δ13CH4 signature and the C2H6 : CH4 ratio of our source, we gain insight into the formation and source region of the gas (Schoell, 1983).
Figure 1. Flow chart illustrating the steps involved in calibrating C2H6 and δ13CH4. The number in the top right-hand corner of each step corresponds to the subsection in which the methods of that step are explained in detail.

Figure 2. General set-up. The dilution and working gases are connected via two MFCs to two CRDS instruments in parallel. In red is the placement of an optional glass flask, used for the C2H6 calibration only. The flow is greater than that of the instruments' inlets; therefore an open split is included to vent additional gas and retain ambient pressure at the inlets.

Figure 3. An example of the results from a H2O interference experiment spanning the range 0-1 % H2O. The reported C2H6 is altered by the addition of water vapour when measuring zero air (< 1 ppb C2H6). Dark and light blue markers signify the response when dried and undried ambient air, respectively, has been measured overnight by the instrument prior to the experiment. Error bars signify the standard deviation of each measurement.

Figure 4. The discontinuity seen for instrument CFIDS 2072 for two repetitions, denoted by different colours. After the discontinuity at 0.16 % H2O the subsequent slope clearly differs between the two repetitions. Both instruments display a discontinuity at 0.16 % H2O. Each point represents a 1 min average; the error bars represent the standard deviation of the raw data.

Figure 5. Relationship between reported C2H6 and concentration changes of CO2 for instruments CFIDS 2072 and 2067 at varying levels of H2O, at 0 ppm C2H6 (within our instrumental precision). For each plot the bottom axis indicates the concentration of the targeted gas (CO2). Plots (a) and (b) are at 0 % H2O; (c) and (d) are experiments at varying humidities, distinguishable by colour. The legend denotes repetitions of the experiment. The error bars in each plot denote the standard deviation of each measurement. The R2 values are 0.9 for the experiments at 0 % H2O and 0.8 for all other H2O experiments, for both instruments.

Figure 6. Relationship between reported C2H6 and concentration changes of CH4 for both instruments at 0 ppm C2H6 (within our instrumental precision). For each plot, the bottom axis indicates the increase in concentration of the targeted gas. The vertical bars in each plot denote the standard deviation of each point. The legend denotes repetitions of the experiment. Plots (a) and (b) are at 0 % H2O; the R2 values are 0.4 and 0.6 for instruments CFIDS 2072 and 2067. Plots (c) and (d) show the response at 1 % H2O; these two plots have an R2 value of 0.2.

Figure 7. (a) Ethane calibration calculated from measurements of flask samples by both the GC and the CRDS. The x axis is the corrected C2H6 (C2H6,COR) using the corrections described previously. The y axis is the C2H6 as measured by a manual GC. The error bars indicate the standard deviation of each flask measurement; for certain flasks the error bars are smaller than their respective markers. (b) 30 min target measurements over a period of 4 days, from 13 to 16 November 2015. The standard error of each target is smaller than the plotted marker. The baseline C2H6 is seen to drift with time.

Figure 9. The effect of C2H6 on reported δ13CH4. The slopes of reported δ13CH4 vs. the corrected C2H6 : CH4 ratio are shown for three tests taken throughout the course of 1 year.
Triangular markers imply whole-air sample measurements, while square markers are derived from direct measurements. Error bars indicate the standard deviation. In the presence of C2H6 the instrument reports heavier values of δ13CH4. The typical ranges of (calibrated) C2H6 : CH4 for dry and wet gas are highlighted in pink and green respectively, corresponding to the top axis.

Figure 10. Ethane and methane content of two selected peaks. The methane and ethane 1 min averaged time series are shown in (a) and (b) for Event 1 and in (e) and (f) for Event 2. Miller-Tans plots of the corresponding peaks are shown in (c) and (g), with blue for the δ13CH4 corrected for C2H6 and red representing the uncorrected δ13CH4. Event 1 includes elevated C2H6 emissions and thus displays a difference between the slopes before and after C2H6 correction, corresponding to a shift in isotopic signature. Event 2, with no C2H6, shows no alteration in slope. The slopes of C2H6 vs. CH4 are shown in (d) and (h), signifying the C2H6 : CH4 ratio of the emission. The errors of both the isotopic and C2H6 : CH4 signatures are calculated from the standard error of the slope.

Figure 11. Distribution of the 16 events according to their C2H6 : CH4 ratios and isotopic signatures. The red and purple dashed lines signify the characterizations of Pipelines 1 and 2 respectively, as measured by the CRDS instrument from flask samples taken on 4 July 2014. For corrected and calibrated data (square markers), both the isotopic signature and the C2H6 : CH4 ratio identify the biogenic source (bottom-left point) and suggest the natural gas emissions emanate from Pipeline 1. Circular markers represent the uncorrected data, which do not agree with the flask sample measurements of Pipeline 1 or 2. The error bars indicate the standard error of the slope calculated from the Miller-Tans and C2H6 vs. CH4 plots for the δ13CH4 signature and the C2H6 : CH4 ratio respectively.

Figure 12. Flow chart illustrating the steps and the corresponding equations to calibrate C2H6 and δ13CH4 as determined from this study. The coefficients are the mean of both CRDS instruments tested. We suggest removing H2O from gas samples prior to analysis.

Table 1. Description of the gas mixtures used to determine the cross sensitivities of the interference of CH4, H2O and CO2 on C2H6 and of the interference of C2H6 on δ13CH4. The respective ranges spanned during laboratory tests and the typical ranges at a natural gas site are noted on the right-hand side.

Table 2. Summary of the C2H6 calibration factors calculated for both instruments, CFIDS 2072 and 2067.

Table 3. The various response functions calculated for the δ13CH4 correction due to C2H6. * Flask measurement.
AMDET: Attention Based Multiple Dimensions EEG Transformer for Emotion Recognition

Affective computing is an important subfield of artificial intelligence, and with the rapid development of brain-computer interface technology, emotion recognition based on EEG signals has received broad attention. It is still a great challenge to effectively explore the multi-dimensional information in EEG data in spite of a large number of deep learning methods. In this article, we propose a deep learning model called Attention-based Multiple Dimensions EEG Transformer (AMDET), which can leverage the complementarity among the spectral-spatial-temporal features of EEG data by employing a multi-dimensional global attention mechanism. We first transform the original EEG data into 3D temporal-spectral-spatial representations; AMDET then uses spectral-spatial transformer blocks to extract effective features from the EEG signal and focuses on the critical time frames with a temporal attention block. We conduct extensive experiments on the DEAP, SEED, and SEED-IV datasets to evaluate the performance of AMDET, and the results outperform the state-of-the-art baselines on all three datasets. Accuracy rates of 97.48%, 96.85%, 97.17%, and 87.32% were achieved on the DEAP-Arousal, DEAP-Valence, SEED, and SEED-IV datasets, respectively. Based on AMDET, we can achieve over 90% accuracy with only eight channels, significantly improving the possibility of practical applications.

I. INTRODUCTION

Emotion is a comprehensive psychological and physiological response of human beings to an external event or stimulus. It can greatly impact a person's behavior and thoughts, and in some cases even affect health and lead to illness. There is no doubt that emotions play an important role in life. On a matter of such significance, emotion recognition technology has been widely introduced and used in daily life, for example in mental illness detection, driver fatigue detection, and human-computer interaction. Therefore, more and more researchers have devoted themselves to this research. Emotion recognition can be broadly classified into two categories. One is based on human external responses, such as facial expressions [1], gestures [2], and voice intonation [3]. The other is based on human physiological signals, such as breathing, heart rate, body temperature, and the electroencephalogram (EEG) [4]. The former is easier to collect but more subjective. To elaborate, people may fake their facial expressions and behavior, or deliberately speak loudly to pretend they are angry. Even in the same mood, different people behave differently. In contrast, EEG detection is more objective because humans struggle to control their physiological signals to fake emotions. EEG is the bioelectric activity detected on the surface of the human scalp. It can be collected by portable and relatively inexpensive devices. The amplitude of the EEG signal in normal humans ranges from 10 µV to 200 µV [5], and its frequency from 0.2 Hz to 90 Hz [6]. EEG has a high temporal resolution, which allows for the recording of brain activity with a resolution of milliseconds. However, it is limited in its spatial resolution, which refers to the ability to accurately locate brain activity within specific regions of the brain. This limitation is caused by the physical constraints of the EEG collection device, as well as the interference of the electric field between different areas of the brain.
Despite these limitations, EEG remains a valuable tool for studying brain activity and has contributed significantly to our understanding of the brain. Numerous studies have shown that EEG can accurately reflect an individual's emotional state to a certain degree. The characteristics of EEG in the time domain, space domain, and frequency domain are all highly correlated with human emotional states. For example, in the frequency domain, alpha waves are enhanced when people are in a calm state, beta waves are intensified when the brain is active and highly focused, and gamma waves are associated with hyperactivity in the brain [7]. Researchers often use the ratio of beta waves to alpha waves as an indication of brain activeness [8], [9], and assess whether a person is in a happy state according to the power of theta waves [10]. Therefore, power characteristics such as power spectral density (PSD) [11] and differential entropy (DE) [12], [13] are often used as features of the EEG signal in many studies of emotion recognition [14]. The spatial properties of EEG are reflected in the close correlation between each emotional state and specific areas of the brain. The brain can be divided into four areas: the frontal lobe, parietal lobe, temporal lobe, and occipital lobe. The main functions of the frontal lobe are cognitive thinking and emotional demands. The parietal lobe responds to the tactile sense; it is also related to the body's balance and coordination. The temporal lobe is mainly responsible for auditory and olfactory sensations and is also associated with emotional and mental activity. Finally, the occipital lobe is in charge of processing visual information [7]. People's emotions trigger activities in specific brain areas. For example, the activity of the left frontal lobe of the brain is activated when people feel happy [15] and suppressed when people feel fearful [16]. Li et al. combined functional connectivity networks with local activation to validate the activities of local brain regions responsive to emotions and the interaction between the brain regions involved in those activities [17]. It is evident that the EEG signal holds promising characteristics that can be examined in the spatial domain. EEG has a high temporal resolution and therefore contains a lot of information in the time domain which should not be neglected. Some studies use statistical features to analyze the EEG signal, such as the mean, standard deviation, and difference computed over a time window. These characteristics can indicate whether the EEG oscillated smoothly or changed drastically during the time window. In the field of emotion recognition, the first difference (1ST) of the EEG is commonly used as a feature, defined as the mean of the absolute values of the first differences of the raw signal [18]. Stationarity is also worth considering when analyzing sequential signals. One measurement is the Lyapunov exponents, which are used to determine the stability of any steady-state behavior, including chaotic solutions [19]. The Fourier transform is a common way to analyze the frequency domain. However, it does not exhibit any time-domain characteristic, so researchers have proposed the Short Time Fourier Transform to compensate for this defect. The wavelet transform is another solution to the lack of time-domain information, and it is also commonly applied to analyze EEG.
It employs wavelets of different scales to model the signal, which maximizes the preservation of time-domain information. These extended transform methods precisely illustrate the necessity of the EEG temporal features. As mentioned above, EEG contains a potential abundance of information in the frequency, space, and time domains. Therefore, how to extract and make full use of this information becomes the greatest challenge. The main contributions of this paper are described below.

1) We propose a model named AMDET which excels at extracting features of EEG signals by employing a multi-dimensional global attention mechanism. AMDET outperforms other state-of-the-art methods on the DEAP, SEED, and SEED-IV datasets. We also conducted an ablation experiment to demonstrate the necessity of using all the information in the three domains of time, space, and frequency.

2) We conducted extensive visualization experiments using a Grad-CAM based algorithm to reveal the focus of the model on channels and figure out the brain regions that contribute more to emotion recognition.

3) We further investigated the redundancy of EEG signals by reducing the number of channels in the experiment, and we validate the effectiveness of our model with only a few EEG channels. AMDET can achieve high performance even when using less than 20 percent of the EEG channels, which offers the possibility of practical applications.

The remainder of this article is organized as follows. Related works are described in Section II. Then, Section III introduces the details of the proposed AMDET, including EEG signal preprocessing, multidimensional feature extraction, and the classification algorithm. The experiments presented in Section IV are designed to prove the effectiveness of the proposed AMDET. Section V shows the experiment results and discussion. Section VI draws conclusions and future work.

II. RELATED WORK

Current EEG-based emotion recognition can be divided into two main methods. One is to extract distinguishable features first and then use traditional machine learning for classification. The other is to use end-to-end deep learning, which completes feature extraction and classification simultaneously. Deep learning has outperformed traditional machine learning methods in some areas, such as computer vision and natural language processing. Atkinson et al. combined statistical-based feature selection methods and support vector machine (SVM) emotion classifiers and achieved decent results [20]. The wavelet transform is a widely used method for time-frequency domain analysis as well as feature extraction of EEG [21]. Li et al. used the Discrete Wavelet Transform to divide EEG signals into four frequency bands and calculated their entropy and energy as the features for a k-nearest neighbor (KNN) classifier [22]. Subasi et al. employed the Tunable Q Wavelet Transform (TQWT) as a feature extractor and then used a rotation forest ensemble as a classifier, which utilized different classification algorithms such as KNN, SVM, artificial neural networks, random forests, and four other types of decision tree algorithms [23]. It is difficult to find representative and valid features in complex cognitive processes due to the great differences among subjects. Compared to traditional machine learning algorithms, deep learning does not require prior knowledge and manual feature extraction, allowing it to extract features directly from complex data.
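To make the wavelet-based pipeline above concrete, the following minimal sketch (our illustration, not code from any of the cited papers) decomposes a single-channel EEG segment with a discrete wavelet transform and computes per-band energy and Shannon-style entropy features of the kind used by Li et al. [22]. The wavelet family ("db4") and decomposition level are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_band_features(signal, wavelet="db4", level=4):
    """Energy and entropy for each DWT sub-band of one EEG channel."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_L, cD_L, ..., cD_1]
    names = [f"A{level}"] + [f"D{l}" for l in range(level, 0, -1)]
    features = {}
    for name, c in zip(names, coeffs):
        energy = np.sum(c ** 2)
        p = (c ** 2) / (energy + 1e-12)           # normalized coefficient energies
        entropy = -np.sum(p * np.log(p + 1e-12))  # Shannon entropy of the sub-band
        features[name] = (energy, entropy)
    return features

# Example: 1 s of synthetic 200 Hz EEG-like noise
rng = np.random.default_rng(0)
print(dwt_band_features(rng.standard_normal(200)))
```

These per-band scalar features would then be concatenated across channels and fed to a classical classifier such as KNN or SVM.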
Within deep learning, convolutional neural networks (CNNs) can extract local characteristics of the data, recurrent neural networks (RNNs) excel at extracting information from time-series data, and the Transformer focuses its attention on the more influential parts of the data. Designing the network structure so that the model fully extracts the information from the EEG signal is a critical step. Du et al. employed a self-attention mechanism in the time and space domains to extract critical EEG features [24]. Li et al. first extracted the DE features of each channel, arranged these features into a two-dimensional signal according to their position on the brain surface, and then utilized a hierarchical convolutional neural network (HCNN) to extract and classify the spatial representation [25]. In some research, Long Short-Term Memory (LSTM) was used to learn temporal features (PSD, DE) from EEG signals [26]. Tao et al. introduced the self-attention mechanism into their network model to assign weights to each channel and used a CNN and an RNN to obtain the time-domain and space-domain features of EEG, respectively, finally achieving good results [27]. Jia et al. designed a 3D attention mechanism to realize the complementarity among the spatial-spectral-temporal features and discriminative local patterns in all features [28]. Xiao et al. proposed a four-dimensional attention-based neural network, which fuses information from different domains and captures discriminative patterns in EEG signals [29].

Fig. 1. The framework of our AMDET model for EEG emotion recognition, which consists of a spectral transformer block, a spatial transformer block, a temporal attention block, and a fully connected (FC) layer. The inputs of the model are 3D tensors containing differential entropy (DE) and power spectral density (PSD) extracted from each EEG channel and different time segments in multiple frequency bands including theta, alpha, beta, gamma1, and gamma2 [50-75 Hz]. The spectral transformer block and the spatial transformer block are used to discover and focus on the significant parts of the input tensor in the spectral and spatial domains separately. Similarly, the temporal attention block aggregates all the frames and figures out the critical frame. Finally, a FC layer is used for classification. The figure is drawn with the BrainNet Viewer toolkit [31].

III. METHOD

To fully capture the EEG signals' abundant information in the frequency, space, and time domains, we introduce the global attention mechanism [30] into our model. Fig. 1 shows an overview of AMDET. It contains a spatial attention block, a spectral attention block, a temporal attention block, and a classification layer. The preprocessing of the EEG signal is introduced first.

A. Pre-processing

As shown in Fig. 2, the original EEG data need to be preprocessed to generate EEG samples with three dimensions. To exploit the time-domain information of the EEG data, a 3-second window is first used to perform a non-overlapping segmentation of the raw EEG data to generate samples. Then, each sample is divided without overlap using a 0.5-second window, and the EEG signal in each window is considered as a frame of the sample. A single EEG sample contains multiple consecutive frames to preserve the time-domain characteristics of the EEG signal. In addition, since EEG signals are collected from multiple channels, and different channels represent different brain regions, we preserve the EEG channels to retain spatial information.
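A minimal sketch of this segmentation, together with the band-wise DE feature defined later in this subsection, might look as follows. This is a simplified illustration under stated assumptions — a 200 Hz sampling rate, Butterworth band-pass filters, band cut points chosen by us, and the Gaussian closed form of DE — not the authors' released code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200                      # assumed sampling rate (Hz), as for SEED
BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31),
         "gamma1": (31, 50), "gamma2": (50, 75)}   # cut points are assumptions

def to_3d_samples(eeg, fs=FS, sample_s=3, frame_s=0.5):
    """eeg: (channels, time) -> (n_samples, n_frames, channels, frame_len)."""
    c, t = eeg.shape
    spl, frl = sample_s * fs, int(frame_s * fs)
    n = t // spl
    x = eeg[:, :n * spl].reshape(c, n, spl // frl, frl)
    return x.transpose(1, 2, 0, 3)

def de_features(frames, fs=FS):
    """Gaussian DE = 0.5*ln(2*pi*e*var) per band, channel, and frame."""
    out = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filt = filtfilt(b, a, frames, axis=-1)
        out.append(0.5 * np.log(2 * np.pi * np.e * filt.var(axis=-1)))
    return np.stack(out, axis=-1)    # (samples, frames, channels, bands)

rng = np.random.default_rng(1)
samples = to_3d_samples(rng.standard_normal((62, 60 * FS)))
print(de_features(samples).shape)    # (20, 6, 62, 5)
```

Stacking the analogous PSD features alongside DE would give the 2f feature rows of the 3D tensor described next.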
Moreover, it has been shown that the high-frequency part of the EEG signal has a greater effect on emotion recognition than the other frequency parts [32]. Therefore, we divide the EEG signal into multiple frequency bands and extract features on each band. Each frame in a sample is filtered into the theta, alpha, beta, gamma1, and gamma2 [50-75 Hz] bands, respectively. Since PSD and DE features have been shown to be effective in EEG emotion identification [12], [33], we extract PSD and DE features for each frame on all five frequency bands for each channel separately.

Fig. 2. The pre-processing of the original EEG signals and the generation of the 3D tensor. We segment T seconds of original EEG signal into N frames with a 0.5-second non-overlapping sliding window. Each frame is first decomposed into five frequency bands via the Fourier transform, and DE and PSD features are then extracted from each frequency band and each of the C channels.

The PSD is defined as

$$P = E[x^2] \approx \frac{1}{N}\sum_{n=1}^{N} x_n^2,$$

where x denotes a random signal, i.e., the EEG signal in one frame. DE is a generalized form of Shannon's information entropy for continuous variables and can be used to measure the amount of information. DE is defined as

$$h(X) = -\int_{-\infty}^{\infty} p(x)\,\log p(x)\,dx,$$

where p(x) denotes the probability density function of the signal. When the random variable approximately obeys the Gaussian distribution $N(\mu, \sigma^2)$, the DE calculation can be simplified to

$$h(X) = \frac{1}{2}\log\left(2\pi e \sigma^2\right),$$

where µ and σ denote the mean and standard deviation of the signal x respectively, and e denotes the Euler constant. Therefore, each sample $x \in \mathbb{R}^{2T \times 2f \times C}$ has three dimensions after feature extraction, where T represents the time length of the sample, f represents the number of frequency bands, and C represents the number of channels of the EEG data. Finally, z-score normalization is applied to each sample.

B. Spectral attention block

We extract the PSD and DE features of the EEG signals in different frequency bands. The EEG signals in different frequency bands reflect different physiological states of human beings. For example, low-frequency EEG signals are often seen when humans are sleeping or resting, while high-frequency EEG signals are usually seen when people are anxious or subject to strong emotional fluctuations [7]. Therefore, EEG signals can be discriminative for emotion in the frequency domain. To extract the spectral characteristics, we perform cross-band and cross-feature attention calculations on the features extracted in different frequency bands in the spectral attention block. The spectral attention block can be expressed as

$$h^{fa}_{t,l} = \mathrm{LN}(\mathrm{MHA}(z^{fa}_{t,l-1}) + z^{fa}_{t,l-1}),$$
$$z^{fa}_{t,l} = \mathrm{LN}(\mathrm{MLP}(h^{fa}_{t,l}) + h^{fa}_{t,l}), \quad l = 1, 2, \cdots, L, \quad t = 1, 2, \cdots, 2T,$$

with $z^{fa}_{t,0} = x_t + E^{fa}_{pos}$, where $E^{fa}_{pos} \in \mathbb{R}^{2f \times C}$ denotes the position encoding of the frequency-domain features, t denotes the frame in the sample, and l denotes the layer index. As shown in Fig. 1, we apply multi-head attention (MHA) [30] in the attention calculation of the EEG signal and then add a multi-layer perceptron (MLP) after it in the transformer encoder. Residual connections [34] and layer normalization (LN) [35] are applied to both the MHA and MLP blocks to accelerate network training. To learn the common features in EEG signals over different time periods, we use the same spectral attention block for different frames; this also greatly reduces the number of parameters of the model. In other words, for different frames in the same sample, the transformer encoders in the spectral attention block share the same parameters.

C. Spatial attention block

The channels of the EEG signal represent the locations on the brain sampled by the electrodes.
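Both the spectral block above and the spatial block introduced here reuse the same encoder pattern. For readers who prefer code, the following minimal PyTorch sketch shows that shared encoder, the frame-level temporal weighting, and the FC classifier described in the next two subsections. Dimensions, layer counts, the exact temporal scoring, and the omission of position encodings are our assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Post-norm transformer encoder: LN(MHA(z)+z) then LN(MLP(h)+h)."""
    def __init__(self, dim, heads=2, mlp_ratio=2):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, z):
        h = self.ln1(self.mha(z, z, z, need_weights=False)[0] + z)
        return self.ln2(self.mlp(h) + h)

class AMDETSketch(nn.Module):
    """Spectral tokens (2f rows, dim C) -> spatial tokens (C rows, dim 2f)
    -> temporal attention pooling -> FC classifier."""
    def __init__(self, n_bands2=10, n_ch=62, n_cls=3):
        super().__init__()
        self.spec = EncoderBlock(dim=n_ch)          # attends across 2f feature rows
        self.spat = EncoderBlock(dim=n_bands2)      # attends across C channel rows
        self.score = nn.Linear(n_bands2 * n_ch, 1)  # per-frame attention score
        self.head = nn.Linear(n_bands2 * n_ch, n_cls)

    def forward(self, x):                 # x: (batch, frames, 2f, C)
        b, t, f2, c = x.shape
        z = x.reshape(b * t, f2, c)
        z = self.spec(z)                  # encoder weights shared across frames
        z = self.spat(z.transpose(1, 2))  # (b*t, C, 2f)
        z = z.reshape(b, t, -1)           # flatten each frame
        alpha = torch.softmax(self.score(z), dim=1)   # (b, t, 1)
        pooled = (alpha * z).sum(dim=1)   # weighted sum over frames
        return self.head(pooled)

logits = AMDETSketch()(torch.randn(4, 6, 10, 62))
print(logits.shape)   # torch.Size([4, 3])
```

The key design point mirrored here is weight sharing: one encoder processes every frame, so the parameter count is independent of the number of frames per sample.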
Similar to the EEG frequency characteristics reflecting different physiological states of humans, each brain region is also responsible for different functions. For example, the frontal part of the cerebral cortex is generally responsible for human physiological emotions [7]. Correlation between EEG signals from different channels reflects the functional connectivity between different brain regions. Therefore, we perform self-attention calculations on channels to explore the functional connectivity of the brain, which contains available information in space. The spatial attention block can be expressed as

$$h^{sa}_{t,l} = \mathrm{LN}(\mathrm{MHA}(z^{sa}_{t,l-1}) + z^{sa}_{t,l-1}), \quad l = 1, 2, \cdots, L, \quad t = 1, 2, \cdots, 2T, \quad (8)$$

with $z^{sa}_{t,0} = \mathrm{tran}(z^{fa}_{t,L}) + E^{sa}_{pos}$, where $E^{sa}_{pos} \in \mathbb{R}^{C \times 2f}$ denotes the position encoding of spatial information and tran() denotes the transpose operation. Similar to the spectral attention block, the transformer encoder is employed, and the parameters of the encoders in this block are shared across different frames of the same sample.

D. Temporal attention block

EEG signals are time series acquired by sampling at different times. When humans are stimulated or have emotional fluctuations, this is reflected in the changes of the EEG signals over time. Therefore, EEG signals also carry a large amount of useful information in the time domain. Since emotional fluctuations may occur only in a specific period, not every frame in the sample is critical to the analysis. Hence, in the temporal block, the model assigns an attention score to the frames within the sample to reflect the importance of each frame, as shown in Fig. 1. The critical frames are then emphasized and retained by receiving a large weight in the weighted summation. To calculate the attention score of each frame, the output of the previous spatial attention block is flattened from $Z^{sa}_{L} \in \mathbb{R}^{2T \times 2f \times C}$ to $Z^{ta} \in \mathbb{R}^{2T \times 2fC}$. The computation is as follows:

$$\alpha = \mathrm{softmax}(Z^{ta} W^{\mathrm{T}}_{tem} + b_{tem}), \quad O^{ta} = \alpha^{\mathrm{T}} Z^{ta},$$

where $W_{tem}$ and $b_{tem}$ are learnable parameters, and $O^{ta}$ denotes the output of the temporal attention block.

E. Classification layer

After the raw EEG signals have been passed through the spectral attention block, the spatial attention block, and the temporal attention block, the output is a representation that integrates all the available information across multiple dimensions. In order to fuse the global information of the representation and output a final classification result, a classification layer is employed. The classification layer is a single fully connected neural network layer. After flattening the output of the temporal attention block into a 1D vector, the classification layer is used to obtain the final results, and the whole neural network is optimized with the cross-entropy loss function

$$\mathcal{L} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C} y^{c}_{n} \log \hat{y}^{c}_{n},$$

where N denotes the batch size and C denotes the number of categories. $y^{c}_{n}$ and $\hat{y}^{c}_{n}$ are the one-hot label and the predicted probability of the corresponding category, respectively.

IV. EXPERIMENTS

A. Dataset

DEAP is a public dataset for EEG emotion recognition [36]. 32 subjects were asked to watch 40 one-minute music videos and record their emotion levels of valence and arousal from 1 to 9 based on an online self-assessment. Depending on these levels, we divided the dataset into two classes with a threshold value of 5. The EEG signals were acquired according to the 10/20 system at 512 Hz with 32 channels of EEG. The data were then downsampled to 128 Hz and passed through a filter between 4 and 45 Hz. It is worth noting that each trial consists of a 3-second pre-trial baseline and 60 seconds of emotion-related signals.
Following the previous work [29], we calculated DE features by subtracting the baseline DE features derived from the pre-trial signals. The SEED [33] and SEED-IV [37] datasets were collected by the BCMI lab at Shanghai Jiao Tong University and have been widely used in emotion recognition research. The SEED dataset contains three emotions: positive, negative, and neutral. Subjects were asked to watch videos of the three emotions to capture the corresponding EEG signals. A total of 15 subjects participated in the experiment. Each subject watched 15 videos, 5 videos for each emotion, and each video was about 4 minutes long. A 45-second self-assessment period and a 15-second break were set between videos. The data were collected with the 62-channel ESI NeuroScan System, downsampled to 200 Hz, and filtered with a band-pass frequency filter from 0-75 Hz. SEED-IV contains the emotions happy, sad, neutral, and fear. In the same way, 15 subjects participated in the experiment and were asked to watch corresponding emotional film clips. Each subject's experimental task contained 24 trials, each consisting of a 5-second start hint, a 2-minute video, and a 45-second self-assessment. As with SEED, the EEG signals were acquired using the ESI NeuroScan System with 62 channels and downsampled to 200 Hz. After downsampling, a 1-75 Hz band-pass filter was employed to remove noise.

B. Experiment Design

In order to evaluate our model on the emotion recognition task, we designed the following experiments for a comprehensive comparison. Firstly, we compared AMDET with other current state-of-the-art models. Then, we designed ablation experiments to explore the effect of each part of our model. At last, a visualization experiment was conducted to investigate the characteristics of EEG data. Below is a description of our experiments:

1) Baseline models:
• SVM [38]: A generalized linear classifier that solves for the maximum-margin hyperplane between samples.
• BiHDM [39]: A recurrent neural network-based model of left- and right-hemisphere differences for EEG emotion recognition.
• RGNN [40]: A regularized graph neural network considering the biological topology among different brain regions to capture both local and global relations among different EEG channels.
• 4D-CRNN [41]: A convolutional recurrent neural network that extracts spatial, spectral, and temporal domain features of EEG signals for emotion recognition.
• SST-EmotionNet [28]: An attention-based two-stream CNN that simultaneously integrates spatial-spectral-temporal features in a single network framework.
• 4D-aNN [29]: A 4D attention-based neural network consisting of a CNN and a bidirectional LSTM.

Fig. 3. The performance of AMDET on the DEAP dataset. We conducted the five-fold cross-validation experiment for each subject.

2) Ablation Experiment: In our approach, we customize the model with different structures in each of the three dimensions to capture the abundant features of the EEG signal. In order to investigate the role of each part of the model, we conducted ablation experiments to explore the performance of the model after removing the spectral attention block, the spatial attention block, and the temporal attention block, respectively.

3) Visualization and EEG channel selection: We design our model based on an attention mechanism, which makes the model learn and focus on the significant parts in the spectral, spatial, and temporal domains.
As a result, AMDET achieves state-of-the-art performance. At the same time, it is of value to know what the trained model has learned for EEG emotion recognition, i.e., the interpretability of the deep model — for instance, to explore the correlation between emotions and each channel or each frequency band, or to find specific time-domain characteristics of EEG. We adopt Grad-CAM [42] to visualize where the model's attention lies. Grad-CAM (Gradient-weighted Class Activation Mapping) uses gradients to measure the influence of the elements in the features extracted by the model on the prediction results. It is able to highlight the important regions in the image for predicting the concept. Different channels in EEG data represent different regions of the cerebral cortex, and different brain regions are responsible for different physiological functions. On the SEED dataset, the number of channels is 62, obtained from different brain positions. However, an excessive number of channels not only increases the computational effort but also makes the practical application of brain-computer interaction difficult. Therefore, it is of great importance to reduce the number of channels used when analyzing EEG data. We aim to identify crucial brain regions and channels for emotion recognition. Based on this, we further made a selection of EEG channels. We reduced the number of EEG channels used for model training from 62 to 32, 16, and 8 channels, respectively, and discuss the effects of the number of channels on recognition performance.

C. Experiment Detail

All experiments in this paper were conducted on an NVIDIA TITAN Xp GPU. The numbers of layers in the spectral attention block and the spatial attention block are both set to 1, and the numbers of heads in the two blocks are both set to 2. The number of frequency bands is set to 4 for the DEAP dataset (theta, alpha, beta, gamma1) and to 5 for the SEED and SEED-IV datasets (theta, alpha, beta, gamma1, gamma2). We used the AdamW optimizer with learning rate, weight decay, and batch size of 1e-3, 1e-6, and 16, respectively, to optimize the neural network. For the DEAP dataset, we used only the DE feature. We conducted experiments on each subject. For the SEED and SEED-IV datasets, we calculated the average accuracy of each subject over the 3 experiment sessions. We used five-fold cross-validation for all experiments.

V. RESULTS AND DISCUSSION

A. Compared with baseline

In order to compare our model with the baseline models, we conducted experiments on the DEAP-Arousal, DEAP-Valence, SEED, and SEED-IV datasets. Table I shows the experimental results. The experimental results show that the deep learning methods are generally better than the traditional machine learning methods; the accuracy of SVM is only 89.33%, 89.99%, 83.99%, and 56.61%. RGNN and BiHDM explore the spatial properties of EEG signals and achieve 94.24%/79.37% and 93.12%/74.35% accuracy on the SEED and SEED-IV datasets respectively. 4D-CRNN does not just focus on features in the spatial domain but extracts the spectral-spatial-temporal features of EEG with a CNN and an RNN, reaching a better accuracy of 94.22% on the SEED dataset. In addition, it achieves 94.22% and 94.58% accuracy on the DEAP dataset. SST-EmotionNet and 4D-aNN integrated the attention mechanism into their models in combination with CNN and LSTM.
They also fused the features of EEG signals across all domains, finally achieving 96.02%/84.92% and 96.25%/86.77% on the SEED and SEED-IV datasets, respectively. Our proposed model utilizes a transformer-based method to extract the frequency and spatial features of the EEG signal, then uses a temporal attention block to help the model focus on significant frames. The final result outperforms all baseline models, reaching 96.85%, 97.48%, 97.17%, and 87.32% on the four datasets. It is worth noting that the results of the approaches focusing on multiple domains are superior to those that study only a single domain, which illustrates the value of exploring the multi-dimensional characteristics of the EEG signal. At the same time, the comparison with the similar attention-based models SST-EmotionNet and 4D-aNN indicates that the Transformer is more appropriate than CNN and LSTM for detecting critical and discriminative features in different domains. AMDET also has the lowest standard deviation compared to the baseline models, which means it is more adaptable to different people. Fig. 3, Fig. 4, and Fig. 5 present the per-subject experiment results on the DEAP, SEED, and SEED-IV datasets. For the DEAP dataset, there are 32 subjects in total and 2 experiments, arousal and valence classification; almost all accuracies are above 95%, except for subjects 5, 22, and 32, whose accuracies are 95.5%/92.25%, 92.5%/91.625%, and 94%/94% on arousal and valence classification. The SEED dataset includes 15 subjects, and each subject has three days of experimental data. For classification on the SEED dataset, 6 subjects — subjects 1, 2, 3, 4, 7, and 14 — performed below the average accuracy of 97%. The SEED-IV dataset likewise includes 15 subjects and 3 days of data, with an additional emotion, fear, for classification. For classification on the SEED-IV dataset, 5 subjects had accuracy below 85%: subjects 1, 2, 7, 9, and 11. In the SEED-IV dataset, the accuracy for the three days varied greatly, with the 2nd day's accuracy being about 6% lower than the 1st day's and the 3rd day's about 4% lower. There may have been some irregularities during the data collection on the 2nd and 3rd days. The experiment results show that AMDET can achieve excellent performance with higher accuracy and lower standard deviation on the DEAP, SEED, and SEED-IV datasets. They also indicate the effectiveness and necessity of fusing multi-dimensional EEG information for classification.

Fig. 6. Results of the ablation study on the DEAP-Arousal, DEAP-Valence, SEED, and SEED-IV datasets. We remove the spectral transformer block, the spatial transformer block, and the temporal attention block separately to investigate the role of each module.

B. Ablation Study

In our method, the model has three blocks for feature extraction of EEG signals, which are used to calculate attention in three different dimensions. To explore the role of the different blocks of our model in classification, we remove the spectral attention block, the spatial attention block, and the temporal attention block, respectively, and retain only the remaining two blocks. The results are shown in Fig. 6. We found that the spectral layer is more important than the spatial and temporal layers. For arousal and valence classification on the DEAP dataset, the performance of the model decreased significantly when the spectral attention block was removed, by 13.53% and 13.16%, respectively.
In comparison, the model accuracy dropped only slightly when the spatial attention block was removed, by 0.83% and 0.81%, respectively. Similarly, for the SEED and SEED-IV datasets, the accuracy decreased the most when the spectral attention layer was removed, by 4.57% and 11.28%, respectively. The spatial attention layer affected the accuracy the least on the SEED-IV dataset, with only a 4.83% decrease after removal. On the SEED dataset, the impacts of removing the spatial attention block and the temporal attention block were not significantly different, with decreases of 1.3% and 1.16%, respectively. Therefore, we consider the features in the different frequency bands to be the most important for the emotion recognition task. In other words, the model mainly relies on the frequency-domain features to classify different emotions. After extracting DE and PSD features, EEG signals are more discriminative. The spatial and temporal features of EEG data, on the other hand, are less important for emotion recognition than the frequency-domain features and do not play a decisive role.

C. EEG channel visualization and channel selection

To explore what the trained model learns, we adopt Grad-CAM [42] to visualize the focus of the model. We use the feature maps and gradients to generate a heatmap that shows the important parts having a greater impact on the prediction. In order to identify the critical channels for emotion recognition, we need to investigate the influence of each channel on the predictions of the trained model. After training the model with all 62 channels, we employed Grad-CAM for visualization. The channels were ranked according to their weights for emotion classification, and those with the greater weights were selected for subsequent experiments. Fig. 7 shows the top 32, top 24, top 16, and top 8 channels for the 8th subject on the SEED dataset. It can be seen that the top 8 channels are P6, C5, TP8, F6, FP2, PO8, FC5, and F5, which are roughly concentrated over the temporal lobe and the parietal lobe in terms of cerebral location. The result shows that these regions of the cerebral cortex have a greater effect on the results of emotion recognition than the other parts.

Fig. 8. Results of the channel reduction experiment for subjects 1, 4, 8, and 12 on the SEED dataset. We reduce the number of channels of the input tensor from 62 channels to 2 channels in the order given by the heat map.

In addition to the channel visualization of EEG signals, we further investigated the impact of reducing the number of channels. We believe that there may be much redundant information between EEG channels. On the one hand, reducing the number of channels shortens the computation time; on the other hand, channels redundant for the emotion recognition task can introduce noise. Therefore, we conducted EEG channel reduction experiments based on the previous channel visualization results. We reduced the number of channels sequentially starting from 62 with a stride of 4. The experiment results are shown in Fig. 8. After selecting the 32 channels with the highest importance weights, the accuracy of emotion recognition decreased by only 1% compared to that with all channels used. As a result, it is clear that these selected 32 channels contain the majority of the information needed for the emotion recognition task, while the remaining channels reflect human emotion less and have an inconsiderable effect on the final task.
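A gradient-based channel ranking of this kind could be sketched as follows. This is our simplified, saliency-style stand-in for the Grad-CAM procedure (not the authors' implementation), reusing the hypothetical AMDETSketch model from the earlier listing.

```python
import torch

def channel_importance(model, x, labels):
    """Rank EEG channels by mean |gradient x input| toward the true-class logit.

    x: (batch, frames, 2f, C) feature tensor; returns channel indices, best first.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Sum the labeled-class logits so one backward pass covers the whole batch.
    logits.gather(1, labels.view(-1, 1)).sum().backward()
    saliency = (x.grad * x).abs()            # gradient-weighted importance
    score = saliency.mean(dim=(0, 1, 2))     # aggregate over batch, frames, bands
    return torch.argsort(score, descending=True)

# Example with the sketch model defined earlier (shapes are illustrative).
model = AMDETSketch()
x = torch.randn(8, 6, 10, 62)
labels = torch.randint(0, 3, (8,))
print(channel_importance(model, x, labels)[:8])   # top-8 channel indices
```

Channels at the head of this ranking would be retained first as the channel count is reduced from 62 downward.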
When the number of channels was reduced to 24, 16, and 8, the accuracy of emotion recognition decreased gradually, by 2%, 3%, and 10%, respectively. It can be seen that even when the number of channels is reduced to 8, our model can still achieve an accuracy of about 90%. However, reducing the number of channels below 8 had a substantial impact on the task, yielding a large decrease in accuracy. Reducing the number of channels lowers the cost in time and computation: the numbers of parameters and FLOPs are 0.30M and 0.03G when using 62 channels, whereas for 8 channels they are only 0.078M and 0.0039G. In addition, using fewer channels at inference is of great importance for putting EEG-based applications into practice, as it means a smaller and more portable collection device.

VI. CONCLUSION

In this paper, we propose a transformer-based model, namely AMDET, for EEG emotion recognition. AMDET achieved state-of-the-art results by extracting and fusing temporal-spatial-frequency features of the EEG signal. Without a CNN or RNN enhancing the transformer, our model is based purely on the self-attention mechanism, which illustrates the potential of transformers for EEG pattern recognition tasks. The results of the ablation experiments show that information from all three domains is necessary to obtain favorable results on the EEG task, while the information in the frequency domain is of particular significance. Finally, we conducted a channel reduction experiment that selects the channels contributing the most to the results by visualizing the focus of the model. This reduces the computational effort while preserving recognition accuracy. On the one hand, the experiment results demonstrate the strong feature extraction ability of our model, which performs well even with few EEG channels; on the other hand, this may also indicate large redundancy of the EEG signal in the channel dimension. Currently, the visualization is implemented based on Grad-CAM, yet deep learning visualization methods more appropriate for EEG deserve further research, which could explore the role of different channels in different EEG paradigms. Improved visualization methods will not only make EEG devices more portable by reducing the number of electrodes but may also contribute to the development of neuroscience.
Hematoloechus sp. attachment shifts endothelium in vivo from pro- to anti-inflammatory profile in Rana pipiens: evidence from systemic and capillary physiology

This prospective, descriptive study focused on lung flukes (Hematoloechus sp., H) and their impact on systemic and individual capillary variables measured in pithed Rana pipiens, a long-standing model for studies of capillary physiology. Three groups were identified based on Hematoloechus attachment: no Hematoloechus (No H), Hematoloechus not attached (H Not Att), and Hematoloechus attached (H Att). Among 38 descriptive, cardiovascular, and immunological variables, 18 changed significantly with H. Symptoms of H included weight loss, elevated immune cells, heart rate variability, faster coagulation, lower hematocrit, and fluid accumulation. Important capillary function discoveries included median baselines for hydraulic conductivity (Lp) of 7.0 (No H), 12.4 (H Not Att), and 4.2 (H Att) × 10⁻⁷ cm·s⁻¹·cmH₂O⁻¹ (P < 0.0001), plus seasonal adaptation of sigma delta pi [σ(πc − πi), P = 0.03]. Pro- and anti-inflammatory phases were revealed for Lp and plasma nitrite/nitrate concentration ([NOx]) in both H Not Att and H Att, whereas capillary wall tensile strength increased in the H Att. H attachment was advantageous for the host due to lower edema and for the parasite via a sustained food source, illustrating an excellent example of natural symbiosis. However, H attachment also resulted in host weight loss: in time, a conundrum for the highly dependent parasite. The study increases overall knowledge of Rana pipiens by revealing intriguing effects of H and previously unknown, naturally occurring seasonal changes in many variables. The data improve Rana pipiens as a general scientific and capillary physiology model. Diseases of inflammation and stroke are among the clinical applications.

INTRODUCTION

For over a century, Rana pipiens (North American leopard frog) has been a reliable animal model used for a wide variety of scientific investigations into organ and tissue physiology as well as for student laboratory demonstrations. In the United States, Rana pipiens are caught in the wild and sold by companies located primarily in the eastern half of the country. In our laboratory, Rana pipiens was selected to study capillary physiology as it is a lower vertebrate that spawned and matured in a natural environment.
The lung fluke Hematoloechus sp. (H) was first reported in the early 19th century. H have a complicated taxonomy, and at least 89 species of the genus are known to exist (1). In general, H have a life cycle with two intermediate hosts, and anurans are its definitive host (2). H eggs are consumed by the first intermediate host, an air-breathing freshwater snail (planorbid). Within the snail, the eggs hatch into sporocysts that produce cercariae, the larval form of trematodes. Cercariae are then shed from the snail and enter dragonflies or damselflies (odonata). When a frog (or toad) eats a dragonfly or damselfly carrying cercariae, the parasite makes its way through the esophagus and lodges in the lungs within 5 days. Once the lung fluke is located in its definitive host, the life cycle of H is complete (3). In its natural habitat, Rana pipiens is an opportunistic feeder, eating a variety of invertebrates including ants, beetles, and dragonflies. Rana pipiens is a definitive host of H (4). Although the parasite itself is well studied, very little is known about how H affect Rana pipiens.

For studies of capillary function (hydraulic conductivity, water permeability, Lp), Rana pipiens has long been considered a gold-standard model. Data from frog capillaries have been published for decades in primary and review articles (5)(6)(7). Frog capillary data have been used to test mathematical models of permeability (8) and have provided the target baseline for Lp in cultured endothelial monolayers (9). This long history and broad application obligate investigations into new aspects of the model to better understand results from past, present, and future studies.

The mesentery is a thin, transparent tissue adjacent to the intestines that can be exteriorized with minimal trauma in Rana pipiens, allowing access to its network of capillaries. The location of each capillary within the microvascular network is known, making the model a useful bridge between cultured endothelial cells and whole-organ studies of permeability. Aided by an inverted light microscope, individual capillaries are cannulated in situ with a glass micropipette, and the volume flux (Jv) and surface area (S) of each capillary are measured directly. Blood perfusion of each capillary is maintained up to the moment of cannulation, and neural input and lymph flow remain intact. The approach was first used by Landis in 1927 (10) and later modified by Michel and colleagues (11). In the past, Lp values <10.0 × 10⁻⁷ cm·s⁻¹·cmH₂O⁻¹ have been used by some investigators as a criterion for including capillary Lp results in the literature (12). Capillaries with Lp values >10.0 × 10⁻⁷ cm·s⁻¹·cmH₂O⁻¹ have been considered "inflamed." This exclusion criterion for single-capillary data has not been substantiated by objective physiological indicators of inflammation.

The purpose of the present study was to evaluate the impact of lung H on systemic and capillary variables measured in Rana pipiens. Baseline time series data were collected across the year for each variable to establish a useful reference. A secondary analysis was performed on statistical outliers of capillary Lp to identify underlying physiological correlates of high Lp values.

Animal Procedures

Wild-caught Rana pipiens [male; n = 422 (No H = 228; H Not Att = 100; H Att = 94); weight = 34.8 (SD 6.0) g; length = 19.7 (SD 0.8) cm; number of H: median (± 25%/75%) = 2 (± 1/4), range = 1-22; 22 samples over 7 yr; Charles D.
Sullivan Co., Nashville, TN] were determined to be healthy by visual inspection. Upon arriving at the animal care facility, each frog was examined and placed in a gentamicin (1.3 mg·mL⁻¹) bath for the first 24 h. The frogs were then moved to containers with dry areas and fresh, dechlorinated water that was filtered continuously. The frogs were cage sedentary and housed six per container. Containers were placed on racks in a room where the temperature was maintained at 15 °C, and light was controlled on a 12-h:12-h light:dark cycle. The frogs were fed weekly. Once every 2 weeks they were fed individually using a 1-mL syringe filled with vegetable beef baby food (Gerber), and on the alternate week, crickets were released into each container. Data were collected from 2010 to 2017. Protocols were approved and monitored by the Animal Care and Use Committee at Montana State University, Bozeman, MT.

Surgery and Mesenteric Tissue Preparation

Each frog was pithed cerebrally, and one drop of blood was placed on a glass slide for measurement of clotting time. Cotton was placed in the cranial cavity to separate cerebral and spinal column nerve tissue. Body weight, length, and displaced volume were measured, the frog was placed supine on a pad, and heart rate was counted. An incision was made through the skin and muscle wall to expose the contents of the abdomen. A loop of small intestine was exteriorized and draped carefully over a quartz pillar, which allowed the mesenteric tissue to lie flat atop the pillar. The entire tissue preparation was kept cool (14 °C-16 °C, type-T thermocouple wire, Digi-Sense, Cole Parmer, Vernon Hills, IL) and moist with frog Ringer solution superfused continuously (≈3 mL·min⁻¹) from a reservoir and siphoned away with a vacuum line. The tissue and its microvascular bed remained intact, attached to the frog, and blood-perfused during the entire protocol.

Blood and Lymph Fluid Collection

Just before the mesenteric tissue was exposed, a small incision was made in the skin to collect lymph fluid for later analysis of skin lymph protein concentration ([skpr]). After the area between the skin and abdominal wall was carefully swabbed dry, blood was drawn into three heparinized microhematocrit tubes (74 µL each; not less than 2 USP units of ammonium heparinate per tube; Thermo Fisher Scientific) from a peripheral vein located along the interior surface of the skin near the right axilla and exterior to the abdominal cavity. The first sample was used for measurement of systemic hematocrit (sysHct) and plasma protein concentration ([plpr]); the second for systemic red blood cell concentration ([RBC]), white blood cell concentration ([WBC]), activated white blood cell concentration ([aWBC]), and hemoglobin concentration ([Hb]); and the third was prepared for measurement of plasma nitrite/nitrate concentration ([NOx]). Next, a small incision was made in the abdominal wall, and lymph fluid was collected for later analysis of abdominal cavity fluid protein concentration ([abpr]). Surgical and body fluid sampling procedures have been published previously (13)(14)(15)(16)(17).

Necropsy

After the experimental protocol was completed, each animal was euthanized and a detailed necropsy was performed. Lungs and internal organs were inspected for parasites and signs of distress. If present, H in the lungs were removed and counted (18). Lungs, spleen, and liver were dissected free, blotted dry, and weighed.

Clotting time.
Coagulation of a single drop of blood was determined visually. Time 0 for the clotting time measurement was designated as the time when the blood contacted the glass slide. The drop was observed continuously until a clot formed. Time to clot formation was recorded.

Systemic hematocrit.

Hematocrit tubes were double-sealed with Critoseal (Thermo Fisher Scientific) and spun within minutes of blood collection at 13,460 g for 3 min (IEC Micro-MB Microcentrifuge fitted with a microhematocrit rotor). sysHct was measured using a microcapillary reader (IEC, Needham Heights, MA) placed consistently at the same height and angle for each measurement, with the indicator line placed at the interface of the red blood cell (RBC) column and white blood cell (WBC) layer. Intrasample reliability was within 1%.

Red blood cell concentration.

A 5-µL sample of whole blood was diluted 1:200 in filtered (0.45 µm pore size, Millipore, Billerica, MA) frog Ringer solution. Diluted blood (10 µL) was loaded into a Reichert bright-line hemacytometer (Improved Neubauer ruling pattern, Cat. No. 1483, Hausser Scientific, Horsham, PA). Red blood cells (RBCs) were counted (Fisher Laboratory counter) using a Zeiss light microscope (Axiostar Plus, ×10 A-plan objective). RBCs located completely within the ruling lines of the hemacytometer were included in the count. If a portion of a cell was located outside the line, it was not counted. Mean corpuscular volume (MCV) was calculated as MCV = (sysHct/[RBC]) × 10⁷.

White blood cell concentration.

The [RBC] procedure was followed for [WBC], with the only change being a 1:20 dilution of a 12.5-µL whole blood sample. The diluted sample was loaded onto the Neubauer slide, and WBCs were differentiated by size (basophils, small; neutrophils, medium; lymphocytes, large) and counted. To measure the concentration of activated white blood cells ([aWBC]), a solution of nitroblue tetrazolium dye was added to 12.5 µL of whole blood and incubated at 37 °C for 20 min and then at room temperature for 20 min. The dye/whole blood sample was then diluted 1:20 with frog Ringer solution, loaded onto the Neubauer slide, and inspected for [aWBC], defined as those cells that had burst after taking up the dye (19). The same counting decision rules were used as for [RBC].

Plasma [NOx].

Samples of whole blood were placed immediately into EDTA tubes, put on wet ice, and centrifuged (Hettich MIKRO 22 R, Proscientific, Oxford, CT) within 1 h at 3,200 g for 15 min (4-5 °C). Plasma was removed, injected into siliconized vacuum tubes, and stored at −70 °C. Within 2 to 3 mo of sample collection, [NOx] was measured using chemiluminescence (Sievers Nitric Oxide Analyzer, model 280i, Boulder, CO; sensitivity, 1 pmol·mL⁻¹) as reported previously (20). Briefly, each plasma sample (5-200 µL) was heated to 96 °C and injected into a receptacle, which contained a freshly made solution of 0.1 M vanadium III chloride (5 mL, Aldrich Chemical Co., Milwaukee, WI) in 3.0 M HCl (Ricca Chemical Co., Arlington, TX). Vanadium III reduces nitrite to nitric oxide at room temperature (20 °C) and reduces NOx to nitric oxide at 85 °C (21, 22). The area under the curve generated by the nitric oxide analyzer was integrated and recorded on a Dell computer (Pentium 4, GX260 Optiplex). The nitric oxide analyzer was calibrated with KNO₃ standards (range 0.5-5.0 nM). Water was collected from each housing container and [NOx] measured to assure consistent housing conditions.

Protein concentrations [plpr], [skpr], and [abpr].
The plasma portion of the first hematocrit tube sample (see Blood and Lymph Fluid Collection) and the lymph fluid samples were stored at −20 °C for no more than 2 mo. A microassay (Bio-Rad Laboratories, Inc., Hercules, CA) was used to measure protein concentration according to the manufacturer's instructions (23). A standard curve and duplicate samples for each animal were read at 595 nm bandwidth (Turner SP-850 spectrophotometer, Barnstead/Thermolyne, Dubuque, IA).

Measures of Capillary Variables in Mesenteric Tissue

Intravital video microscopy and video imaging.

Capillary identification. Individual mesenteric capillaries were defined as tubes of endothelial cells situated between branches, free of vascular smooth muscle, and devoid of rolling or sticking WBCs. The true capillaries (TCs) used in this study were identified in situ by the direction of blood flow, which diverged upstream and converged downstream at the ends of each capillary tube (24).

Capillary tube hematocrit (tHct). The frog red blood cell (fRBC) count per 500-µm length of the capillary and the capillary radius (d/2 = r, µm) were used to calculate tHct. Capillary diameter (d, µm) was obtained from the average of three sites spaced ≈50 µm apart on the video recording.

Capillary red blood cell flux. To measure RBC flux, a point along the capillary segment was selected on the video recording. The recording was advanced to determine the time (t, s) required for 50 fRBCs to pass the designated point. Flux was calculated as

flux = 50/t (cells·s⁻¹).

Capillary fluid shear rate and flow. For each capillary, the capillary fluid shear rate (γ) was calculated from the fRBC or hRBC velocity (v) and r (µm). The instantaneous velocity (v_i) of RBCs was measured directly on the video monitor at 15-s intervals, averaged, and calculated as described previously (13)(14)(15)(16)(17):

v_i = x/t,

where x was the distance (µm) traveled by an RBC in time (t, s). To obtain the mean RBC velocity, v_i was divided by a correction factor (CF) (11) based on RBC radius (R, µm), assuming that each RBC was centered within the capillary tube.

Capillary blood flow was calculated as

Q = v·πr².

Capillary balance pressure. Immediately following a successful cannulation, the water manometer that was attached to the pipette assembly to maintain pressure and flow was lowered slowly. The pressure at which the flow of hRBCs ceased was recorded as the capillary balance pressure.

Capillary volume flux per surface area. The modified Landis technique (10, 11) was used to measure volume flux per surface area (Jv/S) at two pressures (20 and 30 cmH₂O) during the first occlusion following a uniform, square-wave change in shear stress (Δτ) stimulus (see Experimental Protocol). Successful occlusion of each capillary was determined by visual inspection to ensure valid measures of Jv. Jv/S was calculated from the hRBC velocity (dx/dt_occlusion, cm·s⁻¹), capillary length (x₀, cm), and capillary volume-to-S ratio (r/2, cm, assuming cylindrical geometry of each capillary):

Jv/S = (dx/dt_occlusion) · (1/x₀) · (r/2).

To increase precision, dx/dt_occlusion and x₀ were measured on three hRBCs (spaced ≈50 µm apart) at three time points (2.0, 2.3, and 2.6 s). Three measures of diameter were obtained at three time points to verify that S remained constant (13)(14)(15)(16)(17).

The Starling equation assumes a linear relationship between Jv/S and capillary pressure (P_c).
The slope of the regression equation for Jv/S on P_c is Lp:

Jv/S = Lp [(P_c − P_i) − σ(π_c − π_i)],

where (P_c − P_i) and (π_c − π_i) are the hydrostatic (P) and oncotic (π) pressure differences between the capillary lumen (c) and the interstitium (i). Sigma (σ) is the reflection coefficient of the capillary wall to protein [for assumptions see Williams (15)]. σ(π_c − π_i) is the x-axis intercept of the P_c-Jv/S relationship. Capillary Lp was calculated for each of the nine measures of Jv/S at two pressures (see above) and averaged.

Capillary burst pressure. The first occlusion was maintained after a sufficient recording of the hRBC marker cells for Jv/S measurements. Capillary pressure was then raised uniformly by 1 cmH₂O every 10 s until one or more hRBCs burst through the capillary wall. The capillary was observed closely on the video monitor and through the microscope to ensure accurate assessment. If an hRBC escaped at the occlusion site (an infrequent occurrence), that measurement was excluded.

Experimental Protocol

Frogs were fasted for 4 days before each experiment. Fresh water was available ad libitum to prevent dehydration. Frogs with one or more [aWBC] were excluded from the study.

After blood collection and surgical procedures, one capillary per frog was cannulated at 10 cmH₂O (7.4 mmHg), and the balance pressure was measured (see above). The pressure was raised by 1 cmH₂O to establish low flow during a 2-min equilibration period (steady state 1). At the end of equilibration, the capillary was stimulated mechanically with a square-wave change in shear stress (Δτ) via an abrupt change in pressure to 30 cmH₂O (22.1 mmHg). The higher τ was maintained for 2 min (steady state 2). Jv/S was measured, followed by measurement of burst pressure. At the end of the protocol, the animal was euthanized and a necropsy performed.

τ curves were generated for each capillary from the video recordings to verify the square-wave stimulus and the plateaus at steady states 1 and 2. Δτ for a given capillary was calculated from plateau values as Δτ = τ_steady state 2 − τ_steady state 1. A physiological range of Δτ occurred in accordance with the downstream resistance of the microcirculation, which varied in situ from animal to animal. In general, it was assumed that filtration during occlusion reflected filtration at steady state 2. An important advantage of this technique was that the downstream end of each capillary remained undisturbed, and the Δτ stimulus was applied uniformly to each intact capillary (13)(14)(15)(16)(17).

To minimize variability, measurement error, and bias throughout the study, the PI performed all surgical, video microscopy, and capillary protocols, collected body fluid samples, and measured sysHct and heart rate. The research associate measured weights, clotting time, [RBC], [WBC], [Hb], plasma [NOx], protein concentrations, capillary diameter, tHct, RBC flux, RBC velocity, and Jv/S, plus performed necropsies.

Statistical Analyses

JMP software (SAS Institute, Inc.)
Capillary burst pressure. The first occlusion was maintained after a sufficient recording of the hRBC marker cells for Jv/S measurements. Capillary pressure was then uniformly raised by 1 cmH2O every 10 s until one or more hRBCs burst through the capillary wall. The capillary was observed closely on the video monitor and through the microscope to ensure accurate assessment. If a hRBC escaped at the occlusion site (an infrequent occurrence), that measurement was excluded.

Experimental Protocol

Frogs were fasted for 4 days before each experiment. Fresh water was available ad libitum to prevent dehydration. Frogs with one or more [aWBC] were excluded from the study.

After blood collection and surgical procedures, one capillary per frog was cannulated at 10 cmH2O (7.4 mmHg), and the balance pressure was measured (see above). The pressure was raised by 1 cmH2O to establish low flow during a 2-min equilibration period (steady state 1). At the end of equilibration, the capillary was stimulated mechanically with a square wave change in shear stress (Δτ) via an abrupt change in pressure to 30 cmH2O (22.1 mmHg). The higher τ was maintained for 2 min (steady state 2). Jv/S was measured, followed by measurement of burst pressure. At the end of the protocol, the animal was euthanized and a necropsy performed.

τ curves were generated for each capillary from the video recordings to verify the square wave stimulus and the plateaus at steady states 1 and 2. Δτ for a given capillary was calculated from plateau values as Δτ = τ(steady state 2) − τ(steady state 1). A physiological range for Δτ occurred in accordance with the downstream resistance of the microcirculation, which varied in situ from animal to animal. In general, it was assumed that filtration during occlusion reflected filtration at steady state 2. An important advantage of this technique was that the downstream end of each capillary remained undisturbed and the Δτ stimulus was applied uniformly to each intact capillary (13-17).

To minimize variability, measurement error, and bias throughout the study, the PI performed all surgical, video microscopy, and capillary protocols, collected body fluid samples, and measured sysHct and heart rate. The research associate measured weights, clotting time, [RBC], [WBC], [Hb], plasma [NOx], protein concentrations, capillary diameter, tHct, RBC flux, RBC velocity, and Jv/S, plus performed necropsies.

Statistical Analyses

JMP software (SAS Institute, Inc.) was used for statistical analyses. The dataset was control data pooled from experiments performed over the course of 7 years. n equaled the number of animals. Normality was assessed for each variable using the Shapiro-Wilk W test, and central tendency was reported as means (SD) or medians (25th and 75th percentiles) as appropriate. Values identified using box-and-whisker plots were designated as outliers and excluded. A secondary analysis of Lp outliers and associated variables was reported. Time series data included two or more data points obtained in 2 or more years. Single time course data points were included in the figures but not in the analyses. One-way analysis of variance with parasite or month as the main effect was used to determine differences between means, followed by Tukey's post hoc tests when the main effect was significant. Differences between medians were determined using the van der Waerden test. In the case of unequal variance (Bartlett test), a Welch analysis of variance was employed. Significance was set a priori at P ≤ 0.05.
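A minimal sketch of this test-selection logic follows, using scipy. It is an approximation, not the JMP workflow: scipy lacks the van der Waerden test and a one-way Welch ANOVA, so the Kruskal-Wallis and Alexander-Govern tests stand in for those steps, and the group arrays are hypothetical.

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Normality -> ANOVA with Tukey post hoc; unequal variance ->
    Welch-type ANOVA; non-normal -> rank-based comparison of medians."""
    if all(stats.shapiro(g).pvalue > alpha for g in groups):
        if stats.bartlett(*groups).pvalue <= alpha:
            # Paper uses Welch ANOVA; Alexander-Govern is the scipy analogue.
            return stats.alexandergovern(*groups)
        res = stats.f_oneway(*groups)
        if res.pvalue <= alpha:
            print(stats.tukey_hsd(*groups))   # pairwise post hoc contrasts
        return res
    # Paper uses van der Waerden for medians; Kruskal-Wallis stands in.
    return stats.kruskal(*groups)

# Hypothetical Lp values (1e-7 cm/s/cmH2O) for the three parasite groups.
no_h  = [5.1, 6.0, 7.2, 6.8, 5.9, 7.5]
h_not = [9.8, 12.1, 15.3, 11.0, 14.2, 10.4]
h_att = [3.9, 4.4, 5.0, 4.1, 4.8, 4.6]
print(compare_groups(no_h, h_not, h_att))
```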
RESULTS

Averages and time series data are presented below for the three parasite conditions. Results are organized according to systemic and capillary variables.

Systemic Variables

Body and organ weights. On average, body weight (Fig. 1A) was 2 g lower for H Att compared with No H, and both groups displayed seasonal variation. No H body weights were highest in four winter months and lowest in July/August. H Att displayed low body weights in summer, similar to No H, with highs in late fall and early winter. No H body weight decreased at a slower rate than it increased, 1.2 versus 3.6 g/month, respectively. Similarly, H Att body weight decreased more slowly (0.9 g/month) than it increased (3.6 g/month). Average displaced volume (Fig. 1B) was sensitive enough to detect similar changes in average body weight and seasonal changes for No H and H Att.

After removal of H during necropsy, lung weight (Fig. 2A) averaged 16% higher for H Not Att and 8% higher for H Att compared with No H. No H lung weight varied naturally with the season, highest in April and lowest in November, with a rate of change slower for the decrease than the increase, −3.6 and 5.0 g/month, respectively. Significant seasonal changes in lung weight were not detected for H Not Att or H Att.

Average spleen weight (Fig. 2B) was lower for H Not Att compared with H Att, and neither group differed from No H. Distinct oscillations in spleen weight were observed across the year for No H and were not present for H Not Att or H Att.

In contrast, No H liver weight (Fig. 2C) showed a dramatic (257%) seasonal decline (−367.5 mg/month) from its highest average in January (1501.5 SD 139.2 mg) to March, then remained low throughout spring and summer. Liver weight reached its lowest monthly average in early fall (584.0 SD 240.3 mg), then increased into winter at a rate of 588.8 mg/month. Both H Not Att and H Att showed the same downward trend in liver weight from winter into summer as No H. Average liver weights did not differ between groups.

Leukocytes. Among the three WBC subtypes, parasite group was a significant factor for average basophil counts (Fig. 3A); however, differences between groups could not be determined. Similar to eosinophils, average total [WBC] (Fig. 3D) were higher for H Att compared with No H. Similar to lymphocytes, No H was the only group that demonstrated seasonal variation in total [WBC], which peaked twice during the year, once in April/May and once in September. Three low months occurred through the winter in January, March, and November, and a fourth low occurred in June [rates of change = −459.3 and 302.7 (cells·cm⁻³)/month].

Plasma [NOx]. Averages for plasma [NOx] (Fig. 4) differed among all three groups, highest for H Not Att and lowest for H Att.

Blood clotting. Figure 6 presents data for time to blood clot formation. No H clotting time averaged 5 s longer than H Att and demonstrated distinct oscillations across the year, with higher month-to-month variability in spring months that diminished as the year progressed. Average clotting time for H Not Att did not differ from the other two groups; however, a significant seasonal decline of 17 s from February to December was notable (rates of change = −1.9 and 5.7 s/month).

Erythrocytes. Average sysHct for No H was higher than H Not Att and H Att. No H sysHct demonstrated seasonal changes that were highest in late winter and spring and lowest in summer into fall (Fig. 7A; rates of change = −4.5 and 1.8%/month).

Average [RBC] (Fig. 7B) did not differ with parasite condition; however, seasonal changes were observed for No H [RBC], with peaks in May and December and troughs in July and September [rates of change = −36.4 and 31.9 1,000 (cells·cm⁻³)/month], a pattern that was similar to sysHct. H Att also showed seasonal changes in [RBC] [rates = −27.2 and 30.8 1,000 (cells·cm⁻³)/month], with both peaks shifted earlier in the year compared with No H. H Att [RBC] declined steadily from a high of 367.3 × 1,000 cells·cm⁻³ in February to 231.1 × 1,000 cells·cm⁻³ in July, a low that was ≈50 × 1,000 cells·cm⁻³ below that of No H in the same month (280.8 × 1,000 cells·cm⁻³).

Protein concentrations in three compartments. Average differences between plasma and abdominal cavity fluid protein concentrations ([plpr] − [abpr], Fig. 8D) and between plasma and skin protein concentrations ([plpr] − [skpr], Fig. 8E) were stable across the year, suggesting that colloid osmotic pressure differences at the macro level were stable between the cardiovascular/abdominal cavity and cardiovascular/skin compartments.

Abdominal cavity fluid. Average volume of fluid collected from the abdominal cavity (Fig. 9) was greater for H Not Att and H Att compared with No H and varied widely across the year for the two parasite conditions. The stable time series data for [plpr] − [abpr] (Fig. 8D) were consistent with macro-level protein concentration differences having minimal impact on the changes in volume of abdominal cavity fluid.

Capillary Variables

Water permeability, hydraulic conductivity. For individual capillaries, median Jv/S (Fig. 10A) and median Lp (Fig. 10B) differed among all three groups, highest for H Not Att and lowest for H Att. For No H, Jv/S and Lp were lower than H Not Att and higher than H Att. Across the year, Jv/S and Lp showed no significant seasonal changes for No H or H Not Att. H Not Att, in particular, was much more variable from month to month than the other two groups. The error bars show a shift from skewed to normal distributions in the Jv/S and Lp data during June, July, and August for H Not Att, a shift that was not present for No H. H Att was the only group with significant seasonal variation in Lp, which was highest in January, lowest in November [rates = −0.3 and 1.6 (cm·s⁻¹·cmH2O⁻¹)/month], and displayed normal distributions in April, May, August, and September. The Δτ mechanical stimulus for each capillary averaged 29.0 (SD 10.5) dynes·cm⁻² and did not differ for parasite conditions or season. Figure 11 illustrates the relationship between the number of H and Lp. The data displayed a dose/response curve at the limit with a negative slope that extended from a high value of 38.3 to a low value of 3.9 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹. H Not Att accounted for the highest values of Lp shown under the curve and approached the lower, H Att values of Lp as the number of H increased.

Secondary analysis of Lp outliers. Median Lp was elevated for the Lp outliers in the No H group. Of the 38 variables tested, five systemic and four capillary variables distinguished the animals with Lp outliers from control. Basophils, eosinophils, lymphocytes, total [WBC], clotting time, capillary σ(πc − πi), and capillary burst pressure were lower, and balance pressure minus σ(πc − πi) was significantly higher, for the Lp outliers versus control (Table 1). Median Lp was likewise elevated for the Lp outliers in the H Not Att group. In contrast to No H, only capillary density and [plpr] distinguished control from Lp outliers for H Not Att (Table 1).

Balance, σ(πc − πi), and burst pressures. The three pressure variables for individual capillaries are presented in Fig. 12. First, balance pressure (Fig. 12A) is the pressure at which flow through the capillary tube from proximal to distal end is zero. Balance pressure did not differ between groups. Time across the year, however, did influence No H balance pressure, which oscillated around an average of 10.0 cmH2O. In contrast, the absence of regular oscillations was the most notable feature of the balance pressure time series data for H Not Att and H Att.

Figure 12B shows the second capillary pressure, σ(πc − πi), which is the x-axis intercept of the pressure/Jv/S plots used to assess Lp and the pressure at which filtration flow through the capillary wall is zero. Average σ(πc − πi) was lowest for H Not Att compared with both No H and H Att.
A distinct seasonal pattern was revealed for No H, where σ(πc − πi) was lowest in March, increased steadily to a peak in December at a rate of 0.5 cmH2O/month, and declined at a faster rate (−1.5 cmH2O/month) back to March. For the two parasite groups, the annual time course of σ(πc − πi) was more variable, and no seasonal changes were detected.

Average differences between balance pressure and σ(πc − πi) are shown in Fig. 12C. Overall, H Not Att was less than H Att. H Not Att displayed seasonal variation, with the average difference between the two pressures rising above zero in May, June, and September, an increase that would favor flow through the tissue during those months if precapillary resistance remained constant. In contrast, the average pressure difference for No H and H Att remained negative throughout the year, which would favor filtration into tissues if resistance upstream was constant.

Among the 38 variables in this study, the third, capillary burst pressure (wall tensile strength, Fig. 12D), was the only study variable where the average for H Att was higher than both No H and H Not Att. Time series data showed consistent wall strength across the year for No H and H Not Att. In contrast, capillaries in H Att displayed burst pressures that varied month to month and by as much as 20 cmH2O between summer and winter.

Rheology. The comprehensive set of capillary measures in this study included two structure variables. First, density of the mesenteric capillary network did not differ among parasite conditions or seasons (Fig. 13A). Second, capillary diameter also did not differ among parasite conditions and remained steady throughout the year (Fig. 13B). Likewise, none of the indices of capillary oxygen delivery were impacted by parasite or season. RBC flux (Fig. 13C) and RBC flow (Fig. 13D) were steady. RBC v (Fig. 13E) tended to rise in summer, although not significantly, and γ (Fig. 13F) reflected v rather than diameter. tHct (Fig. 13G) oscillated but did not differ statistically between groups or across the year. sysHct minus tHct (Fig. 13H), while not different between groups, did trend lower in summer, reflecting the seasonal changes in sysHct (Fig. 7A).
DISCUSSION

The present prospective, descriptive study had two primary objectives: 1) to determine the impact of Hematoloechus sp. (H) on Rana pipiens and 2) to establish baselines across the year for a broad range of systemic and individual capillary variables. The first of the two study objectives revealed that 47% of the 38 descriptive, cardiovascular, and immunological variables showed one or more significant changes in Rana pipiens with H. In time, we recognized that H attachment status was an important effector and, ultimately, three groups were identified: no lung Hematoloechus (No H), Hematoloechus not attached in the lungs (H Not Att), and Hematoloechus attached in the lungs (H Att).

Overall Impact of Hematoloechus (Parasite) on Rana pipiens (Host)

Table 2 provides a succinct snapshot summary of the broad in vivo effects that H had on Rana pipiens. Ten combinations of differences between average values were discovered among the variables across the three parasite conditions. Of note among those listed is a cluster of eight pro-inflammatory indicators in H Not Att: 1) elevated lung weight, 2) elevated abdominal cavity fluid volume, and 3) decreased heart rate, all of which were secondary to 4) higher capillary Jv/S, 5) higher capillary Lp, and 6) lower capillary σ(πc − πi), and, in turn, secondary to 7) elevated lymphocytes and 8) elevated plasma [NOx]. Of interest, lymphocytes were the only leukocyte subtype that increased and remained elevated in both parasite groups. Basophils had a statistically significant parasite effect; however, differences between groups could not be distinguished, and eosinophils were elevated only in the H Att group. Thus, it appears that the host response in H Not Att was specific enough to blunt or inhibit parasite-induced leukocyte responders (basophils and eosinophils) but unable to prevent lymphocytes and/or plasma [NOx] from triggering the array of inflammation indicators listed above.

Blood clotting is also associated with inflammation. Here, eosinophils, total [WBC], and coagulation rate differed in H Att: the average coagulation rate was fastest, and eosinophils and total [WBC] were elevated compared with No H. These observations, plus the fact that pro-inflammatory indicators (lung weight, abdominal cavity fluid volume, and lymphocytes) were elevated in H Not Att but did not resolve with H attachment, suggest that the presence of unattached H induced symptoms of inflammation that were sustained upon H attachment and that eosinophils, total [WBC], and coagulation responded subsequently to H attachment. A very important distinction between the two parasite groups was that inflammation in H Att reversed the increases in plasma [NOx] and capillary Lp that were observed in H Not Att. H Att had the lowest average plasma [NOx], which not only fell below H Not Att but also was significantly lower than No H. Although the mechanism/s for the dramatic changes in [NOx] are not known, it appears that the ubiquitous nitric oxide molecule was involved with both pro-inflammatory and anti-inflammatory processes, all associated with the H parasite and its attachment status. Sources of plasma [NOx] include endothelial, immune, and nervous system cells. An important aim for future investigations would be to discover how H increases and decreases nitric oxide in Rana pipiens and whether these outcomes are achieved dose dependently.
Regarding Lp, H Att capillaries were "tighter" (lower Lp) than H Not Att and, surprisingly, also tighter than No H, which would be considered an anti-inflammatory phenotype profile for capillaries. It appears that H attachment resulted in parasite-host biochemical communication that not only bypassed or counteracted inflammatory processes at the capillary level of the cardiovascular system but actually lowered median Lp well below that for No H (4.2 vs. 7.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹; see below). The number-of-H/Lp dose/response curve (Fig. 11) further supported a correlation between H and capillary Lp. Higher capillary burst pressure, an index of wall tensile strength, also occurred with H attachment and did not change in parallel with changes in capillary Lp. Capillary wall strength for No H and H Not Att remained stable in the face of dynamic changes in Lp, reflecting two separate sets of molecular machinery acting in parallel: one that maintains wall strength and is not sensitive to H, and a second that induces a wide range of changes in Lp and is sensitive to H. Mechanisms for these results are unknown at this time. Investigative attention to this important distinction is warranted.

Overall, organ and body weights provided useful indicators of the additive effects of H on the host. Fluid accumulated in the lungs, abdominal cavity, and cardiovascular space (lower hematocrit) of H Not Att, and the excess fluid (organ weight) did not resolve with H attachment. Higher levels of organ fluid for both parasite groups in the face of opposing barrier functions suggested two different mechanisms leading to increased organ fluid/weight based on H attachment status: 1) excess fluid filtered into tissues and compartments via leaky capillaries (H Not Att) and 2) fluid trapped in tissues and compartments via tight capillaries (H Att). Of the two mechanisms that accounted for similar fluid accumulation in organs, cavities, and spaces, only the "tight" capillaries in H Att would trap fluid and also restrict nutrients. In fact, average body weight was lower only for H Att, a consequence that likely was due to diminished nutrition secondary to low permeability.

Seasonal Variation

The catalogue of baseline information on Rana pipiens that was created from measurements performed across the calendar year for all 38 variables is discussed below, organized according to the three parasite conditions. In many cases, waveforms were identified in the time series data, and these are summarized in Table 3. The discussion below highlights comparisons between variables based on similar waveforms and seasonal changes. The data provide valuable insight into the presence of natural internal clocks for the host and parasite.

Rana pipiens without lung Hematoloechus (No H). One complete oscillation cycle across the year was observed most frequently in the No H dataset. The best examples were the time series traces for sysHct (Fig. 7A, No H) and [RBC] (Fig. 7B, No H). Both curves were similar, with peaks in winter/spring and troughs in summer, suggesting that changes in RBC, not plasma volume, were causal of sysHct changes. Capillary γ (Fig. 13F) trended toward a similar pattern 1 month ahead of the drops in sysHct and [RBC], a response that was consistent with lower resistance to flow in the microcirculation occurring ahead of detectable system-level changes in sysHct and [RBC].
Double-peaked oscillations were identified for spleen weight (Fig. 2B, No H), total [WBC] (Fig. 3D, No H), and plasma [NOx] (Fig. 4, No H). The spleen is the main erythropoietic organ in Rana pipiens (25). Nitric oxide has been shown to impact fluid extravasation from the splenic circulation in rats (26) and heme maturation of hemoglobin at low doses in mice (27). Here, the frequencies for both spleen weight and plasma [NOx] were similar across the year, in opposition from January to September and synchronized through December, suggesting a negative correlation followed by a positive correlation between spleen weight and plasma [NOx].

Three oscillations across the year were displayed by clotting time (Fig. 6, No H), capillary tHct (Fig. 13G, No H), and capillary Lp (Fig. 10B, No H), suggesting possible interconnections. The curves for Lp synchronized with tHct starting in March and with clotting time starting in May. Of note was the observation that the Lp and tHct cycles correlated positively, and the relationship strengthened coincident with a unique feature of the Lp trace that showed oscillations increasing in amplitude as the year progressed. Positive coupling of capillary tHct with capillary Lp in situ represents a unique outcome from this study that has not been documented in the past.

A striking example of a sine wave with five oscillation cycles was the trace for capillary balance pressure (Fig. 12A, No H). The steady nature of the waveform across the entire year suggested internal controls with remarkable timing and precision. Although no statistical differences between months were observed, the time trace pattern was noteworthy for its rhythmic activity, which most likely reflected the well-known "hunting" mechanism displayed by vascular smooth muscle, in this case on the venous, downstream side of the capillary bed.

Liver weight (Fig. 2C, No H) approached a square wave curve. The trace was distinguished by an abrupt drop from winter to summer, a sustained plateau in summer, and then a steep rise from fall into winter. We are not aware of any reports of a seasonal, >200% drop and similar recovery in liver weight in Rana pipiens. This new information is fundamental to understanding annual processes of digestion, metabolism, and protein synthesis in this model.

A single sawtooth waveform was demonstrated by lymphocytes (Fig. 3C, No H) and capillary σ(πc − πi) (Fig. 12B, No H). The trough time points in March were similar between the two curves; however, each had a different rate of rise to peak. Lymphocytes displayed the steeper slope compared with the slower rise of capillary σ(πc − πi) with time. Speculation of a connection between these two variables is tempting. The initial increase in lymphocytes could signal an increase in σ(πc − πi) via changes in Jv, S, and/or capillary pressure, which would increase resistance to water movement into tissues and, thus, preserve fluid in the vascular space during winter. In spring, lower lymphocytes could signal lower σ(πc − πi), which would reduce the resistance threshold required to move fluid out of the vascular space and, thus, restore tissue hydration. Experiments to test these ideas are required.
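The waveform classes described above (single, double, and multi-cycle annual oscillations) can be screened quantitatively; the following sketch fits sinusoids of candidate annual frequencies to 12 monthly means by linear least squares. This is an illustrative approach, not the authors' method, and the data array is a made-up placeholder.

```python
import numpy as np

def seasonal_fit(monthly_means, k):
    """Fit a*sin(2*pi*k*t) + b*cos(2*pi*k*t) + c to 12 monthly means and
    return the fitted amplitude and residual sum of squares."""
    t = np.arange(12) / 12.0                      # month midpoints, years
    X = np.column_stack([np.sin(2 * np.pi * k * t),
                         np.cos(2 * np.pi * k * t),
                         np.ones(12)])
    coef, rss, *_ = np.linalg.lstsq(X, monthly_means, rcond=None)
    amplitude = np.hypot(coef[0], coef[1])
    return amplitude, (rss[0] if rss.size else 0.0)

# Hypothetical monthly means (e.g., sysHct, %); compare candidate cycle
# counts per year against the residual left unexplained.
y = np.array([36, 35, 37, 33, 31, 30, 29, 30, 31, 33, 34, 36], float)
for k in (1, 2, 3, 5):
    amp, rss = seasonal_fit(y, k)
    print(f"k={k}: amplitude={amp:.2f}, residual SS={rss:.2f}")
```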
Flat-line time series data for No H confirmed stability in the animal model. Steady heart rate (Fig. 5, No H) across the year was an index of the successful, consistent, low-stress housing environment that was provided for Rana pipiens. Capillary burst pressure (Fig. 12D, No H) also displayed little to no variation, indicating that capillary wall strength was maintained while dynamic changes in capillary Lp and σ(πc − πi) occurred. Heart rate and burst pressure, plus the additional flat-line curves listed in Table 3, document important reference checkpoints that are useful for assuring controlled studies of systemic and capillary physiology in Rana pipiens.

Rana pipiens with lung Hematoloechus not attached (H Not Att). None of the seasonal patterns discovered for No H were replicated in H Not Att. Instead, two variables associated with inflammation and four inflammation-responding variables were identified. First, plasma [NOx] (Fig. 4, H Not Att), which is inflammatory at high levels, approximated a double-peaked trace with a prominent elevation in spring (March, April, May). Second, eosinophils (Fig. 3B, H Not Att), known to be parasite responsive, approximated a sawtooth waveform that peaked from July into September. These results indicated that plasma [NOx] and eosinophils, together, effectively sustained inflammation from March through September.

The first of the inflammation-responding variables was clotting time (Fig. 6, H Not Att), which displayed a sawtooth waveform that was inverse and shifted right relative to the time trace for eosinophils (eosinophil trough in January vs. clotting time peak in February; eosinophil peak in September vs. clotting time trough in November), suggesting possible stimulus-responses between these two variables. The second, third, and fourth inflammation-responding variables were capillary Jv/S (Fig. 10A, H Not Att), capillary Lp (Fig. 10B, H Not Att), and capillary balance pressure minus σ(πc − πi) (Fig. 12C, H Not Att). Each displayed accentuated oscillations between March and September, indicating vigorous responses of capillaries during the same timeframe as the increases in plasma [NOx] and eosinophils. The accentuated intramonth variability of Jv/S, Lp, and balance pressure minus σ(πc − πi) compared with No H and H Att was additional compelling evidence of disrupted/chaotic barrier control due to inflammation in H Not Att.

Rana pipiens with lung Hematoloechus attached (H Att). Heart rate, plasma [NOx], and Lp were the three variables with significant month-to-month differences in the annual time series data for H Att. From the heart rate trace (Fig. 5, H Att), it was clear that H Att were stressed at different time points during the year, a result that differed from the flat heart rate traces for No H and H Not Att. All animals in the study lived in the same controlled environment and received the same care; as such, an external source of stress was not apparent. Investigators were blind to the presence or absence of H until necropsy. We conclude that the presence of attached H was likely an internal source of stress that was sufficient to influence heart rate. Seasonal changes in heart rate for H Att may be one of the best indicators of H activity/potency within Rana pipiens.
Plasma [NOx] (Fig. 4, H Att) also had a significant annual time series in H Att that was not present in No H and, when compared with H Not Att, was most instructive. The first prominent peak in spring that was the signature for plasma [NOx] in H Not Att was noticeably absent in H Att, and the second peak in August was muted. These traces suggested that H, when attached, introduced a nitric oxide-targeted inhibitor into the host. The high levels of plasma [NOx] in H Not Att during March, April, and May, plus the absence of that peak during the same timeframe in H Att, indicated the greatest pro-inflammatory (H Not Att) and anti-inflammatory (H Att) impact of H, an observation that could be useful for designing future experiments focused on nitric oxide, H, and Rana pipiens.

Surprisingly, H Att time series data revealed a seasonal change in capillary Lp (Fig. 10B, H Att). Once recognized, it was obvious that H attachment silenced an internal clock that was generating the Lp oscillations in No H as well as the accentuated excursions in Lp observed in H Not Att. The end result was a parasite-induced flat baseline for Lp with one internal clock still ticking: a seasonal change in Lp that was highest in January and lowest in November and that had gone undiscovered previously (28).

Impact of Rana pipiens (Host) on Hematoloechus (Parasite)

Data also revealed an impact of host on parasite. Higher spleen weight in H Att suggested increased erythrocyte production; however, average [RBC] did not reflect an increase. This observation implied that, on average, increased RBC production kept pace with consumption in H Att, resulting in [RBC] that remained at or near normal levels. However, [RBC] time trace data for all three parasite conditions (Fig. 7B) provided additional details and insight. The H Att [RBC] trace showed a deep trough in July that was absent in the other two groups, indicating a point in the year when H Att were consuming host RBC vigorously and outpacing RBC production. Likewise, a deep trough in H Att skin lymph protein (Fig. 8B), a variable that also did not differ on average between groups, suggested active protein consumption in May. Together these data highlight specific feeding periods for H, when attached, and narrow the target for future studies focused on understanding feeding cycles of H. The data also suggest protein, in addition to RBC, as a potential source of nourishment for H.

Secondary Analysis: Capillary Lp Outliers

During this 7-year study, we accumulated a dataset that was large enough to allow separate analyses of variables from Rana pipiens that were associated with outlier values of Lp. The added advantage was knowledge of H status, which allowed the data to be grouped accordingly. Three distinct combinations, or clusters, of variables associated with the Lp outliers in each of the three parasite groups are listed in Table 1. In No H, a cluster of inflammatory variables was identified that reflected acute inflammation along with leakier and weaker capillaries. In contrast, none of the variables for No H were significantly different for H Not Att. Instead, decreased capillary density/capillary rarefaction was identified, indicating adaptation to chronic inflammation. The third scenario was in H Att, where lower spleen weight accompanied leaky capillaries.
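The box-and-whisker outlier designation referenced in the statistical methods is commonly implemented with Tukey fences (values beyond 1.5 interquartile ranges from the quartiles); a minimal sketch follows. The 1.5 multiplier is the conventional default, not a parameter stated in the text, and the Lp values are hypothetical.

```python
import numpy as np

def split_outliers(values, k=1.5):
    """Flag values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    values = np.asarray(values, float)
    q1, q3 = np.percentile(values, [25, 75])
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    mask = (values < lo) | (values > hi)
    return values[~mask], values[mask]

# Illustrative Lp values (units of 1e-7 cm/s/cmH2O).
lp = [0.5, 1.2, 2.8, 3.0, 4.1, 5.5, 7.0, 8.2, 9.9, 38.3, 43.9]
control, outliers = split_outliers(lp)
print(control, outliers)   # the extreme Lp values form the outlier group
```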
The three unique outcomes for variables associated with Lp outliers support the assertion made by some past investigators that capillaries with high values of Lp are "inflamed". However, it is important to note that the discovery of different variables for each group appears to reflect adaptation from acute to chronic inflammation related to specific stimuli associated with H status. We conclude that a high value of capillary Lp does not indicate a single etiology and is not necessarily a result of methodological anomalies. It is also worth noting that the third group (H Att) had the lowest n and the lowest median Lp of the three outlier groups, further emphasizing the rarity of the data reported here and consistent with capillaries being less reactive to stimuli when H are attached. Finally, average total [WBC] were lower, not higher, for Lp outliers in No H, suggesting a minimum [WBC] that protects and maintains the integrity of the capillary barrier in Rana pipiens.

Perspectives and Significance

For almost 100 yr, Rana pipiens has been valued for studies of capillary physiology (10, 29); yet the broad set of variables presented here had not been characterized. Detailed measurements of variables at the system and individual capillary levels, presented in the context of known H status, are the first of their kind for Rana pipiens. We have discovered that H challenge Rana pipiens and, as a direct consequence, challenge the specialty of capillary physiology. Results intersect with the disciplines of medical and veterinary practice and helminthology; capillary physiology and pharmacology; and endothelial cell biology, biomedical engineering, and mathematics. Examples are discussed below.

Medical and veterinary practices and helminthology. The H-induced pro- and anti-inflammatory responses reported here focus the significance of this study on potential treatments for chronic inflammatory diseases such as cancer, asthma, obesity, type II diabetes, Alzheimer's disease, rheumatoid arthritis, and heart disease, as well as parasitic diseases in animals and humans. In addition, increased capillary wall tensile strength with H Att indicates a mechanism that, if known and titrated properly, could be beneficial for preventing stroke/cerebral hemorrhage. Lp outlier data also demonstrated clearly that capillaries are leakier and weaker in the face of low [WBC], thus emphasizing a protective role of WBC, the mechanism/s of which would benefit patients with stroke and those with compromised immune systems.
The annual time series data provide more specific details of naturally occurring versus H-induced activity. Time lines of annual feeding cycles, prominent periods of inflammation, and high stress points due to H will allow for more fine-tuned investigations of H in the future. The data open possibilities for answering questions regarding how H regulate their feeding, how H stimulate pro-inflammatory and anti-inflammatory responses, how H stress the heart only at certain times of the year, and how H strengthen capillary walls. The time series data also provide insight into the extended lengths of time necessary to induce effects in some variables. Excellent examples were capillary σ(πc − πi), which adjusted naturally over a period of 9 mo, and Lp, which required 11 mo to demonstrate adaptive change to H attachment. Each of these clues expands our collective knowledge about systems physiology and must be accounted for when translating this research into creative new ways to improve health and minimize the impact of disease.

Schistosoma mansoni is a water-borne trematode (blood fluke) with snail as its intermediate host and human as its definitive host. S. mansoni is the most prevalent parasite in humans, causing the tropical disease schistosomiasis. The fluke penetrates through the skin, lives in blood vessels, and comes into contact with endothelial cells during its migration through the body. In 1997, Coulson and Wilson (30) demonstrated increased inflammation in the presence of S. mansoni in the lung. In 1999, Trottein (31) studied the effect of S. mansoni on permeability of cultured endothelial cells originating from lung and brain. The results indicated that S. mansoni secreted/excreted low molecular weight molecules that decreased monolayer permeability to inulin via cAMP/protein kinase A and phosphorylation of myosin light chain kinase, producing an anti-inflammatory endothelium phenotype. These pro- and anti-inflammatory results from S. mansoni are consistent with our discoveries here with H. In Rana pipiens, we report: 1) disappearance of oscillations in spleen weight, balance pressure, tHct, and Lp in H Not Att and H Att; 2) disappearance of seasonal changes in σ(πc − πi) in H Not Att and H Att; 3) highest Lp in H Not Att; 4) highest [NOx] in H Not Att; 5) lowest Lp in H Att; 6) lowest [NOx] in H Att; and 7) increased capillary wall tensile strength in H Att compared with No H. These results with the lung fluke (H) support the conclusion that H shift the in vivo endothelium from a pro- to an anti-inflammation profile in Rana pipiens and are consistent with reports from blood flukes (S. mansoni). S. mansoni is well studied; yet how host cells respond to the various proteins, lipids, glycans, and nucleic acids released by the blood fluke remains unknown (32). The results presented here introduce a new opportunity to investigate two different flukes that infest two different definitive hosts and produce similar outcomes. A comparative study may offer untapped avenues of discovery and insight into the mechanisms of parasite-host interactions.

Capillary physiology and pharmacology. Our awareness expanded when we recognized that H and its attachment status were key to a more accurate understanding of capillary Lp. Realizing that H status was introducing "noise" into the overall Lp dataset was essential to acknowledging the fact that H was compromising our ability to glean important information about capillary barrier function in situ. By broadening our study, we discovered that H impacts, at a minimum, the immune and cardiovascular systems of Rana pipiens. We conclude that wild-caught Rana pipiens is a mixed model. We recommend that Rana pipiens be devoid of H in the future when used for scientific purposes and, in particular, for studies of capillary physiology. Adopting this recommendation, using the reference data presented here for No H, reporting H status (known or unknown) when referencing previous studies, and reporting H status in future studies will facilitate a new and higher "gold" standard for studies of single capillaries in situ and improve Rana pipiens as a long-standing, classic model.
Hydrostatic pressure, oncotic pressure, capillary permeability, and lymphatic clearance are fundamental to the maintenance of tissue hydration and nutrition and to the prevention of edema formation. As such, the Starling forces (33) are integral to fluid balance and survival. By definition, balance pressure and σ(πc − πi) are obtained when flow through [balance pressure] and out of [σ(πc − πi)] the capillary are zero. Calculating the difference between balance pressure and σ(πc − πi) (Fig. 12C) from direct measurements provided new insight into the dynamic interplay between capillary pressure and barrier resistance to fluid filtration in situ (assuming constant precapillary resistance). The difference between the two revealed a more accurate prediction of whether fluid movement favored perfusion (delta pressure above zero) or filtration (delta pressure below zero). One practical example of how to apply this analysis is in the design of pharmacological agents. If a drug agent is engineered to manipulate only hydrostatic pressure without considering the impact on capillary barrier function, it is likely that side effects related to hydration, nutrition, and perhaps thermoregulation will ensue. This fundamental issue, when addressed, will minimize or negate some common symptoms associated with drug agents, including thirst, weight loss or gain, and hyperthermia.
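As a worked illustration of the sign rule above, the following sketch classifies fluid movement from the measured balance pressure and σ(πc − πi); the function name and the numbers are hypothetical.

```python
# Sign rule from the pressure-difference analysis above: delta above zero
# favors perfusion (flow through the tissue); delta below zero favors
# filtration into tissue (assuming constant precapillary resistance).

def classify_fluid_movement(balance_cmH2O: float, sigma_dpi_cmH2O: float) -> str:
    delta = balance_cmH2O - sigma_dpi_cmH2O
    return "perfusion" if delta > 0 else "filtration"

# Example: balance pressure 10.0 cmH2O, sigma*(pi_c - pi_i) 11.3 cmH2O.
print(classify_fluid_movement(10.0, 11.3))   # filtration (delta = -1.3)
```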
Endothelial cell biology, biomedical engineering, and mathematics. Originally, Sill et al. (9) compared their data to values of Lp assessed in situ in frog mesenteric capillaries with median = 3.0 × 10⁻⁷ and range = 0.87 to 25.74 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ for n = 20 capillaries (38). In the present study, control Lp ranges were the following: 0.5 to 22.2 × 10⁻⁷ (No H); 0.7 to 43.9 × 10⁻⁷ (H Not Att); and 0.7 to 9.3 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ (H Att), with low values that were not attained in BAEC and high values well above BAEC. In addition, Sill et al. (9) and Hillsley and Tarbell (36) excluded cultures with Lp > 5.0 × 10⁻⁷ and > 6.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹, respectively, from their reports. No criteria were provided. The in situ frog mesenteric capillary data raise the question of how target baselines are selected for experiments on BAEC. Expanded rationale should be required in the future.

Mathematical models are used to test the theoretical impact of intercellular cleft dimensions, matrices, and meshes on Lp and solute permeability. Similar to the BAEC experiments, models have been grounded in in situ assessments of Lp in frog mesenteric capillaries. One group (8) tested their model using an average Lp value of 5.9 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ from Clough and Michel (39), who also measured cleft dimensions on the same capillaries. Two years later, the same group (40) lowered the testing value for Lp to 2.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ (41), a value three times smaller than in their previous work. The reasoning for selecting the lower value for Lp was not provided.

Although using one number for Lp does not represent the dynamic nature of the capillary barrier, it is interesting to note that 5.9 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ (39) is closer to the median for No H (7.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹), whereas 2.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ (41) is more than three times lower than No H and more than two times lower than H Att (4.2 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹). We performed a comprehensive review of past capillary physiology studies based on whether results were obtained with or without the presence and effects of Hematoloechus and found that none reported H status. We conclude that, in the past, low values of Lp likely were obtained from a mixture of Lp assessments performed in No H, H Not Att, and H Att, plus Lp values > 10.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ were excluded. The net result produced a left-skewed distribution of Lp and a false, low target value of 2.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ for baseline Lp.

The present data provide insight into the implications of using a very low value of Lp for BAEC experiments and for testing mathematical models. Specifically, low Lp in H Att was associated with body weight loss in vivo (Fig. 1A). Although using Lp and cleft dimensions from the same capillaries is understandable from a modeling perspective, Lp and physiological parameters also must be considered if results are to be applied and translated into practice. This study provides corrected values for capillary Lp obtained from uninfected, uninfested Rana pipiens with known physiological parameters. We now recognize that a median Lp for No H of 7.0 × 10⁻⁷ cm·s⁻¹·cmH2O⁻¹ is more realistic for baseline capillary Lp, as it reflects maintenance of body weight and a "healthy" endothelial phenotype. The corrected value is recommended for baseline Lp in BAEC and for tests of mathematical models in the future.

Because of our increasing awareness of the impact of H as the study progressed, some variables were added later than others. As such, datasets for clotting time, spleen weight, and liver weight were incomplete for H Not Att and H Att. However, because of the uniqueness of the data in Rana pipiens and for H, we have opted to include the data and allow readers to decide their value.

Rana pipiens used in this study were wild-caught and were infested with H naturally, not experimentally. The length of time that H were present in each frog was not known. H were identified by visual inspection, not genetically. Obvious patterns within the data for each variable were not apparent, suggesting that genetically identified subspecies of H and the different samples of Rana pipiens over 7 years did not impact the systemic and capillary variables studied here.

Figure 1. Averages and annual time series for body weight (A) and displaced volume (B) measured in Rana pipiens with three parasite conditions: no Hematoloechus in the lungs (No H), Hematoloechus not attached to the inner lung wall (H Not Att), Hematoloechus attached to the inner lung wall (H Att). Data are presented as means ± SD (n). A: averages: *No H > H Att, P = 0.02. Time series: No H, P = 0.0008; H Not Att, P = 0.01; H Att, P = 0.001. B: averages: *No H > H Att, P = 0.03. Time series: No H, P < 0.0001; H Not Att, P = 0.16; H Att, P = 0.02. Open triangles, different from closed triangles. Months in order: J, January; F, February; M, March; A, April; M, May; J, June; J, July; A, August; S, September; O, October; N, November; D, December.

Figure 5.
Averages and annual time series for heart rate measured in Rana pipiens with three parasite conditions: no Hematoloechus in the lungs (No H), Hematoloechus not attached to the inner lung wall (H Not Att), Hematoloechus attached to the inner lung wall (H Att). Data are presented as means ± SD (n). Averages: #No H > H Not Att, P = 0.02. Time series: No H, P = 0.55; H Not Att, P = 0.34; H Att, P = 0.0004. Open triangles, different from closed triangles. Months in order: J, January; F, February; M, March; A, April; M, May; J, June; J, July; A, August; S, September; O, October; N, November; D, December.

Figure 11. Individual values of capillary hydraulic conductivity (Lp) plotted as a function of the number of Hematoloechus measured in Rana pipiens with three parasite conditions: no Hematoloechus in the lungs (No H), Hematoloechus not attached to the inner lung wall (H Not Att), Hematoloechus attached to the inner lung wall (H Att). The bold, solid line is the dose/response curve formed by the data limit (R² = 0.98).
Table 1. Systemic and capillary variables measured in Rana pipiens for three parasite conditions. Data are means (SD) (n). Animals were grouped as control or outliers based on statistical analyses of capillary Lp data. No H, no Hematoloechus in the lungs; H Not Att, Hematoloechus not attached to the inner lung wall; H Att, Hematoloechus attached to the inner lung wall; Lp, capillary hydraulic conductivity; [plpr], plasma protein concentration; σ(πc − πi), capillary sigma delta pi; [WBC], white blood cell concentration.

Table 2. Summary of variables with significant differences between one or more groups of lung parasite conditions in Rana pipiens.

H Att < No H < H Not Att: [NOx], Jv/S, Lp, balance pressure − σ(πc − πi) (Figs. 4, 10A, 10B, 12C)
H Not Att < H Att: spleen weight (Fig. 2B)
No H > H Not Att and H Att: sysHct (Fig. 7A)
No H < H Not Att and H Att: lung weight, lymphocytes, abdominal cavity fluid volume (Figs. 2A, 3C, 9)
No H and H Att > H Not Att: σ(πc − πi) (Fig. 12B)
No H and H Not Att < H Att: burst pressure (Fig. 12D)

Data are presented in the figures listed. No H, no Hematoloechus in the lungs; H Not Att, Hematoloechus not attached; H Att, Hematoloechus attached to the inner surface of the lungs; Jv/S, volume flux per surface area; Lp, hydraulic conductivity; [NOx], nitrite/nitrate concentration; sysHct, systemic hematocrit; σ(πc − πi), sigma delta pi; [WBC], white blood cell concentration.

Table 3. Summary of waveforms and associated variables observed in annual time series data for three groups of lung parasite conditions in Rana pipiens. Data are presented in the figures listed. No H, no Hematoloechus; H Not Att, Hematoloechus not attached; H Att, Hematoloechus attached to the inner surface of the lungs; [skpr], skin lymph fluid protein concentration; sysHct, systemic hematocrit; [RBC], red blood cell concentration; [WBC], white blood cell concentration; plasma [NOx], nitrite/nitrate concentration; tHct, capillary tube hematocrit; Lp, capillary hydraulic conductivity; σ(πc − πi), capillary sigma delta pi; [plpr], plasma protein concentration; [abpr], abdominal cavity fluid protein concentration; [Hb], hemoglobin concentration; MCHb, mean corpuscular hemoglobin; MCV, mean corpuscular volume; Jv/S, capillary volume flux per surface area.
Modulation of Tim-3 Expression by Antigen-Dependent and -Independent Factors on T Cells from Patients with Chronic Hepatitis B Virus Infection

T-cell immunoglobulin domain and mucin domain-containing molecule-3 (Tim-3) is up-regulated on virus-specific T cells and contributes to T cell exhaustion during chronic hepatitis B virus (HBV) infection. However, the modulation of Tim-3 expression is still not fully elucidated. To evaluate the potential viral and inflammatory factors involved in the induction of Tim-3 expression on T cells, 76 patients with chronic HBV infection (including 40 with chronic hepatitis B [CHB] and 36 asymptomatic HBV carriers [AsC]) and 40 normal controls (NCs) were enrolled in this study. Tim-3 expression on CD4+ and CD8+ T cells was assessed by flow cytometry in response to stimulation with HBV-encoded antigens, HBV peptide pools, and common γ-chain (γc) cytokines. HBV peptides and anti-CD3/CD28 directly induced Tim-3 expression on T cells. γc cytokines also drove Tim-3 up-regulation on both CD4+ and CD8+ T cells in patients with chronic HBV infection. However, γc cytokines did not enhance Tim-3 induction by either anti-CD3/CD28 or HBV peptide stimulation. Furthermore, γc cytokine-mediated Tim-3 induction could not be abrogated by γc cytokine receptor-neutralizing antibodies. The current results suggest that the elevation of Tim-3 expression on T cells is regulated in both an antigen-dependent and an antigen-independent manner in patients with chronic HBV infection. The role of γc cytokines in modulating this inhibitory pathway might be evaluated for immunotherapies in humans.

INTRODUCTION

Hepatitis B virus (HBV) leads to chronic infection in 10% of adults and 90% of children, and 1-2 million people die annually worldwide from HBV-related end-stage liver diseases, such as liver cirrhosis, hepatic failure, and hepatocellular carcinoma (Hoofnagle et al., 2007; Lok and McMahon, 2009; Lu and Zhuang, 2009). The outcome of hepatitis B is closely linked to the immune status that mediates clearance of the virus. Interferon-γ (IFN-γ) production by virus-specific CD4+ and CD8+ T cells is pivotal for controlling acute hepatitis B virus infection (Rehermann et al., 1995; Bertoletti and Naoumov, 2003). In contrast, the inability of T cells to respond results in the collapse of the HBV-specific adaptive immune response in chronic hepatitis B (CHB) (Bertoletti and Naoumov, 2003; Chisari et al., 2010). More importantly, chronic HBV infection is not directly associated with liver inflammation, which is the result of interaction between the virus and the host immune response. Chronic HBV infection can be divided into different phases. The immune-tolerant phase is characterized by high HBV DNA and normal ALT, presenting as asymptomatic HBV carriage, while CHB patients show acute increases in ALT and continuing hepatic injury. However, the precise mechanisms underlying T cell tolerance and immune evasion in chronic HBV infection are still not fully elucidated.

Recent studies revealed that multiple inhibitory immune regulatory proteins, including programmed death-1 (PD-1), cytotoxic T-lymphocyte antigen-4 (CTLA-4), and T-cell immunoglobulin domain and mucin domain-containing molecule-3 (Tim-3), are involved in the modulation of T cell impairment during chronic infections (Seddiki et al., 2014; Pauken and Wherry, 2015a,b). Tim-3 can be expressed on several cell types in the immune system, including CD4+ and CD8+ T cells (Monney et al., 2002; Hastings et al., 2009; Dorfman et al., 2010).
The role of Tim-3 can vary depending on the context in which it is expressed (Gorman and Colgan, 2014). A study of tuberculosis infection provided evidence that Tim-3 promoted both CD4+ and CD8+ T cell responses (Qiu et al., 2012). However, Tim-3 was found to strongly suppress T cell functions and was associated with T cell impairment or exhaustion in autoimmune diseases (Lee and Goverman, 2013) and chronic microbial infections (Jones et al., 2008; Sehrawat et al., 2009; Moorman et al., 2012). Furthermore, Tim-3 contributed to T cell exhaustion partly by enhancing the T cell receptor (TCR)-signaling pathway (Wherry, 2011; Ferris et al., 2014), while TCR engagement was also an essential component of Tim-3 elevation, based on the finding that CD3/CD28 costimulation up-regulated Tim-3 expression on CD4+ T cells (Hastings et al., 2009).

Overexpression of Tim-3 contributed to HBV persistence by induction of T cell dysfunction (Nebbia et al., 2012), inhibition of virus-specific CD8+ T cells (Ju et al., 2009; Wu et al., 2012), and suppression of natural killer cells (Ju et al., 2010). However, the role of elevated Tim-3 on T cells in chronic HBV infection is still poorly understood. The human immunodeficiency virus (HIV)-1 protein Nef directly induced expression of PD-1, another exhaustion marker (Muthumani et al., 2008). Moreover, common γ-chain (γc) cytokines induced Tim-3 expression in an antigen-independent manner in HIV-1 infection (Mujib et al., 2012). Thus, we hypothesized that soluble viral and inflammatory factors may be involved in the induction of Tim-3 expression on T cells. To test this possibility, Tim-3 expression on CD4+ and CD8+ T cells was examined in response to stimulation with HBV antigens, peptides, or γc cytokines. The synergistic effects of these factors were also evaluated by costimulation.

Subjects

A total of 76 hepatitis B e antigen (HBeAg)-positive HBV-infected patients, including 40 CHB patients and 36 asymptomatic HBV carriers (AsC), were enrolled in this study. The diagnoses were made according to the diagnostic standard of the Chinese Guideline of Prevention and Treatment for Chronic Hepatitis B (2010 version). All patients were hospitalized or followed up in Tangdu Hospital from March 2011 to July 2014. No patients were co-infected with HIV or other hepatitis viruses or concurrently afflicted by autoimmune diseases. Patients who had previously received anti-HBV agents or immunomodulatory treatments were also excluded. Forty healthy individuals with matched age and sex were enrolled as normal controls (NCs). The clinical data obtained for the enrolled subjects are listed in Table 1. The study protocol was approved by the ethics committee of Tangdu Hospital, Fourth Military Medical University, and written informed consent was obtained from each subject.

Virological and Biochemical Assessments

HBV DNA was quantified using a commercial real-time PCR kit (PG Biotech, Shenzhen, China) with a detection limit of 2 log10 copies/mL. Hepatitis B surface antigen (HBsAg), anti-HBs, HBeAg, anti-HBe, and anti-hepatitis B core antigen were quantified using the ARCHITECT HBsAg, anti-HBs, HBeAg, anti-HBe, and anti-HBc reagent kits (Abbott GmbH & Co. KG, Wiesbaden, Germany). Serum biochemical assessments were made using an automatic analyzer (Hitachi 7170A, Hitachi Ltd., Tokyo, Japan).

Statistical Analysis

Data were analyzed using GraphPad Prism version 5.0 (GraphPad Software, La Jolla, CA, USA). The Kruskal-Wallis H test and Dunn's multiple comparison test were used for comparisons among groups. The Mann-Whitney test was used for comparisons between two groups. A value of P < 0.05 was considered to indicate a significant difference.
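A minimal sketch of this testing scheme follows, assuming the third-party scikit-posthocs package for Dunn's test; the group arrays and values are hypothetical.

```python
from scipy import stats
import scikit_posthocs as sp   # assumed third-party package for Dunn's test

# Hypothetical Tim-3+ frequencies (%) for NC, AsC, and CHB groups.
nc  = [1.1, 0.9, 1.4, 1.2, 0.8]
asc = [2.0, 2.4, 1.8, 2.6, 2.2]
chb = [4.5, 3.9, 5.2, 4.8, 4.1]

# Kruskal-Wallis H test across the three groups...
h, p = stats.kruskal(nc, asc, chb)
print(f"Kruskal-Wallis: H={h:.2f}, P={p:.4f}")

# ...followed by Dunn's multiple comparison test when significant.
if p < 0.05:
    print(sp.posthoc_dunn([nc, asc, chb], p_adjust="bonferroni"))

# Pairwise two-group comparisons use the Mann-Whitney test.
print(stats.mannwhitneyu(asc, chb))
```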
Common γc Cytokines Did Not Enhance Tim-3 Induction by Either Anti-CD3/CD28 or HBV Peptide Stimulation

The γc cytokines IL-2, IL-7, and IL-15 were the more potent inducers of Tim-3 on T cells and, thus, were studied further. We analyzed whether the γc cytokines had a synergistic effect with anti-CD3/CD28 or peptide stimulation on Tim-3 expression on T cells. PBMCs from 10 AsC and 10 CHB patients, which were selected from the above experiments of Figure 1 but did not overlap with the patients from Figure 2, were cultured with anti-CD3/CD28, alone or in the presence of either IL-2, IL-7, or IL-15. Tim-3 levels were assessed 4 days after stimulation. The addition of IL-2, IL-7, or IL-15 in costimulation with anti-CD3/CD28 did not result in increased frequencies of either Tim-3+CD4+ T cells (Figures 3A,C) or Tim-3+CD8+ T cells (Figures 3B,D). Similar observations were made with IL-15 and cells stimulated with the HBV peptide pool. Tim-3+CD8+ T cell frequencies were not elevated in response to IL-15 and peptide costimulation in either AsC (7.17 ± 3.25%, P = 0.912; Figure 3E) or CHB (9.06 ± 4.53%, P = 0.143; Figure 3F).

Common γc Cytokine-Mediated Tim-3 Induction Could Not Be Abrogated by γc Cytokine Receptor-Neutralizing Antibody

γc cytokines increased Tim-3 expression on T cells through the γ chain of the receptor in HIV-1 infection (Mujib et al., 2012). We then further analyzed whether signaling through the γ chain was also the pathway regulating Tim-3 expression in HBV infection. PBMCs from 8 AsC and 8 CHB patients, which were selected from the above experiments of Figure 1 but did not overlap with the patients from Figure 2 or Figure 3, were cultured with anti-common γc receptor-neutralizing antibody at 10 µg/mL for 4 h, and then γc cytokines, anti-CD3/CD28, or HBV peptides were added for another 4 days of treatment. Compared with PBMCs that did not receive γc-neutralizing antibody, neither CD4+ nor CD8+ T cells displayed reduced frequencies of Tim-3+ cells with each cytokine stimulation (P > 0.05, Figure 4). There were consistent trends toward reduction of Tim-3 expression in AsC patients in response to IL-2 (CD4+, 4.98 ± 2.40% to 3.62 ± 1.85%, P = 0.194, Figure 4A; CD8+, 3.49 ± 1.98% to 2.53 ± 1.79%, P = 0.199, Figure 4C) and in CHB patients in response to IL-7 (CD4+, 2.89 ± 0.74% to 1.94 ± 0.56%, P = 0.208, Figure 4B; CD8+, 5.13 ± 0.61% to 3.26 ± 0.65%, P = 0.051, Figure 4D), but these differences failed to achieve significance. Moreover, both TCR-stimulated cells (via anti-CD3/CD28 treatment) and virus-specific cells (via HBV peptide stimulation) were unaffected with regard to Tim-3 frequencies, despite the addition of γc-neutralizing antibody (P > 0.05, Figure 4).

FIGURE 2 | Common γ-chain (γc) cytokine-mediated induction of Tim-3 expression in CD4+ and CD8+ T cells contained in peripheral blood mononuclear cells (PBMCs). PBMCs were selected from 18 AsC and 20 CHB patients, which were used in the experiments of Figure 1. Total PBMCs were treated with IL-2 (25 ng/mL), IL-7 (25 ng/mL), IL-15 (25 ng/mL), IL-21 (25 ng/mL), or anti-CD3/CD28 (1 µg/mL) for 4 days. Tim-3 expression was assessed on CD4+ and CD8+ T cells. (A) Comparison of frequencies of CD4+Tim-3+ cells in response to γc cytokine and anti-CD3/CD28 stimulation in AsC. (B) Comparison of frequencies of CD4+Tim-3+ cells in response to γc cytokine and anti-CD3/CD28 stimulation in CHB. (C) Comparison of frequencies of CD8+Tim-3+ cells in response to γc cytokine and anti-CD3/CD28 stimulation in AsC. (D) Comparison of frequencies of CD8+Tim-3+ cells in response to γc cytokine and anti-CD3/CD28 stimulation in CHB. Data are presented as box-and-whisker plots. The box presents the median and quartiles, and the whiskers present the 2.5-97.5% percentiles. Dunn's multiple comparison test was used for comparison between groups.
We then further analyzed the phosphorylation of STAT-1 in the γc receptor-mediated signaling pathway. PBMCs were selected from 10 CHB patients enrolled in Figure 1. As shown in Figure 5A, IL-15 stimulation significantly increased the mean fluorescence intensity (MFI) value of pSTAT-1 (blue dashed line) in comparison with normal PBMCs (purple dashed line) (P = 0.0007, Figure 5B). Importantly, inhibition of the γc receptor by neutralizing antibody significantly reduced the phosphorylation of STAT-1 (red line) (P = 0.023, Figure 5B), which confirmed the successful blockade of the γc receptor. Moreover, although the MFI value of pSTAT-1 in IL-15-stimulated, γc receptor-neutralized PBMCs (green line) was reduced in comparison with IL-15-stimulated normal PBMCs (P = 0.0027, Figure 5B), it was still remarkably elevated in comparison with the MFI value in γc receptor-neutralized PBMCs (P < 0.0001, Figure 5B).
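As a rough numerical illustration of the blockade readout above, the following sketch expresses pSTAT-1 MFI under each condition relative to unstimulated PBMCs; the condition names and MFI values are hypothetical placeholders, not data from the study.

```python
# Percent-change summary for a blockade-verification readout: pSTAT-1
# MFI per condition relative to unstimulated PBMCs.

mfi = {
    "unstimulated": 210.0,
    "IL-15": 640.0,
    "anti-gc + IL-15": 420.0,
    "anti-gc alone": 150.0,
}

baseline = mfi["unstimulated"]
for condition, value in mfi.items():
    change = 100.0 * (value - baseline) / baseline
    print(f"{condition}: MFI={value:.0f} ({change:+.1f}% vs unstimulated)")

# A partial drop with anti-gc + IL-15 that stays above anti-gc alone
# mirrors the pattern described: blockade reduces, but does not abolish,
# the IL-15-driven pSTAT-1 signal.
```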
FIGURE 4 | Induction of Tim-3 expression on T cells within PBMCs by the common γ-chain (γc) cytokines IL-2, IL-7, and IL-15 could not be abrogated in the presence of an anti-common γc receptor neutralizing antibody, compared with no antibody treatment. PBMCs were selected from 8 AsC and 8 CHB patients, who were used in the experiments of Figure 1 but did not overlap with the patients from Figures 2, 3. (A) Comparison of frequencies of CD4 + Tim-3 + cells in response to γc cytokine stimulation with or without the neutralizing antibody in AsC. (B) Comparison of frequencies of CD4 + Tim-3 + cells in response to γc cytokine stimulation with or without the neutralizing antibody in CHB. (C) Comparison of frequencies of CD8 + Tim-3 + cells in response to γc cytokine stimulation with or without the neutralizing antibody in AsC. (D) Comparison of frequencies of CD8 + Tim-3 + cells in response to γc cytokine stimulation with or without the neutralizing antibody in CHB. Data are presented as mean and standard deviation. The Mann-Whitney test was used for comparisons between groups. #P > 0.05.

Although we did not find a significant correlation between Tim-3 expression and HBV DNA or ALT levels in HBV-infected individuals, both immunological and inflammatory factors might contribute to the regulation of Tim-3 expression in HBV infection. HBV-encoded antigens are strong immunogens that activate and stimulate immune cells. Thus, we postulated that soluble HBV viral products could also induce Tim-3 expression. A mixture of HBsAg, HBeAg, and HBcAg was used to stimulate cultured PBMCs in vitro. However, Tim-3-expressing CD4 + and CD8 + T cells were not elevated in response to these antigens in either AsC or CHB patients. In contrast, frequencies of Tim-3 expression were remarkably increased in response to HBV peptide pool stimulation. This is partly because T cells recognize peptides derived from a foreign antigen bound to MHC molecules, whereas the HBV-encoded antigens were unmodified native proteins with different conformations, which did not expose the epitopes recognized by MHC molecules. Moreover, direct stimulation of the TCR by anti-CD3/CD28 also notably increased Tim-3 expression on CD4 + and CD8 + T cells in both AsC and CHB. Thus, Tim-3 expression in HBV-infected individuals was partly regulated in an antigen-dependent manner as a result of infection. Previous studies have demonstrated that Tim-3 can be up-regulated both dependently and independently of TCR or antigenic stimulation in viral infection (Hastings et al., 2009; Mujib et al., 2012). Mujib et al. (2012) revealed that Tim-3 could be up-regulated in vitro in inflammatory states enriched in γc cytokines in HIV-1 infection. Our observation that the γc cytokines, specifically IL-2, IL-7, IL-15, and IL-21, were potent inducers of Tim-3 expression on T cells in an antigen-independent manner in HBV infection is consistent with the role of these cytokines in HIV-1 infection (Mujib et al., 2012). Elevations of γc cytokines have been shown to be associated with spontaneous viral clearance and HBeAg seroconversion (He et al., 2013). γc cytokines are predominantly related to the regulation of lymphocyte development, homeostasis, and function (Overwijk and Schluns, 2009). IL-2 is a potent inducer of T cell proliferation as well as Th1/Th2 differentiation in the inflammatory response (Hoyer et al., 2008). Both IL-7 and IL-15 robustly expanded dendritic cell-activated HBV-specific CD4 + T cells in vitro (Chen et al., 2006).
Moreover, IL-15 is also important in the development and homeostasis of memory CD8 + T cells, NK cells, and NKT cells (Villinger et al., 2004). IL-15 also inhibited HBV replication via IFNβ production and exerted anti-HBV functions independent of the γc receptor in a mouse model (Yin et al., 2012). IL-21, derived from HBV-specific CD4 + T cells, played vital roles in sustaining viral-specific CD8 + T cells and promoting the B cell response (Li et al., 2015), although our previous studies showed that IL-21 did not enhance the HBV-specific immune response in mouse models. Importantly, high serum IL-21 levels after 12 weeks of telbivudine therapy predicted HBeAg seroconversion in CHB. Thus, the up-regulation of Tim-3 in response to γc cytokine (IL-2, IL-7, IL-15, and IL-21) stimulation indicates that Tim-3 may play a negative regulatory role in response to these cytokines, which is consistent with the previously proposed roles of Tim-3 expression on T cells (Sakuishi et al., 2011; Mujib et al., 2012). Although γc cytokines are considered proinflammatory, their involvement in the up-regulation of Tim-3 suggests that they are also responsible for activation of inhibitory pathways in viral infections (Mujib et al., 2012). Furthermore, costimulating cells with γc cytokines plus anti-CD3/CD28 or HBV peptides did not result in greater Tim-3 induction compared with mono-stimulation, which suggests that the antigen-dependent and -independent modes of induction do not act synergistically in Tim-3 regulation and that each pathway alone is sufficient for Tim-3 induction.

We were not able to diminish the γc cytokine-induced Tim-3 elevation on T cells with the anti-common γc receptor neutralizing antibody in patients with chronic HBV infection. γc cytokines share γc receptor usage and signal through specific heterodimeric or trimeric receptor complexes (Toe et al., 2013). The consequences of cognate receptor engagement depend on receptor expression patterns, expression levels, and downstream JAK-STAT signaling components (Toe et al., 2013). The downregulation of phosphorylated STAT-1 demonstrated successful blockade of the γc receptor; however, the neutralizing antibody might only partly block the function of the common γc receptor, and other components of the receptor complex might play important roles in γc cytokine-induced Tim-3 expression. This is supported by the observation that IL-15 stimulation could still increase the pSTAT-1 level in γc receptor-inhibited PBMCs. Another possibility is that the γc receptor neutralizing antibody caused a shift in the functional status of Tim-3 + T cells. Moreover, γc cytokines might also modulate Tim-3 expression through other signaling pathways. Thus, further studies are needed to investigate STAT phosphorylation and the changes in Tim-3 expression upon functional blockade of the other components of the receptor complex.

In conclusion, both HBV peptides and γc cytokines induced the up-regulation of Tim-3, which suggests that the elevation of Tim-3 expression on T cells can be regulated in both antigen-dependent and -independent manners in patients with chronic HBV infection. The role of γc cytokines in the modulation of this inhibitory pathway could be evaluated for immunotherapies in humans.

AUTHOR CONTRIBUTIONS

JD, XY, and HS performed the study. XW, LW, C-XH, and YZ enrolled the patients. JD, XW, C-QH, LW, AW, C-XH, YZ, and JL analyzed the data and prepared the manuscript. YZ and JL designed and supervised the study.
FUNDING

This work was supported by grants from the National Natural Science Foundation of China (31370856, 81671555, and 81072353) and the National Science and Technology Major Project of China (2012ZX10002007-001-006).
Human Gingival Fibroblast and Osteoblast Behavior on Groove-Milled Zirconia Implant Surfaces

Two types of cells representing periodontal hard tissue (osteoblasts) and soft tissue (fibroblasts) were evaluated in response to microgroove-milled zirconia surfaces. A total of 90 zirconia discs were randomly assigned to four width-standardized milling microgroove-textured groups and a control group without grooves (UT). The sandblast and acid-etch protocol was applied to all samples. Both cell lines were cultured on the zirconia discs for 1 day up to 14 days. Cell morphology and adhesion were evaluated after 1 day of culturing, and cell viability and proliferation were measured. Alkaline phosphatase activity and collagen I, osteopontin, interleukin 1β and interleukin 8 secretion were assessed at predefined times. The results obtained are presented in the form of bar graphs as means and standard deviations. Multiple comparisons between groups were evaluated using two-way ANOVA or Mann-Whitney tests, with significance established at a p-value < 0.05. Group comparisons with regard to cell viability, proliferation and secretion of collagen I, interleukin 1β and interleukin 8 revealed no statistically significant differences. Alkaline phosphatase activity and osteopontin secretion were significantly higher in the group with a large groove compared to the small one and the control group. The viability of gingival and bone cells thus did not appear to be affected by the milled microgroove textures compared to the conventional sandblasted and acid-etched texture, but the textures seem to influence osteoblast differentiation.

Introduction

Since their introduction, dental implants have revolutionized implantology and have become a routine treatment in everyday dental practice. Despite the high survival rate of implant treatment [1], inflammatory conditions around osseointegrated dental implants have been reported, resulting in bone and soft tissue losses [2-5]. The frequent occurrence of these phenomena in daily oral rehabilitation underscores the need for new materials and improved implant surfaces that can promote osseointegration and preserve bone and gum tissue around dental implants. The first material successfully used for dental implants was titanium, but the drawbacks identified over the years, such as the release of nanoparticles, allergic reactions and the poor aesthetics associated with its gray color, led to the search for new materials such as ceramics [1,6-8]. Recent technological advances in dental materials have increased the number of commercially available ceramic materials for clinical use, such as alumina ceramic, machinable glass ceramic, feldspathic porcelain, zirconia ceramic and others. Among them, yttria-stabilized polycrystalline tetragonal zirconia (YTZP) is the most popular because of its mechanical properties and similarity to titanium. In addition, being metal-free and having an aesthetic white color and tooth-like translucency, YTZP could become the preferred material for dental implants in the future as an alternative to titanium implants [9-13]. The effectiveness of dental implants is mainly based on the bone-implant interaction [14]. However, several factors can influence this process, such as the chemical composition, surface wettability, roughness and topography of the dental implant surface [15-17].
Surface modifications such as increased surface roughness are often used on zirconia implants, as it has been recognized that they affect the in vitro cell response and implant integration into bone in vivo. The micro irregularities created by sandblasting and acid etching (SBAE) improve the biocompatibility of the implants, increasing the contact surface area and the adhesion of osteogenic cells. Microroughness from 1 to 100 µm, obtained by blasting and acid etching, results in improved osteoblast biological behavior [13,18-20]. The topography of the implant surface can be divided into three levels depending on its characteristics: macro (10 µm to 1 mm), micro (1-10 µm) and nano (1-100 nm) scale [15]. These characteristics have been reported in recent studies of titanium implants as having a significant impact on promoting bone ingrowth [21], but this has hardly been investigated for YTZP dental implants. Overall, it has been described that microtopography is a critical determinant of human cell adaptation and differentiation [22], due to the production of directional physical signals involved in cell regulation and in the assembly and orientation of the collagen matrix [23]. In that regard, the standardization of implant surfaces by creating topographies on them has been described over the years [24-26]. Among the various types of topographies designed, microgroove topographies have been widely explored for their impact on cell alignment, as they can be generated relatively conveniently using a range of microfabrication techniques such as conventional milling. In vitro studies of osteogenic cell behavior showed that the cells are strongly oriented in the direction of the grooves when compared to surfaces without texture, on which arbitrary alignment is usually noticed. However, the ideal texture and dimensions for optimal osseointegration are still being extensively researched, and most of these studies have been performed on titanium implant surfaces. It is therefore important to know whether creating microgroove patterns on zirconia implants would be beneficial for hard and soft tissues [22,26]. The purpose of this investigation was therefore to understand whether the behavior of osteoblasts and fibroblasts on width-standardized milling microgroove-textured zirconia surfaces is improved compared to sandblasted and acid-etched microtopography.

Substrates

Ninety zirconia discs, 8 mm in diameter and 2 mm in thickness, were made of a commercially available 3 mol% yttria-stabilized zirconia powder (3Y-TZP) (3YSB-E, Tosoh Corporation, Tokyo, Japan) with a particle size of 40 nm and an average cluster size of 60 µm. Its chemical compounds are described in Table 1. The production was based on the pressing and sintering technique. After the pressing process (210 MPa) in a steel mold with an internal diameter of 10 mm and a height of 50 mm, all samples except for the control group (UT) were textured with width-standardized milling microgrooves, defining groups A, B, C and D according to Table 2. Once the surfaces were textured, all samples were sintered in an oven (Zirkonofen 700, South Tyrol, Italy) and then cleaned with isopropyl alcohol using ultrasonic equipment. The samples were then sandblasted for 30 s with alumina (Al2O3) particles with an average size of 250 µm, at 6 bars of pressure and 12 cm of distance, and washed again with ultrasonic equipment and isopropyl alcohol.
After this step, each sample was immersed in hydrofluoric acid (48% HF) at room temperature for half an hour and cleaned once more with ultrasonic equipment, being immersed in isopropyl alcohol for 5 min, resulting in a surface roughness of 1.45 µm. In the final step, an ultrasonic bath with absolute ethanol was once again performed for all the samples, which were then sterilized in the autoclave in order to carry out the biological tests (Figure 1). All the biological tests were repeated in three independent experiments. The final aspect of the samples from each group after the surface treatment was observed and evaluated by SEM JSM-6010 LV (JEOL Ltd., Tokyo, Japan). SEM images were obtained at 500× magnification at 10 kV acceleration voltage, and Backscattering Electron Detector (BSED) images were taken at 15 kV (Figure 2). FEG-SEM images revealed similar topographies in all textured samples. Both cell lines were incubated at 37 °C with 5% CO2 and 98% humidity. When the cells reached approximately 80% confluence, trypsin-EDTA (Lonza, Verviers, Belgium) was added to detach them; they were then centrifuged, and the pellet was resuspended in the respective medium.
To perform each cell culture assay, 1 × 10 4 cells/mL at the 4th passage were seeded in 48-well plates containing the sterile samples (Corning, NY, USA).

Fibroblast and Osteoblast Cell Viability and Proliferation

The viability and proliferation of both cell types were assessed with the Cell-Titer Blue ® reagent (Promega, Madison, WI, USA) following the manufacturer's instructions. After 1, 3, 7 and 14 days of culturing, fluorescence intensity was measured in arbitrary fluorescence units (AU). The detection range used was excitation/emission wavelengths of 560/590 nm, measured with a luminescence spectrometer (PerkinElmer LS 50B, Waltham, MA, USA). N = 15 samples cultured with osteoblasts and fibroblasts were analyzed per group for the viability and proliferation of the two cell lines.

Fibroblast and Osteoblast Morphology

To determine cell morphology, samples with osteoblasts and fibroblasts were observed after one day of culturing. After washing, all cell samples were fixed with 1.5% glutaraldehyde and dehydrated with increasing ethanol concentrations (70%, 80%, 90% and 100%). The samples were incubated in hexamethyldisilazane (HMDS; 440191, Aldrich Chemistry, Milwaukee, WI, USA) and then sputter-coated with gold (LEICA EM ACE600, Heerbrugg, Switzerland), with a 15 nm ultrathin gold-palladium (80-20% by weight) film applied using a high-resolution sputter coater (208HR, Cressington Company, Watford, UK) coupled with an MTM-20 Cressington High Resolution Thickness Controller. Scanning electron microscopy (SEM JSM-6010 LV, JEOL Ltd., Akishima, Japan) was carried out at different magnifications (500, 2000, 5000×) at 10 kV, and Backscattering Electron Detector (BSED) images were taken at 15 kV. Two researchers performed the observations, focusing on morphology, spreading and early cell contact establishment with the materials. N = 3 samples cultured with osteoblasts and fibroblasts were analyzed for SEM images.

The Activity of Alkaline Phosphatase (ALP)

The activity of ALP was evaluated in osteoblasts at 7 days of culturing with a fluorometric enzymatic test (ab83371 ALP Assay Fluorometric, Abcam, Cambridge, UK) following the manufacturer's instructions. To evaluate the enzyme activity, a standard curve was elaborated. Both samples and standards were measured at excitation/emission wavelengths of 360/440 nm using a fluorescence spectrometer (PerkinElmer LS 45, Waltham, MA, USA). The ALP values were transformed into mU/µL according to the regression equation from the standard curve. N = 4 samples cultured with osteoblasts were analyzed per group for ALP activity.

Quantification of Collagen I by ELISA Method

The quantification of collagen in each osteoblast and fibroblast cell culture was carried out after 7 days of culturing. The Human Pro-Collagen I alpha 1 DuoSet ELISA (DY6220-05, R&D Systems, Inc., Minneapolis, MN, USA) was used according to the manufacturer's protocol. The signal of all the samples was detected at 540 nm using a Multimode Plate Reader (PerkinElmer ® Inc., Waltham, MA, USA). Results were acquired as absorbance values and transformed into pg/mL according to the standard curve. N = 4 osteoblast and fibroblast culture suspensions were analyzed per group to quantify collagen I.
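Across these ELISA-type readouts, converting raw plate-reader signals into concentrations "according to the standard curve" reduces to fitting the calibrators and inverting the fit. Below is a minimal Python sketch with hypothetical calibrator values; note that real kits often recommend a four-parameter logistic fit rather than the straight line used here for brevity:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical calibrators: known concentrations (pg/mL) and measured signal.
std_conc   = np.array([0, 125, 250, 500, 1000, 2000], dtype=float)
std_signal = np.array([0.05, 0.18, 0.33, 0.61, 1.15, 2.20])

# Fit signal = slope * concentration + intercept on the standards.
fit = linregress(std_conc, std_signal)
print(f"Standard curve R^2 = {fit.rvalue**2:.4f}")

def to_conc(signal):
    """Invert the regression to convert raw readings into pg/mL."""
    return (np.asarray(signal) - fit.intercept) / fit.slope

samples = [0.27, 0.74, 1.02]          # raw plate-reader readings
print(to_conc(samples).round(1))      # estimated concentrations in pg/mL
```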
Quantification of Osteopontin by ELISA Method

The quantification of osteopontin was carried out using the ELISA Chemiluminescent Human Osteopontin kit (LumiAB TM , San Francisco, CA, USA) with a luminescence technique on a Multimode Plate Reader (PerkinElmer ® Inc., Waltham, MA, USA) after 3 days of osteoblast culturing. The signal of the samples was detected at 700 nm. Results were acquired as absorbance values and transformed into concentrations (pg/mL) based on the standard curve. N = 4 osteoblast culture suspensions were analyzed per group to quantify osteopontin.

Quantification of Interleukin 8 by ELISA Method

Interleukin 8 was quantified with the Human IL-8 Chemiluminescent ELISA Kit (LumiAB TM , San Francisco, CA, USA) on a Multimode Plate Reader (PerkinElmer ® Inc., Waltham, MA, USA) at 1 day of fibroblast culturing. Results were acquired as absorbance values and transformed into concentrations (pg/mL) based on the standard curve. N = 4 fibroblast culture suspensions were analyzed per group to quantify IL-8.

Statistical Evaluation

The statistical evaluation was performed with IBM ® SPSS ® 24.0 for MacBook (SPSS, Chicago, IL, USA). Normality analyses were carried out for all data. Comparisons among groups in terms of viability, proliferation, ALP, interleukin 1β, collagen I, osteopontin and interleukin 8 were based on two-way ANOVA or Mann-Whitney tests. Tukey's post-hoc analysis was used to find the statistical differences between the groups, and the significance level was established at a p-value < 0.05. The obtained data are presented as mean ± standard deviation (SD).

Fibroblast and Osteoblast Viability and Proliferation

The viability and proliferation results for the two cell lines, fibroblasts and osteoblasts, were attained for 1, 3, 7 and 14 days of culturing (Figure 3). The viability of osteoblasts increased over the 14 days of culturing, although without significant differences when comparing all groups (p > 0.05). Likewise, the proliferation of the osteoblast cells showed greater cell growth in the first seven days of culturing, with no statistically significant differences between any of the groups (p > 0.05). Regarding the fibroblasts' behavior, the group comparison revealed no statistically significant differences (p > 0.05). Over the culturing time, the cell proliferation rate was higher from 3 to 7 days of fibroblast culturing, but again not statistically different between groups (p > 0.05).
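The statistical workflow just described (two-way ANOVA over surface group and culture time, followed by Tukey's post hoc comparison) can be sketched in Python with statsmodels; the viability readings below are synthetic and serve only to illustrate the procedure:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups, days, n = ["UT", "A", "B", "C", "D"], [1, 3, 7, 14], 15

# Synthetic fluorescence readings (AU): viability grows with culture day.
rows = [(g, d, 100 + 15 * np.log(d) + rng.normal(0, 8))
        for g in groups for d in days for _ in range(n)]
df = pd.DataFrame(rows, columns=["group", "day", "au"])

# Two-way ANOVA: surface group, culture day and their interaction.
model = ols("au ~ C(group) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD for pairwise group differences (alpha = 0.05).
print(pairwise_tukeyhsd(df["au"], df["group"], alpha=0.05))
```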
Fibroblast and Osteoblast Morphology

Images obtained from FEG-SEM after 1 day of osteoblast culturing (Figure 4) showed adherent cells with similar morphologies in all groups, with the exception of the C group, in which the cells showed an elongated phenotype with larger cytoplasmatic extensions. The cell bodies appeared to be distributed in the direction of the groove in all samples, and no differences in cell numbers were observed between any of the groups. In the culture with fibroblast cells (Figure 4), the images showed cell adhesion after 1 day of culturing. Fibroblasts showed a characteristic morphology in all groups, with many cell extensions and also flattened cell bodies. No cell distribution according to the pattern of the samples could be seen.

ALP Activity

ALP results based on the suspension of osteoblasts are shown in Figure 5. The results show the greatest ALP activity in the A group, but only with significant differences when compared to the B group (p < 0.05). When all the study groups were compared with regard to alkaline phosphatase activity, there were no statistically significant differences between all groups (p > 0.05).

Interleukin 1β

Figure 6 shows interleukin 1β production; the results are comparable between the groups, with no statistically significant differences among all groups (p > 0.05).

Collagen Type I

The level of collagen type I in the extracellular medium was measured after 7 days in osteoblast and fibroblast cultures (Figure 7). The results were identical between all study groups and for both cultures, with no statistically significant differences between them (p > 0.05).

Osteopontin

The osteopontin concentration was assessed after 3 days of osteoblast culturing (Figure 8). There was a marked increase in osteopontin production for all the samples, without statistically significant differences when all groups were compared (p > 0.05).
Interleukin 8

Fibroblast interleukin 8 production was analyzed after 1 day (Figure 9). No statistically significant differences were found after 1 day (p > 0.05). When the repeated measures were performed to compare the effect of all study groups on the concentration of interleukin 8, no statistically significant differences were found (p > 0.05).

Discussion

Yttria-stabilized polycrystalline tetragonal zirconia is a promising alternative to titanium, as noted in current literature reviews, based on its biocompatibility, better soft tissue performance linked to aesthetic outcomes and a similar tissue response regarding osseointegration [27-30]. In order to improve their biological behavior, several modifications are made to the surface of zirconia implants [5,31]. In this context, this in vitro study aimed to gain a deeper understanding of the effect of milled zirconia implant surfaces with width-standardized milling microgroove textures on the biological behavior of human fetal osteoblasts and human gingival fibroblasts compared to sandblasted and acid-etched surfaces, which are considered the gold standard surface modification for zirconia implant surfaces. Although a previous study was completed to find the drill bit diameter to be used during the texturing process and to understand the best parameters to obtain the desired dimensions, namely grooves with standardized widths and equal depths, the results for the topographic parameters analyzed turned out to be statistically different. Microgrooves made by conventional milling were observed with widths from 48 µm to 127 µm and depths from 8 µm to 17 µm. However, these surface modifications associated with patterned surfaces from 10 µm to 1 mm have not been adequately studied on zirconia surfaces, and most in vitro studies contradict each other when it concerns the pattern dimensions [32-34].
In addition to the milled microgroove patterns created on the tested groups, microtopography was produced on all groups by sandblasting and acid etching, a technique that makes it possible to discard the microroughness biases of the surfaces, a parameter that has already been described as fundamental to cell adhesion. This is a limitation found in most of the studies that claim that the creation of patterns promotes the growth and adhesion of osteoblasts on zirconia surfaces without actually controlling specific surface parameters, such as surface roughness [35]. The evaluation of the adhesion and distribution of osteoblasts and fibroblasts on milled microgroove-patterned zirconia surfaces with standardized widths was carried out. The results showed a greater affinity of the osteoblasts for the milled microgroove patterns, as well as a more differentiated morphology of the cell bodies. Fibroblast cells showed no signs of changes in cell distribution according to the milled microgroove pattern of the samples. The obtained results are in line with a previous study carried out by Zhu et al. in 2005 [36], which showed that nanogrooves play a central role in modulating osteoblast cell behavior and in the orientation and alignment of cell bodies. In addition, it has also been described that osteoblasts on grooved surfaces change their orientation and follow the groove direction [36]. A recent in vitro study carried out by Fernandes et al. demonstrates that osteoblast cells on zirconia implant surfaces texturized with microgrooves exhibited a characteristic filopodia and veil shape in the direction of the groove patterns created. Nonetheless, although that study was performed on zirconia implant surfaces with osteoblast cells, the microgroove pattern was created with a Nd:YAG laser instead of conventional milling [26]. Beyond studies on dental materials, a study by Sun et al. in 2016 on microgrooved polystyrene surfaces with osteoblast cells found that, on unpatterned surfaces, the cells were randomly oriented, but on the microgrooved surface, they were aligned following the grooves' direction with marked and typical morphologies [37]. This study also shows that width-standardized milling microgroove textures do not appear to affect fibroblast and osteoblast viability and proliferation. The obtained results are in accordance with a study by Fernandes et al. [26], in which they examined osteoblast cells on zirconia discs textured with a Nd:YAG laser and found an improvement in viability over time with no statistically significant differences compared to the control group. However, they report much higher viability values in the microgroove group compared to the control group. Another similar finding was made by Holthaus et al. in 2012 [38], in which the depth of the microchannels created by micromolding on ceramic surfaces had virtually no significant impact on osteoblasts, but the width of the groove seemed to determine the angle of growth of the osteoblasts. In the same study, proliferation and collagen production were not influenced by the surface pattern created [38]. Different results were presented by Nadeem et al. [39], showing that human mesenchymal cells cultured in grooves larger than 50 µm showed a more favorable osteogenic response than in grooves of 10 µm.
After a detailed analysis of those results, abrupt differences in the surface roughness of the groups examined were found, and the authors themselves also point to a stronger cell alignment in the 10 µm grooves [39]. This cellular behavior can be explained by the fact that the microroughness produced by SBAE is similar on all surfaces, with the microgrooves not adding any value to the behavior of the fibroblast and osteoblast cells. The actual influence of the surface pattern on the biological behavior of osteoblasts and fibroblasts cannot be extrapolated despite the available in vitro studies [40,41]. While an obvious effect of width-standardized milling microgroove textures on the shape and orientation of osteoblast cells was observed, different results were found for their viability. However, some of the main markers of osteoblast differentiation appear to be influenced by the width-standardized milling microgroove textures created on the zirconia implant surfaces examined in this study. It is known that when an implant material is placed in an edentulous patient, an inflammatory process is triggered. This inflammatory response to the implant material has been linked to several cytokines that are important at the molecular level for the tissue regeneration process [42]. The expression levels of ALP, IL-1β, collagen I, osteopontin and IL-8 on width-standardized milling microgroove textures were analyzed for osteoblast and fibroblast cell cultures. For human osteoblasts, ALP, IL-1β, collagen I and osteopontin were chosen as the main signal markers to assess the osteoblast phenotype, and no statistically significant differences were found in the expression levels of any of the width-standardized milling microgroove-textured groups compared to the control group treated only with sandblasting and acid etching. After 7 days of osteoblast culturing, however, samples from the A group (127 µm in width) showed the greatest ALP activity, but only with significant differences compared to the B group (48 µm in width). ALP is known as a marker of bone formation and calcification, and it is present during bone mineralization [43]. Therefore, it appears that the width of 127 µm found in the A group favors early osteoblast growth and differentiation. In this sense, a study by Miyahara et al. on macroscopic titanium grooves with a width and depth of 200 µm in contact with osteoblast cells revealed a significantly higher value of ALP activity when compared to surfaces without grooves. These results imply that grooves of greater width may accelerate the differentiation of osteoblast-related marker genes. However, that study was performed on titanium dental implants with the same width dimensions, and no data are available on the surface roughness of the disks tested [44]. A study on zirconia implant surfaces textured with microgrooves revealed no statistically significant differences in ALP expression in the microgroove groups compared to the control group and the sandblasted and acid-etched group [26]. However, they only evaluated the different spacings, and there is no information about the width of the microgrooves tested. Additionally, a statistically significant difference was found in osteopontin expression between group A and the control. It is likely that the addition of microgroove patterns (127 µm in width) influenced the behavior of the osteoblast cells by increasing the secretion of bone-related matrix proteins such as osteopontin [45]. Rezaei et al.
found similar results in their study on the biological and osseointegration capabilities of hierarchical mesoscale grooves, microscale valleys and nanoscale nodules. They showed that creating a hierarchically roughened morphology on zirconia influenced the behavior of the osteoblast cells, as it increased the secretion of osteopontin compared to machined surfaces [46]. Despite the higher osteopontin expression found in the milled microgroove groups compared to the control group, there was no statistically significant effect between them. The fibroblast expression markers evaluated in this study were collagen and IL-8. Although the expression levels of collagen I were comparable between the control group and the width-standardized milling microgroove-textured groups, the expression levels of IL-8 on the microgrooved zirconia implant surfaces were higher, albeit with no statistically significant difference. A recent study by Iglhaut et al. with fibroblast cells cultured on grooved implant surfaces shows an increase in IL-8 expression levels [47]. However, they did not report the width of the grooves or the surface roughness values. In this study, despite the higher IL-8 values in the width-standardized milling microgroove-textured groups, no significant effect of the width-standardized milling microgroove texture was observed. These results suggest that adding microgrooves to zirconia implant surfaces can improve their biological behavior, especially the biological response of osteoblast cells, which are the main cells of bone tissue. This may improve the long-term survival and success of zirconia dental implants, particularly in patients who have undergone extraoral bone grafting techniques or patients with systemic conditions, based on a potentially greater amount of formed bone tissue and improved tissue sealing [48-50]. This in vitro study, despite its limitations, evaluated the cellular behavior of the two key cell lines for the long-term maintenance of dental implants: osteoblasts for the osseointegration process and fibroblasts for soft tissue adaptation. A detailed evaluation of cellular behavior on zirconia surfaces is crucial, especially with these controlled production techniques. After in vitro validation, pre-clinical in vivo studies in animal models are essential to determine whether the biomaterial elicits a biological response good enough to proceed to in vivo clinical tests. Looking ahead, our group is testing bacterial colonization on these surfaces in order to improve implant surfaces and address the current major problems of long-term implant failure.

Conclusions

The addition of microgroove patterns of different dimensions by conventional milling to sandblasted and acid-etched zirconia implant surfaces does not seem to improve the cellular viability of human osteoblasts and gingival fibroblasts. However, large microgrooves on these gold-standard zirconia surfaces appear to affect the main osteoblast differentiation markers, such as ALP and osteopontin. The improvement of cellular behavior by microgroove surface manipulation could become a procedure in biomaterials manufacturing to improve their long-term survival.
The Osteogenic Function of Danggui Buxue Tang, a Herbal Decoction Containing Astragali Radix and Angelicae Sinensis Radix, Is Optimized by Boiling the Two Herbs Together: Signaling Analyses Revealed by Systems Biology

The therapeutic efficacy of a herbal mixture, being multi-target, multi-function and multi-pathway, is the niche of traditional Chinese medicine (TCM). Systems biology can dissect the network of signaling mechanisms in a complex biological system. In preparing TCM decoctions, boiling the herbs together in water is a common practice; however, the rationale of this specific preparation has not been fully revealed. An approach of mass-spectrometry-based multi-omics was employed to examine the profiles of cellular pathways, so as to understand the pharmacological efficacy of Danggui Buxue Tang (DBT), a Chinese herbal mixture containing Astragali Radix and Angelicae Sinensis Radix, in cultured rat osteoblasts and mesenchymal stem cells. The omics results from DBT-treated osteoblasts were compared with those from osteoblasts treated with a simple mixture of extracts of Astragali Radix and Angelicae Sinensis Radix, i.e., a herbal mixture prepared without boiling the herbs together. The signaling pathways responsible for energy metabolism and amino acid metabolism showed distinct activation when triggered by DBT, in contrast to the simple mixing of the two herbal extracts. The results support the notion that boiling the herbs together serves to maximize the osteoblastic functions of DBT, such as in energy and lipid metabolism. This illustrates the harmony of TCM formulation, in which the two herbs interact during preparation. The systems biology approach provides new and essential insights into the synergy of herbal preparation. Well-defined multiple targets and multiple pathways at different levels of omics are the key to modernizing TCM.

Introduction

The preparative procedure in making a herbal extract in traditional Chinese medicine (TCM) determines the final chemical composition of the extract and is therefore crucial for its pharmacological efficacy. For example, the Chinese Pharmacopeia has described four different herbal decoctions that can be derived from Coptis chinensis roots. These decoctions use the same herbal materials; however, they show distinct pharmacological activities and modes of action. The distinction is known to be triggered by the variation in preparation methods [1]. In general, TCM herbal formulae are typically composed of two or more herbs.
The herbal extracts were separated on an Agilent C 18 column.

Rat Osteoblast and Micromass Culture

Animal protocols were reviewed and approved by the Animal Experimentation Ethics Committee of the Department of Health, Hong Kong (No. 17-283 for Animal Ethics Approval), under the instructions of the "Principles of Laboratory Animal Care" (NIH publication No. DH/HA&P/8/2/3). Postnatal day 1 SD rats were dissected to obtain calvarias. Tissues were digested with 1% trypsin for 10 min and 0.2% collagenase for 65 min [11]. Afterwards, the supernatant was obtained via centrifugation at 1500 rpm for 5 min. Osteoblasts were incubated in MEM-α supplemented with 10% FBS and 1% penicillin/streptomycin. Proliferation of osteoblasts was assessed by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. For micromass culture, rat embryos were utilized to cultivate mesenchymal progenitor cells from limb buds. The ectoderms were removed after enzymatic digestion of limb buds with 0.1% trypsin and 2.4 U of dispase II for 20 min. The cell number was adjusted to 25 × 10 6 cells/mL. The micromass was maintained in CMRL 1066 medium with 10% FBS and 1% penicillin/streptomycin. The micromass was fixed with 4% paraformaldehyde for 15 min, incubated in Alcian blue 8 GS solution (0.1 mg/mL) for 1 h, then washed and immersed in glycerol for Alizarin Red S (40 mM, pH 4.2) staining.

Alkaline Phosphatase (ALP) and Collagen Quantification

ALP was extracted in lysis buffer (10 mM HEPES, pH 7.5, and 0.2% Triton X-100). The ALP enzymatic reaction was examined by mixing the cell lysate with 10 mM p-nitrophenyl phosphate (as the enzymatic substrate) in a buffer (pH 10.4) containing 0.1 M glycine, 1 mM MgCl 2 and 1 mM ZnCl 2 at 37 °C, and colorimetric analysis was conducted at 405 nm [11]. Dexamethasone and vitamin C were used as a positive control [12]. For determination of collagen, osteoblasts were cultured for 72 h with drug treatment. The cells were incubated in methanol overnight and placed in 0.1% picrosirius red staining solution (100 µL/well) for 3 h. The staining was visualized under a microscope.

Adhesion Assay

Osteoblasts were cultured for 72 h with drug treatment. Cells were trypsinized for 1-2 min and resuspended in fresh culture medium. The cell suspension was then adjusted to a concentration of 4 × 10 5 cells/mL and placed onto a cell culture plate. The cells were cultured for 30 min for attachment, and the suspended cells were collected. The unattached cells were counted via phase-contrast microscopy.

Shotgun Proteomics

For protein extraction, osteoblasts were subjected to three repeated freeze-thaw cycles and sonication in 8 M urea buffer (0.1% SDS, pH 7.4) for 5 min. Next, the cell lysate was precipitated in cold acetone. The proteins were resuspended in 4 M urea (pH 6.5). Forty milligrams of protein was reduced by DTT and alkylated by iodoacetamide (IAA). The modified proteins were then cleaved by trypsin (1:50, w/w) for 18 h at 37 °C. Next, the peptide sample was desalted by C 18 ZipTip (Millipore, Darmstadt, Germany) and dried under vacuum. Peptides were dissolved in 0.1% formic acid (FA) and directly loaded onto a C 18 capillary column (75 µm × 25 cm; 2 µm, 100 Å). The solvent elution was optimized using an Ultimate nanoLC system (Thermo Fisher Scientific, Waltham, MA, USA) at a flow rate of 300 nL/min, and a 120 min LC gradient of 2-90% acetonitrile (ACN) in 0.1% FA was utilized to separate the peptides.
The eluted peptides were detected by an Orbitrap Fusion Lumos mass spectrometer (Thermo Fisher Scientific). The electrospray ionization voltage was set at +2.3 kV, and the ion transfer tube temperature was set at 300 °C. The MS settings were as follows: 1 microscan for the MS 1 scan at 60 K resolution and MS 2 at 30 K resolution; full MS mass range: m/z 400-1500; and MS/MS mass range: m/z 100-2000. The automatic gain control target for MS 2 was 40 K; the maximum injection time was 20 ms; the higher-energy C-trap dissociation energy was 35% and the dynamic exclusion duration was 4 s.

Protein Identification and Quantification

MS data were analyzed via Thermo Scientific™ Proteome Discoverer™ 2.2. MS 2 data were searched with SEQUEST ® HT against a Rattus norvegicus database. Carbamidomethylation (+57.021 Da) of cysteine residues was set as a fixed modification, and oxidation of methionine residues (+15.9949 Da) and acetylation of the protein N-terminus (+42.0106 Da) were considered variable modifications. The MS 1 tolerance was set at 20 ppm, and the MS 2 tolerance at 0.8 Da. Peptide spectral matches were validated using the Percolator algorithm, based on q-values at a 1% false discovery rate. The proteomics data processing was conducted using the RT-aligner and feature-mapper nodes created for the untargeted label-free quantification workflow in Proteome Discoverer.

Lipidomics Analysis

The cellular lipids were obtained by homogenizing 1 × 10 7 osteoblasts with 300 µL LC-MS water, 600 µL methanol and 450 µL chloroform. The mixture was subjected to water-organic two-layer separation by centrifugation at 12,000 g for 15 min. The lower layer was obtained and dried under nitrogen. Aliquots of 60 µL of the lipid samples were pooled as a QC sample. The cellular lipids were analyzed on a Waters Acquity UPLC instrument coupled with a high-resolution mass spectrometer (TripleTOF 4600, AB SCIEX). A Waters C 18 column (1.7 µm, 2.1 × 100 mm) was utilized for the separation. The final acquisition method was as follows. Mobile phase A was ACN/water (60:40) with 10 mM NH 4 ; 13.1-20 min: 0% B. The flow rate was optimized at 0.2 mL/min. The MS conditions were set as follows: ion source gas 1 at 45, ion source gas 2 at 45, curtain gas at 30, temperature at 450 °C, ion-spray voltage floating at ±5 kV, de-clustering potential (DP) at 100 V and collision energy (CE) at 10 V. For the TOF-MS scan, the accumulation time was set at 0.1 s per spectrum, and TOF masses were acquired from 200 to 1200 Da. For the product ion experiment, the accumulation time was 0.05 s per spectrum, masses were acquired from 100 to 1160 Da, DP was set at 100 V and CE at 35 V with ±15 V collision energy spread. Information-dependent acquisition (IDA) was selected to perform the MS/MS scan, and its parameters were set as follows: exclude isotopes within 4 Da, mass tolerance at 10 ppm and the maximum number of candidate ions per cycle at 20. In the "exclude former target ions" setting, "for 15 s after two occurrences" was chosen (the time is usually set at half the length of a signal). In addition, "dynamic background subtraction" was selected in the IDA advanced module.

Data Extraction and Processing

Chromatographic peak identification and alignment were performed using Progenesis QI 2.3 (Nonlinear Dynamics, Newcastle upon Tyne, UK).
Unstable metabolite features were filtered out by applying a cut-off of >30% on the coefficient of variation in the QC samples. The matrix of normalized ion abundances was exported to SIMCA ® (version 13, Umetrics AB) for multivariate data analysis. The potential candidates were selected from the S-plots of OPLS-DA. The biomarkers were identified from their mass fragmentation and matched against the Human Metabolome Database, the Kyoto Encyclopedia of Genes and Genomes, METLIN, LipidMaps (www.lipidmaps.org, accessed on 16 August 2020), as well as LipidBlast, based on mass fragmentation, retention time and mass accuracy.

Pathway and Statistical Analysis

Signaling analysis was conducted with KOBAS (KEGG Orthology-Based Annotation System), a software package that interprets sequences with KEGG orthology terms and identifies the enriched pathways in the queried sequences, as compared to the background. The pathway databases employed here were KEGG, Reactome, Panther and GO. Pathway analysis was conducted using Ingenuity Pathway Analysis (Qiagen). The multivariate analysis was conducted using SIMCA ® . Data are represented as mean ± standard error of mean (SEM). Statistical analysis was performed with Student's t-test and Dunnett's test (SPSS, version 13). Differences were classified as statistically significant at * p < 0.05, ** p < 0.01 and *** p < 0.001.

The Osteoblastic Function of DBT Requires Boiling the Herbs Together

The herbal extracts under the different conditions were prepared by optimized methods [4] and were chemically standardized by HPLC. The herbal extracts of DBT (the authentic decoction), AR and ASR were subjected to HPLC analysis using UV and ELSD detectors (Figure S1A,B). In addition, the amounts of the key chemicals within the herbal extracts were measured (Figure S1C): the results were in accordance with previous reports [4,6,7,10]. In cultured osteoblasts, application of DBT induced cell proliferation, as well as the differentiation biomarker ALP, in a dose-dependent manner (Figure 1A,B). In both scenarios, the DBT-induced osteoblastic growth and differentiation were markedly higher than those induced by the herbal extract derived from AR + ASR (simple mixing of the two herbal extracts), suggesting a uniqueness of the DBT formulation, i.e., boiling the two herbs together. To detect the amount of collagen being deposited in osteoblasts, Sirius Red staining was used. In cultured osteoblasts, applied DBT was able to enhance the staining density of collagen significantly, by at least ~5-fold, as indicated by dark red clusters of collagen distributed throughout the cultures (Figure 2A). The effect of DBT was more robust than that of the simple mixing of AR + ASR. Moreover, the herbal-extract-treated osteoblasts were subjected to determination of the extracellular matrix, i.e., assessment of the adhesion property. The adhesion of cultured cells was much weaker with the AR + ASR extract, while DBT showed robust induction as compared to the positive control of dexamethasone and vitamin C (Figure 2B). The herbal extract from AR+ASR also showed a weak induction of chondrocyte and osteogenic differentiation (Figure 2C). In comparison to authentic DBT, the treatment of AR+ASR showed much weaker induction of osteoblastic differentiation in all parameters, suggesting a distinct requirement of boiling the two herbs together during preparation.
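The treatment-versus-control comparisons behind these functional readouts (Student's t-test and Dunnett's test, per the statistical methods above) can be sketched as follows; the ALP activities below are invented for illustration, and scipy.stats.dunnett requires SciPy 1.11 or later:

```python
import numpy as np
from scipy import stats  # scipy >= 1.11 provides stats.dunnett

# Invented ALP activities (mU/uL); not the study's measurements.
rng = np.random.default_rng(4)
control = rng.normal(1.0, 0.15, 6)   # vehicle-treated osteoblasts
dbt     = rng.normal(1.6, 0.20, 6)   # DBT-treated
ar_asr  = rng.normal(1.2, 0.20, 6)   # simple mixture of AR + ASR extracts

# Student's t-test for a single two-group contrast.
t_stat, p_t = stats.ttest_ind(dbt, control)
print(f"DBT vs control (t-test): p = {p_t:.4f}")

# Dunnett's test compares every treatment against the shared control
# while controlling the family-wise error rate.
res = stats.dunnett(dbt, ar_asr, control=control)
for name, p_adj in zip(["DBT", "AR+ASR"], res.pvalue):
    print(f"{name:>7} vs control (Dunnett): adjusted p = {p_adj:.4f}")
```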
Proteomics Analysis of Herbal-Extract-Treated Osteoblasts

The proteomics profile of cultured osteoblasts treated with DBT or AR+ASR was generated by LC-MS, i.e., comparing the profile of the two herbs boiled together (DBT) against that of the two herbal extracts simply mixed together (AR+ASR). The data acquired were subjected to sequence searching and label-free quantitative analysis. Label-free quantitation (LFQ) proteomics, based on calculation of the precursor ion intensity, was used to reveal the differentially expressed proteins (DEPs) after treatment with DBT, as compared with AR+ASR. The overall analysis identified 19,466 peptides, mapping to 2800 proteins (Figure 3A). A "volcano plot" of the fold change against the p-value of the LFQ results was constructed: the average spectral number for each comparison was >5 (Figure 3B). A protein was considered significantly differentially expressed when the p-value of the t-test was <0.05 and the fold change was higher/lower than ±1.5-fold. From the results, 318 proteins (168 up-regulated and 150 down-regulated) were significantly differentially expressed in DBT-treated osteoblasts, as compared to the AR+ASR group (Figure 3B). To reveal the difference between the DBT and AR+ASR treatment groups, we subjected the normalized protein abundances to principal components analysis (PCA) and heat map clustering. From the PCA projection, the maximum variability was identified between DBT vs. AR+ASR (Figure 3C), the first component covering 41-56% of the data variance. In parallel, the heat map showed results similar to those of the PCA, with two major clusters separating the protein abundances (Figure 3D).
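The DEP selection rule stated above (t-test p < 0.05 combined with a fold change beyond ±1.5 between the DBT and AR+ASR groups) is easy to express over an abundance matrix. Below is a hedged sketch on synthetic log2 LFQ values; the protein count and replicate number are invented for illustration:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
n_prot, n_rep = 2000, 4

# Synthetic log2 LFQ abundances: n_rep replicates per treatment group.
base = rng.normal(20, 2, size=(n_prot, 1))
dbt  = base + rng.normal(0, 0.3, size=(n_prot, n_rep))
mix  = base + rng.normal(0, 0.3, size=(n_prot, n_rep))
dbt[:100] += 1.0            # spike in some truly regulated proteins

# Per-protein t-test and log2 fold change (DBT vs AR+ASR).
t, p = stats.ttest_ind(dbt, mix, axis=1)
log2fc = dbt.mean(axis=1) - mix.mean(axis=1)

dep = pd.DataFrame({"log2FC": log2fc, "p": p},
                   index=[f"protein_{i}" for i in range(n_prot)])

# DEP criteria from the text: p < 0.05 and |fold change| >= 1.5.
sig = dep[(dep["p"] < 0.05) & (dep["log2FC"].abs() >= np.log2(1.5))]
up, down = (sig["log2FC"] > 0).sum(), (sig["log2FC"] < 0).sum()
print(f"{len(sig)} DEPs ({up} up-regulated, {down} down-regulated)")
```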
The signaling pathways critical for bone development, e.g., cytoskeleton (p = 1.9 × 10^-5), tissue development (p = 4.9 × 10^-7), osteoblast differentiation (p = 0.05) and the Wnt signaling pathway (p = 0.048), were up-regulated after DBT treatment ( Figure 4A). This observation is consistent with our previous reports of enhancement of osteoblastic function triggered by DBT [13]. In addition, DBT-triggered pathways were identified relating to lipid metabolism, e.g., response to lipid (p = 1.5 × 10^-7) and fatty acid metabolism (p = 0.02) ( Figure 4B), and to glucose and amino acid metabolism, e.g., cellular amide metabolic process (p = 2.14 × 10^-7) and cellular component biogenesis (p = 2.3 × 10^-10) ( Figure 4C). Moreover, DBT induced robust RNA metabolism, e.g., structural constituents of ribosomes (p = 4.5 × 10^-6) and rRNA processing (p = 8.87 × 10^-8) ( Figure 4D), and energy metabolism, e.g., oxidoreductase activity (p = 3.6 × 10^-6), proton-transporting ATP synthase activity (p = 1.27 × 10^-4) and mitochondrial membrane (p = 1.02 × 10^-5) ( Figure 4E). Furthermore, other signaling pathways were up-regulated, e.g., calcium ion homeostasis (p = 2.8 × 10^-3), the PI3K-Akt signaling pathway (p = 5 × 10^-3) and the HIF signaling pathway (p = 1.6 × 10^-2) ( Figure 4F). These induced events were not revealed in the treatment of AR+ASR ( Figure 4A-F).

To explore the molecular dynamics of protein-protein interactions among the DEPs, knowledge-based Ingenuity Pathway Analysis and in silico prediction of activated molecules were performed. Proteome trajectories were categorized into four significant clusters of up- and down-regulated DEPs ( Figure 5). The correlation networks were centered on activation of collagen synthesis ( Figure 5A), activation of β-estradiol signaling ( Figure 5B) and activation of ERK1/2 ( Figure 5C). These networks are well documented to be related to bone metabolism, as predicted under treatment with DBT.

Figure 5. Network analysis of proteomics data. The treatment of osteoblasts was as in Figure 1. From the proteomics data, the proteins regulated in response to treatment with DBT or AR+ASR were identified by molecular network analysis. The network was obtained by analyzing the DEPs using Ingenuity IPA. The correlation network was centered on (A) collagen synthesis, (B) β-estradiol signaling and (C) ERK1/2 signaling.
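At its core, enrichment testing of the kind performed above by KOBAS against a background set is a hypergeometric (one-sided Fisher) test. A minimal sketch, assuming the DEP list and each pathway's membership are available as plain Python sets:

```python
from scipy.stats import hypergeom

def pathway_enrichment_p(deps: set, pathway: set, background: set) -> float:
    """Probability of observing at least this many DEPs in the pathway by chance.

    deps       : differentially expressed proteins (the query set)
    pathway    : proteins annotated to one pathway
    background : all quantified proteins (here, the ~2800 identified proteins)
    """
    M = len(background)                    # population size
    K = len(pathway & background)          # pathway members in the population
    n = len(deps & background)             # number of draws (DEPs)
    k = len(deps & pathway & background)   # observed pathway members among DEPs
    # survival function at k-1 gives P(X >= k)
    return hypergeom.sf(k - 1, M, K, n)
```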
Lipidomics Analysis of Herbal-Extract-Treated Osteoblasts

In the LC-MS analysis, the dispersed locations of MS scans along retention time and m/z illustrated the successfully optimized LC gradient and the efficient separation of the acquisition method, in both +ve and -ve modes ( Figure 6A). The total number of features detected by the centWave method, an algorithm identifying peak density and wavelet-based chromatographic peaks in LC-MS data, was used as a measure of the metabolome coverage of this combined strategy. After feature alignment, a total of 5611 and 6690 grouped features were obtained from the +ve and -ve ionization profiles, respectively. In addition, statistical analysis was used to assess the probability of a real difference in metabolites between experimental groups. After QC filtering, 90% of metabolites had >80% confidence; these values reflected the difference between the two groups.

As a quality control procedure, the following steps were routinely performed to ensure system stability and reproducibility of column performance. A mixture of lipid standards was injected as a QC sample before each experiment. The column pressure was recorded and compared to previous runs. The instrument resolution was optimized for different batches. As shown in Figure S2, the QC pool samples clustered in the center of the PCA plot, suggesting that the differentiation of samples resulted from metabolome differences rather than from systematic variance or technical issues.
The cumulative value of Q2 estimates the predictive accuracy of a multivariate statistical model, with a threshold of >0.5; in addition, the difference between Q2 and R2Y should not be larger than 0.3. In the multivariate statistical analysis, the goodness of fit for PLS-DA and OPLS-DA showed acceptable internal cross-validation results, i.e., PLS-DA: R2Xcum = 0.820, R2Ycum = 0.99, Q2cum = 0.998 in +ve mode; R2Xcum = 0.491, R2Ycum = 0.998, Q2cum = 0.962 in -ve mode ( Figure 6B); and OPLS-DA: R2Xcum = 0.591, R2Ycum = 0.99, Q2cum = 0.99 in +ve mode; R2Xcum = 0.612, R2Ycum = 0.997, Q2cum = 0.978 in -ve mode ( Figure 6C). In the OPLS-DA score plot, the cluster of the DBT-treated group was well separated from that of the AR+ASR group ( Figure 6C).

The differentially expressed lipids between the DBT and AR+ASR groups were selected from the S-plot using a variable influence on projection (VIP) threshold of ≥1 for the OPLS models ( Figure 7A). This indicated that the lipid profiles of osteoblasts under the different treatments were very different. The differentially expressed metabolites (DEMs) are listed in a heat map ( Figure 7B). Compared to AR+ASR, treatment with DBT led to a significant increase in ethanolamine, fatty amide, phosphosphingolipid, quinone and sphingolipid, as well as a significant reduction in phosphatidylcholine (PC), phosphatidylethanolamine (PE), phosphatidylserine (PS) and steroid.
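The VIP ≥ 1 selection used above can be reproduced from any fitted PLS model. A sketch using scikit-learn's PLSRegression as a stand-in for the (O)PLS-DA models fitted in SIMCA; this is an approximation, since OPLS-DA itself is not implemented in scikit-learn:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls: PLSRegression) -> np.ndarray:
    """VIP score per X-variable from a fitted PLSRegression (y = 0/1 class labels for PLS-DA)."""
    T = pls.x_scores_    # (n_samples, n_components)
    W = pls.x_weights_   # (n_features, n_components)
    Q = pls.y_loadings_  # (n_targets, n_components)
    p, _ = W.shape
    # sum of squares in y explained by each component
    ss = np.sum(T ** 2, axis=0) * np.sum(Q ** 2, axis=0)
    w_norm = W / np.linalg.norm(W, axis=0)  # normalize weight vectors per component
    return np.sqrt(p * (w_norm ** 2 @ ss) / ss.sum())

# Usage (X = samples x lipid features, y = 0/1 group labels, both hypothetical):
# pls = PLSRegression(n_components=2).fit(X, y)
# selected = np.where(vip_scores(pls) >= 1.0)[0]  # VIP >= 1 selection, as above
```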
Discussion

Bone disease, e.g., osteoporosis, is a threat to human health; the problem is particularly prevalent in women aged over 50. Unfortunately, osteoporosis remains essentially incurable at present [14]. One of the most crucial reasons for this unfavorable situation is the poor efficacy of available drugs. Under this scenario, combined herbal therapies have been proposed to enhance effectiveness and minimize side effects [15]; however, the action mechanisms and the rationale of herbal combination remain unresolved. DBT, a Chinese herbal decoction containing two herbs (AR and ASR), is an excellent example illustrating the synergy of two herbs. Boiling the two herbs together provides better efficacy in inducing osteoblastic proliferation and differentiation. Furthermore, a complete profile of data and an experimental systems biology approach to revealing the functionality of a herbal mixture are presented here. This systems biology approach can overcome the key barriers in revealing the action mechanism of TCM comprising a mixture of herbs.

Bone formation comprises the processes of cellular proliferation, cellular differentiation and organelle development. As revealed here, the proteomic data suggest that the proteins up-regulated by DBT, but not by AR+ASR, are enriched in collagen formation, integrin signaling, cytoskeleton and cartilage condensation. The regulation of these proteins is in accord with rapid cytoskeleton dynamics and bone regeneration [16,17]. In addition, the identified networks relating to bone differentiation and proliferation were enriched by DBT, consistent with our previous report [18].

In acting on cultured osteoblasts, there were many differences in signaling/metabolism between AR+ASR and the two herbs boiled together, i.e., DBT, as identified by systems biology. Central metabolism, a process essential for bone formation [18,19], was triggered by DBT but not by AR+ASR. In line with this notion, Wnt-induced signaling is known to trigger glycolytic flux during osteoblastic differentiation [20]. Supporting this notion, an increase in grip strength and swimming time was revealed in DBT-treated rats, i.e., DBT had a positive influence in promoting energy metabolism [21]. Moreover, glycolytic flux and capacity were markedly enhanced by DBT in cardiomyocytes [22]. Meanwhile, DBT could induce proton-transporting ATP synthase activity and alter the morphology of the mitochondrial membrane in cultured osteoblasts, as compared to AR+ASR [22]. As a hypothesis, DBT is a potential anabolic agent for clinical disorders of substrate-availability-based osteoporosis.

Moreover, several of the pathways identified here have never been reported before under herbal treatment. For example, the metabolism of amino acids, induced by DBT but not by AR+ASR, is known to catabolize skeletal muscle cells to produce energy during muscle regeneration [23]. From 1H-NMR metabolic profiling, asparagine was found in a significantly higher amount in DBT-treated cultures than in AR- or ASR-treated cultures [24]. Asparagine is a key metabolite in the TCA cycle [25]. Therefore, we believe that asparagine may be a key compound in activating energy metabolism in the case of DBT.
In parallel, bone cells respond to changes in the microenvironment by altering their profile of gene expression. HIF-1α is one of the critical proteins in this situation; it binds to the hypoxia response element to regulate the expression of EPO, VEGF and PDGF, growth factors related to bone differentiation [26]. In line with this notion, DBT was shown to regulate HIF signaling in cultured HEK293T cells [3].

The importance of the membrane bilayer in promoting the growth and differentiation of osteoblasts has been proposed [27,28]. The lipid profile of cultures after DBT treatment was distinct from that of AR+ASR. Interestingly, we found that the amount of acetylcarnitine was increased after DBT treatment. Carnitine and fatty acids are essential molecules for energy metabolism, conveying carbon sources via acetyl-CoA to the citric acid cycle; this occurs more robustly during inflammation [29]. In addition, acetylcarnitine, triggered by Wnt signaling, is utilized in β-oxidation in osteoblasts [30]. Thus, fatty acid oxidation in osteoblasts is required for bone acquisition in a sex- and diet-dependent manner [30].

In TCM, preparing a herbal decoction in boiling water is the simplest and most common practice [31], and this approach has been used for thousands of years without change. DBT is an excellent example showing the requirement of boiling two herbs together to achieve maximal pharmacological efficacy. Supporting this synergy of herbal mixtures, other TCM formulae have been demonstrated to have similar characteristics. Kai-Xin-San, an ancient herbal decoction containing Ginseng Radix et Rhizoma, Polygalae Radix, Acori Tatarinowii Rhizoma and Poria, can trigger the secretion of neurotrophic factors in cultured astrocytes; this activity is more robust for the herbal mixture than for a single herb [32]. In TCM, the use of water as a means of herbal processing could be due to the following reasons: (i) water is an excellent solvent for most phytochemicals; and (ii) dissolved phytochemicals are more easily absorbed in the gut. Although boiling can increase the solubility of phytochemicals, many unknown chemical changes could happen during the process [33]. In DBT, the AR-derived compounds, including formononetin and calycosin, have been demonstrated to play crucial roles in DBT's functions [34]; these chemicals were shown to be present in higher amounts in DBT than in a single herbal extract of AR or ASR. One hypothesis is that calycosin-7-O-β-D-glucoside and ononin are transformed into formononetin and calycosin during the boiling process [32]. In line with our results, calycosin is known to stimulate osteogenic differentiation via activation of IGF1R and PI3K/Akt signaling [34], and formononetin stimulates osteoblastic differentiation via the p38 MAPK pathway [35]. Thus, their signaling pathways match those identified in our DBT-mediated omics analyses.

Conclusions

Here, we have demonstrated an in-depth and integrated pipeline revealing the requirement of boiling two herbs together to enhance the osteoblastic function of DBT. First, the integration of multi-omics (proteomics and lipidomics) enables an exploration of molecular perturbation at different levels of osteoblastic development. Second, the osteoblastic function of DBT is related to glycolysis, energy metabolism and lipid metabolism. Third, boiling the herbs together allows the different compounds within DBT to act in concert, maximizing their functions.
As a result, boiling the herbs together has a crucial role in controlling the interactions of DBT components. Our omics approach could serve as a valuable framework for future studies characterizing the impact of a broad array of factors in herbal medicine. In addition, the investigation of the action mechanisms of multi-target drugs, or combinational therapeutics, should be important for the modernization of herbal medicine.
The role of laboratory diagnostics in emerging viral infections: the example of the Middle East respiratory syndrome epidemic

Rapidly emerging infectious disease outbreaks place a great strain on laboratories to develop and implement sensitive and specific diagnostic tests for patient management and infection control in a timely manner. Furthermore, laboratories also play a role in real-time zoonotic, environmental, and epidemiological investigations to identify the ultimate source of the epidemic, facilitating measures to eventually control the outbreak. Each assay modality has unique pros and cons; therefore, incorporation of a battery of tests using traditional culture-based, molecular and serological diagnostics into diagnostic algorithms is often required. As such, laboratories face challenges in assay development, test evaluation, and subsequent quality assurance. In this review, we describe the different testing modalities available for the ongoing Middle East respiratory syndrome (MERS) epidemic, including cell culture, nucleic acid amplification, antigen detection, and antibody detection assays. Applications of such tests in both acute clinical and epidemiological investigation settings are highlighted. Using the MERS epidemic as an example, we illustrate the various challenges faced by laboratories in test development and implementation in the setting of a rapidly emerging infectious disease. Future directions in the diagnosis of MERS and other emerging infectious disease investigations are also highlighted.

Introduction

The ongoing threat of emerging viral infections to global public health is well evidenced by the recent epidemics caused by Middle East respiratory syndrome coronavirus (MERS-CoV), avian influenza A viruses, Ebola virus, and Zika virus (To et al., 2014; Chan et al., 2015b, 2015c). Prompt and accurate diagnosis is the first step in the successful control of any of these epidemics, and is particularly important for emerging viral infections because they may spread rapidly and may be associated with severe complications (Sridhar et al., 2015). Definitive diagnosis of these emerging viral infections usually requires laboratory confirmation because their clinical features and epidemiological risk factors may be similar to those of other related infections. The characteristics of an ideal laboratory assay for diagnosing these infections include high sensitivity, high specificity, short turn-around time, low cost, low expertise and facility requirements, suitability for use in different specimen types, availability for point-of-care testing (POCT), and capability to quantify viral load. Unfortunately, despite recent advances in the field, no single laboratory diagnostic test has all of these characteristics. It is therefore important to understand the clinical applications of the various types of laboratory diagnostic assays and their roles in the control of emerging viral epidemics. In this review, we use the MERS epidemic as an example to illustrate the advantages, disadvantages, practical uses, and impact on epidemic control of the major types of laboratory diagnostic assays available for emerging viral infections.

Overview of MERS-CoV and MERS

Middle East respiratory syndrome coronavirus (MERS-CoV) is a novel lineage C betacoronavirus first isolated from a Saudi Arabian man with severe acute community-acquired pneumonia and acute kidney injury in 2012 (Zaki et al., 2012).
As of 5 December 2016, 1864 human cases of MERS, including 659 fatalities, have been reported by the World Health Organization (http://who.int/emergencies/mers-cov/en/). The epidemic has continued to expand since 2012, with most human cases of MERS being reported in the Middle East as a result of animal-to-human transmissions from infected animal reservoirs (dromedary camels and possibly bats in the region) and person-to-person transmissions in healthcare-associated outbreaks (Reusken et al., 2013a; Haagmans et al., 2014; Wang et al., 2014; Chan et al., 2015b). Moreover, sporadic cases and clusters of human MERS infection have also occurred in other areas with imported cases of MERS, such as the Republic of Korea.

The clinical presentation of MERS may range from asymptomatic infection detected during contact tracing in outbreak investigations to rapidly fatal disease (Chan et al., 2015b). The disease is especially severe in elderly men with co-morbidities (Assiri et al., 2013a). Severe MERS is characterized by rapidly progressive acute pneumonia with fever and respiratory failure not responsive to broad-spectrum antibacterial treatment, and may be associated with extrapulmonary manifestations, including acute kidney injury, hepatic dysfunction, gastrointestinal symptoms, and seizures (Chan et al., 2012, 2013c; Zaki et al., 2012; Assiri et al., 2013a; Arabi et al., 2015). A number of repurposed drugs, antiviral peptides, and monoclonal antibodies have demonstrated anti-MERS-CoV activity in vitro and/or in animal models, but none of them have yet been proven effective in randomized controlled trials (Chan et al., 2013b, 2015d; Gao et al., 2013; Lu et al., 2014a; Jiang et al., 2014; Tang et al., 2014; Ying et al., 2014). Various vaccines have been developed and some are undergoing clinical trials and/or testing in camels (Uyeki et al., 2016).

Specimen collection: what, when, and how?

Like all other infectious diseases, appropriate specimen collection is the most important step in the laboratory diagnosis of MERS and requires knowledge of viral kinetics in various specimen types in relation to time since symptom onset. Like SARS-CoV, MERS-CoV viral loads in respiratory specimens peak in the second week after symptom onset (Oh et al., 2016). Therefore, a patient testing 'negative' for MERS soon after symptom onset should undergo repeated testing if the epidemiological history is suggestive of MERS. Lower respiratory tract specimens (including tracheal aspirates, bronchoalveolar lavage fluid, and well-collected sputum specimens) contain the highest viral RNA loads and should be collected whenever possible (Corman et al., 2016; Oh et al., 2016). However, invasive procedures to obtain lower respiratory specimens may not always be feasible, especially in patients with mild illness. Upper respiratory tract specimens (nasopharyngeal swabs, oropharyngeal swabs, and/or nasopharyngeal aspirates) should be taken in such cases, with pooling of swabs in a single container to maximize RNA load, as viral loads in the upper respiratory tract are consistently lower than in the lower respiratory tract (Memish et al., 2014b; Corman et al., 2016). Risk assessment regarding requisite transmission-based precautions, patient placement, and personal protective equipment during specimen collection should be conducted due to the possibility of aerosol generation.
Specimens should be sent to the laboratory as soon as possible in viral transport medium containing a balanced salt solution, bovine serum albumin, pH buffer, phenol red, and antimicrobials. If specimen processing is likely to be delayed, storage in an ultra-low temperature freezer (-80°C) is recommended. Extrapulmonary specimens that have been reported to contain detectable MERS-CoV RNA include blood, stool, and urine (Drosten et al., 2013; Poissy et al., 2014; Corman et al., 2016). These specimens may provide further opportunities for MERS diagnosis when lower respiratory tract specimens are unavailable. However, viral loads in these specimen types are generally lower than in the lower respiratory tract, although there are reported exceptions (Abroug et al., 2014). Detection of MERS-CoV in stool specimens may have infection control implications, while detection of MERS-CoV RNA in whole blood or serum in particular may be a prognostic marker of poor outcome (Guery et al., 2013; Kim et al., 2016c). For serology testing, acute and convalescent serum specimens should ideally be collected 14 to 21 days apart to enable documentation of seroconversion or a 4-fold rise in neutralizing antibody titer. When no serum specimen from the acute phase is available, a convalescent phase serum specimen may also be used to establish a retrospective diagnosis with a panel of antibody tests (see the Serology section below). The choice of investigation will depend on the specimen type, timing post-symptom onset, and local test availability. The advantages and shortcomings of the different tests are detailed below and summarized in Table 1.

Viral culture: an important tool for virus discovery, pathogenesis studies, and evaluation of countermeasures

Isolation of infectious MERS-CoV in cell culture inoculated with the patient's bodily fluids and/or tissues establishes the diagnosis of MERS (Zaki et al., 2012). Although the routine use of viral culture for diagnosing MERS in standard clinical microbiology laboratories is limited by the method's slower turn-around time relative to molecular diagnostics and its requirement for a biosafety level 3 facility, this time-tested diagnostic tool has played important roles in the discovery of MERS-CoV and in studies of its pathogenesis and of candidate antivirals. Unlike the other human-pathogenic CoVs, which are notoriously difficult to culture in cell lines, MERS-CoV replicates rapidly with induction of prominent cytopathic effects in a broad range of cell lines (Muller et al., 2012; Chan et al., 2013a). MERS-CoV produces focal cytopathic effects with rounded refractile cells in susceptible cell lines within 5 days after inoculation during primary isolation (Chan et al., 2013a). The spread of these changes throughout the cell monolayers leads to rounding and detachment of cells within 1 to 2 days. Syncytium formation caused by the fusion activity of the MERS-CoV spike (S) protein may be seen in Calu-3, Caco-2, Huh-7, and LLC-MK2 cell lines (Zaki et al., 2012; Chan et al., 2013a; de Wilde et al., 2013). These rapid and prominent cytopathic effects allowed Zaki and colleagues to successfully isolate the first MERS-CoV strain in cell lines commonly used in clinical virology laboratories (Vero and LLC-MK2) shortly after inoculation of the index patient's sputum sample into these cell lines (Zaki et al., 2012).
A recent comparison between Vero and Caco-2 cell lines for the isolation of MERS-CoV showed that the isolation rate of MERS-CoV was significantly higher in Caco-2 than in Vero cells (45.5% vs 19.1%, P = 0.013) (Muth et al., 2015). The isolation rate of MERS-CoV in cell culture was higher in respiratory samples with higher viral RNA loads (66.7% vs 5.9% in samples with ≥10^7 copies/ml and <10^7 copies/ml, respectively), in lower respiratory tract samples (0.0% in nasopharyngeal aspirates, 33.3% in sputa, and 48.6% in endotracheal aspirates), and in samples collected earlier after diagnosis (58.6% vs 22.2% in samples collected within and after 5 days of diagnosis, respectively). These factors should be considered in laboratories attempting to isolate MERS-CoV from clinical specimens.

The broad tissue tropism of MERS-CoV in cell lines of different human organ tissue origins corroborates the protean clinical manifestations of MERS in humans. The high viral loads of MERS-CoV in human lung, kidney, colonic, and hepatic cell lines correlate with the predominantly lower respiratory tract involvement and the extrapulmonary manifestations of acute kidney injury, diarrhea, and hepatic dysfunction, respectively (Chan et al., 2015b). Most of these in vitro observations were subsequently validated in ex vivo organ tissue cultures and/or animal models (Chan et al., 2013f; Zhou et al., 2014; Chu et al., 2016; Yeung et al., 2016). The replication of MERS-CoV in monocytes, dendritic cells, and T lymphocytes, with aberrant induction of inflammatory cytokines/chemokines and activation of extrinsic and intrinsic apoptosis pathways, partly explains the pathogenesis of virus dissemination, cytokine/chemokine storm, and lymphopenia in severe MERS (Lau et al., 2013a; Chu et al., 2014, 2016; Zhou et al., 2014, 2015; Scheuplein et al., 2015; Tynell et al., 2016). Moreover, MERS-CoV could be isolated in numerous non-human cell lines, including those of non-human primate, camel, and bat origins (Muller et al., 2012; Chan et al., 2013a; Eckerle et al., 2014). In contrast, cell lines of mouse and rat origins were not susceptible (Chan et al., 2013a). These in vitro biological characteristics of MERS-CoV provided insights, at an early stage of the epidemic, into the possible clinical manifestations, animal reservoirs, and animal species susceptible to MERS-CoV infection for animal model development (de Wit et al., 2013a; Munster et al., 2013; Coleman et al., 2014; Falzarano et al., 2014; Yao et al., 2014; Zhao et al., 2014; Agrawal et al., 2015; Chan et al., 2015d; Haagmans et al., 2015).

Viral culture of MERS-CoV also facilitated the identification and evaluation of anti-MERS-CoV drugs. Screening of potential candidate anti-MERS-CoV agents in chemical libraries consisting of large numbers of clinically approved drugs, and validation of their in vitro anti-MERS-CoV activity in cytopathic effect inhibition, viral load reduction, and plaque reduction assays using cell culture systems, successfully identified repurposed drugs, such as type I interferons and lopinavir-ritonavir, for further testing in animal models (Chan et al., 2013b; Dyall et al., 2014; de Wilde et al., 2014). Similarly, the anti-MERS-CoV effects of newly designed antiviral peptides and monoclonal antibodies were also validated in cell culture (Gao et al., 2013; Jiang et al., 2014; Lu et al., 2014a; Tang et al., 2014; Ying et al., 2014).
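The Caco-2 versus Vero comparison reported at the start of this section (45.5% vs 19.1%, P = 0.013) is a standard two-proportion test. A sketch with Fisher's exact test; the counts below are hypothetical, chosen only to match the reported proportions, and the published denominators and test statistic may differ:

```python
from scipy.stats import fisher_exact

# Hypothetical counts reproducing the reported isolation rates;
# Muth et al. (2015) may have used different denominators.
caco2 = (15, 18)  # (isolations, failures): 15/33 = 45.5%
vero = (13, 55)   # 13/68 = 19.1%

odds_ratio, p = fisher_exact([[caco2[0], caco2[1]],
                              [vero[0], vero[1]]])
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```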
The availability of viral culture for MERS-CoV in reference research laboratories is crucial for further deepening our understanding of MERS and for finding countermeasures against it.

Nucleic acid amplification tests: the platinum standard for MERS diagnosis

Given the limitations of viral culture, more rapid and readily available laboratory assays were required for diagnosing MERS. Specific primers and a standardized laboratory protocol were quickly developed after the successful isolation of the first MERS-CoV strain and the sequencing of its complete genome early in the epidemic, in September 2012 (Palm et al., 2012). The positive-sense, single-stranded RNA genome of MERS-CoV has a size of approximately 30 kb and is arranged in the order 5'-replicase [open reading frame (ORF) 1a/b]-structural proteins [S-envelope (E)-membrane (M)-nucleocapsid (N)]-poly(A)-3' (Woo et al., 2012; Chan et al., 2013d; Lau et al., 2013b). A number of monoplex reverse transcription-polymerase chain reaction (RT-PCR) assays using primers that target conserved gene regions of the MERS-CoV genome were developed and evaluated as screening and/or confirmatory tests. These gene targets include the leader sequence at the 5'-untranslated region, and the ORF1a, ORF1b, RNA-dependent RNA polymerase (RdRp), ORF4a, upE (upstream of the envelope (E) gene), M, and N gene regions (Corman et al., 2012a, 2012b; Lu et al., 2014b; Chan et al., 2015a; Douglas et al., 2015). The most commonly adopted diagnostic protocol utilizes the upE assay as a screening test, followed by either the ORF1a or the ORF1b assay for confirmation. In general, these assays are highly sensitive and specific, with technical limits of detection ranging from 1.6 to 263.0 RNA copies/reaction (Chan et al., 2015b; Kim et al., 2016b). The limits of detection appear to be lowest for the assays targeting the abundantly expressed leader sequence at the 5'-untranslated region and the N gene, although clinical comparisons among the various assays have not been reported. Most of these assays were evaluated using clinical specimens, including respiratory (nasopharyngeal aspirate, sputum, endotracheal aspirate, bronchoalveolar lavage fluid, and/or nose and mouth exudates) and/or extrapulmonary specimens (serum, urine, and/or stool). A number of regional and international external quality assessments showed that the majority (>80%) of participating laboratories were capable of detecting MERS-CoV RNA by RT-PCR assays with high accuracy, but false-negative results might occur in a minority of samples with low viral loads (Pas et al., 2015; Seong et al., 2016; Zhang et al., 2016).

Recent advances in molecular diagnostics for MERS include the development of commercial monoplex and multiplex RT-PCR kits, and other novel non-PCR-based diagnostics. Most of the commercial assays utilize primers that target the upE and/or ORF1a gene regions (Kim et al., 2016b) (http://www.fast-trackdiagnostics.com/products/ftd-mers-cov/; http://eng.bioneer.com/diagnostic/HumanMDxkits/Accupower-MERS-CoV-Multiplexoverview.aspx; http://seegene.com/neo/en/products/respiratory/anyplex_mers_cov.php). Internal controls of these assays include primers against the human glyceraldehyde 3-phosphate dehydrogenase gene, a housekeeping gene found in clinical specimens, as well as spiked RNA, tobacco mosaic virus DNA, or phocine herpesvirus DNA spiked into the PCR mixtures.
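The commonly adopted two-target protocol described above (upE screening followed by ORF1a or ORF1b confirmation) reduces to a simple decision rule. A schematic sketch; the handling of unconfirmed screening positives is an illustrative assumption rather than the exact WHO wording:

```python
from enum import Enum
from typing import Optional

class Result(Enum):
    NEGATIVE = "MERS-CoV RNA not detected"
    CONFIRMED = "MERS-CoV RNA detected"
    INDETERMINATE = "indeterminate: repeat on a new (ideally lower respiratory) specimen"

def two_target_rtpcr(upe_positive: bool,
                     orf1a_positive: Optional[bool] = None,
                     orf1b_positive: Optional[bool] = None) -> Result:
    """upE screen followed by ORF1a/ORF1b confirmation, per the protocol described above."""
    if not upe_positive:
        # Screening negative; specimens taken early after onset may warrant repeat sampling.
        return Result.NEGATIVE
    if orf1a_positive or orf1b_positive:
        # Screening target plus at least one confirmatory target positive.
        return Result.CONFIRMED
    # upE positive but no confirmatory target detected.
    return Result.INDETERMINATE
```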
Analytical and clinical evaluations of these assays showed that they generally have high sensitivity and specificity, but false-negative or invalid results may occur in specimens containing high levels of PCR inhibitors, such as sputum specimens (Kim et al., 2016b). Sputum homogenization prior to nucleic acid extraction, with proteinase K and DNase treatment, may be more effective than either phosphate-buffered saline treatment or N-acetyl-L-cysteine and sodium citrate treatment for improving the sensitivity of these assays. The performance of multiplex assays may be improved by the use of self-avoiding molecular recognition system-artificially expanded genetic information system (SAMRS-AEGIS) primers to reduce non-specific reactions generated among the multiple primers in the same assay (Glushakova et al., 2015; Yaren et al., 2016). The main advantage of these commercial diagnostic kits is their ease of use in laboratories without the technical expertise to design and perform in-house-developed RT-PCR assays. Their major disadvantage is their relatively high cost, which may limit their use in resource-limited regions. Other non-PCR-based assays for MERS include reverse transcription loop-mediated isothermal amplification (RT-LAMP) and reverse transcription isothermal recombinase polymerase amplification (RT-RPA) (Abd El Wahed et al., 2013; Shirato et al., 2014; Bhadra et al., 2015). These isothermal assays generally have short incubation times and are highly sensitive and specific. They are simple to perform and do not require thermocyclers, and are therefore especially suitable for POCT in resource-limited areas where the expertise and equipment for RT-PCR are not readily available.

In addition to establishing the diagnosis, nucleic acid amplification tests have been applied to fulfill a number of other important purposes in the MERS epidemic. Firstly, they were used to investigate the animal reservoir of MERS-CoV and established the link between dromedary camels and human cases of MERS (Haagmans et al., 2014; Lau et al., 2016). The higher rate of detection of MERS-CoV RNA in the nasal and/or rectal swabs of juvenile camels than in those of adult camels further helped to identify juvenile camels as an important source of camel-to-human transmission of MERS (Alagaili et al., 2014; Wernery et al., 2015). Secondly, RT-PCR is commonly employed in contact tracing during healthcare-associated outbreaks of MERS (Assiri et al., 2013b; Memish et al., 2013; Drosten et al., 2014; Oboho et al., 2015). During these outbreak investigations, it was recognized that asymptomatic infection might occur in young and previously healthy persons and that they might serve as a source of further person-to-person transmission of MERS (Memish et al., 2014a; Omrani et al., 2013). Thirdly, serial testing of different clinical samples of MERS patients by RT-PCR identified the shedding patterns of the virus in respiratory and non-respiratory samples. Notably, viral RNA was detected in 14.6% of stool and 2.4% of urine samples, suggesting that these clinical samples may also be important in the spread of MERS. Fourthly, viral RNA load was found to be a predictive factor for severe disease. A high MERS-CoV load in lower respiratory tract specimens was predictive of progression to pneumonia, and blood MERS-CoV RNA positivity at initial diagnosis was associated with worse clinical outcomes in terms of a higher rate of requiring mechanical ventilation or extracorporeal membrane oxygenation, as well as death (P < 0.05).
Fifthly, RT-PCR is commonly employed in in vitro and in vivo antiviral and vaccine evaluation studies (Zumla et al., 2016). Finally, RT-PCR and sequencing are important for surveying molecular epidemiological changes that may be associated with virus adaptation for efficient person-to-person transmission.

Antigen detection assays: potential for POCT

Molecular diagnostic assays have excellent sensitivity for the diagnosis of MERS-CoV infection. However, these assays require dedicated facilities, expensive equipment, and highly trained personnel, which places a great strain on laboratory infrastructure in MERS-endemic areas. Therefore, there has been considerable interest in developing MERS-CoV antigen detection assays, which offer greater convenience than PCR-based diagnosis. Specific monoclonal antibodies targeting MERS-CoV proteins are used to demonstrate evidence of MERS-CoV in infected tissues (de Wit et al., 2013b). Such antibodies have also been used to detect the MERS-CoV N protein in respiratory specimens, as this antigen is abundantly expressed during the acute phase of illness. Four assays for detection of the MERS-CoV N protein have been described to date (Song et al., 2015; Yamaoka et al., 2016). The peptides used to immunize mice for raising monoclonal antibodies were either in the form of recombinant protein, produced by cloning the corresponding DNA fragment into E. coli and subsequently purifying the protein, or prepared using a wheat germ extract-based cell-free expression system. Researchers used either a single long peptide or a pool of smaller synthetic peptides spanning the length of the MERS-CoV N protein (Song et al., 2015; Yamaoka et al., 2016). Monoclonal antibodies that produced favorable signal-to-noise ratios in a recombinant MERS-CoV N protein immunoassay were selected for incorporation into either an enzyme-linked immunosorbent assay (ELISA) (Yamaoka et al., 2016) or a POCT format (Chen et al., 2016).

The analytical sensitivity of antigen detection assays was measured using lower limit-of-detection (LOD) experiments. We have previously described an ELISA assay with an LOD of 10 TCID50/0.1 ml, using simulated NPA specimens seeded with serially diluted MERS-CoV cultures. A lateral flow immunoassay (LFIA), also described by us, had an LOD of at least 10^3.7 TCID50/ml of MERS-CoV. An immunochromatographic test (ICT) developed by Song et al. had an LOD of 1.5 ng/ml of recombinant MERS-CoV N protein, while the ELISA assay developed by Yamaoka et al. could detect 0.625 ng/ml of N protein (Yamaoka et al., 2016). Due to careful selection of monoclonal antibodies, the published assays were quite specific for MERS-CoV and did not cross-react with other animal or human coronaviruses.

The main advantage of antigen detection assays is their ease of use and rapid results, especially when adapted to a POCT format. POCTs can be performed entirely within a biosafety cabinet, have built-in quality control, and require minimal training of laboratory personnel. They offer a rapid and specific 'rule-in' option for clinicians while PCR results are pending; PCR tests may require sending specimens to reference laboratories, with long turnaround times in regions where laboratory infrastructure is not well developed. However, there are several obstacles to the application of antigen detection for the diagnosis of MERS in humans. Firstly, such assays have not been validated in clinical specimens from suspected MERS cases in endemic areas.
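When head-to-head validation against the RT-PCR reference is performed (as in the camel studies discussed below), assay performance reduces to a 2×2 table. A small helper computing sensitivity and specificity with Wilson score intervals; the counts in the usage note are hypothetical:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)

def assay_performance(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity/specificity of an antigen assay versus the RT-PCR reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": (sens, wilson_ci(tp, tp + fn)),
            "specificity": (spec, wilson_ci(tn, tn + fp))}

# Usage with hypothetical camel-swab counts:
# assay_performance(tp=31, fp=2, fn=2, tn=498)
```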
There has been no head-to-head comparison with PCR, which is the most commonly employed 'platinum standard' diagnostic method in MERS patients. From experience with other respiratory viruses, the sensitivity is expected to be lower than that of PCR tests, which may provide false reassurance and lapses in infection control if false-negative results are interpreted without a confirmatory PCR assay (Chan et al., 2002). Furthermore, the antigen detection POCTs and ELISAs designed to date have only been evaluated using nasopharyngeal specimens; it is unknown whether such assays can be used on lower respiratory tract specimens, which are often observed to contain higher viral loads than NPA (Drosten et al., 2013; Guery et al., 2013) and would theoretically be even more suitable for antigen detection assays. Lastly, antigen detection assays do not feature in WHO or CDC algorithms for the diagnosis of MERS, limiting interest in developing commercial kits up to this stage (http://www.cdc.gov/coronavirus/mers/lab/lab-testing.html; http://apps.who.int/iris/bitstream/10665/176982/1/WHO_MERS_LAB_15.1_eng.pdf).

Antigen detection assays may have a potential role in the epidemiological surveillance of camels. The ICT developed by Song et al. was validated using camel nasal swabs, demonstrating a high sensitivity (93.9%) and specificity (99.6%) compared to upE and ORF1a real-time RT-PCR. Our LFIA also showed a moderately high sensitivity of 81% with excellent specificity compared to real-time RT-PCR on dromedary camel respiratory specimens. These studies indicate that such assays can be used for conveniently detecting infected camels in rural endemic areas.

Antibody detection assays: retrospective diagnosis and contact tracing

Data regarding the kinetics of the antibody response in MERS patients are steadily accumulating. There appears to be considerable person-to-person variation in the robustness and timing of the antibody response but, as with other viral infections, an initial IgM response is followed by rising IgG titers, usually detectable 2 to 3 weeks after symptom onset (Drosten et al., 2013; Chan et al., 2015b; Park et al., 2015b). Longitudinal serology of one MERS-infected patient in China showed that anti-S ELISA antibodies rose before anti-N antibodies (Wang et al., 2016a). The differential kinetics of anti-N and anti-S antibodies in the setting of MERS serodiagnosis requires further study. Antibodies remain detectable long after clearance of infection; neutralizing antibodies were persistently detectable in 86% of patients up to 34 months after the Jordanian MERS outbreak of 2012 (Payne et al., 2016).

A wide variety of serological assays have been described for the detection of MERS-CoV-specific antibodies, with variations in assay format, the antigen used, and the antibody subtype detected. MERS-specific IgM detection does not feature in the diagnostic algorithms promulgated by the WHO or the US CDC (http://www.cdc.gov/coronavirus/mers/lab/lab-testing.html; http://apps.who.int/iris/bitstream/10665/176982/1/WHO_MERS_LAB_15.1_eng.pdf). Although IgM titers theoretically rise before IgG, experience with sera from SARS and MERS patients suggests that the time lag between the two may be too short to be of much clinical value (Woo et al., 2004; Meyer et al., 2014a; Wang et al., 2016b). Furthermore, IgM assays are potentially prone to non-specific positivity and cross-reactivity with other coronaviruses, requiring tedious specimen preparation and quality control (Buchholz et al., 2013).
The additional value of testing for MERS-CoV IgM in patients presenting with acute illness requires further elucidation. In view of these factors, most of the serological assays described for MERS diagnosis aim to detect either total immunoglobulin or IgG.

A major variable in serological assays for MERS-CoV is the source of the antigen. For reference or research laboratories with BSL-3 and cell culture facilities, MERS-CoV-infected cells are the most convenient source of antigen. The cell lysate may be spotted on glass slides, microtiter plates, or Western blot strips for downstream serological assays. Although convenient, such assays require a reliable means of inactivating live virus within the culture extracts. Furthermore, such assays have also been shown to cross-react with other coronaviruses, as infected cells express a wide range of viral antigens, some of which are likely to be conserved across different coronavirus subgroups and even genera (Aburizaiza et al., 2014). For SARS, it has been shown that Western blotting with whole virus lysates may enable differentiation of genuine seropatterns from false-positives; however, this is a tedious procedure requiring technical expertise and well-characterized control sera. In view of these shortcomings, recombinant antigens have been used for ELISA, immunofluorescence (IFA), Western blot, protein microarray, and even pseudoparticle neutralization assays (Corman et al., 2012b; Perera et al., 2013; Reusken et al., 2013b; Chan et al., 2015b; Park et al., 2015a; Wang et al., 2016b). Using recombinant antigens has two major advantages: firstly, biosafety during assay production is not a major concern; and secondly, this method enables the selection of immunogenic and MERS-CoV-specific antigens to maximize assay specificity and sensitivity.

Both the viral N and S proteins are abundantly expressed immunogenic antigens stimulating antibody production (Meyer et al., 2014a). In SARS patients, there is evidence that anti-N antibodies rise before anti-S antibodies (Woo et al., 2005). However, convalescent sera tend to react against both antigens with moderate to high sensitivity. The difference between the two antigens lies in their specificity. Although the N protein is easier to clone and purify (being smaller with fewer glycosylation sites), it is more conserved within coronavirus subgroups than the S protein, which is the major target for neutralizing antibodies (Meyer et al., 2014a). Therefore, recombinant MERS-CoV N protein-based serological assays are expected to have higher rates of cross-reactivity than anti-S detection assays. Indeed, N epitope cross-reactivity may even extend across coronavirus genera: a SARS-CoV N protein-based ELISA showed a propensity to produce false-positive results when tested against convalescent sera from patients recovering from HCoV-229E and even HCoV-OC43 infections (Woo et al., 2005). However, there is evidence to suggest that even sections of the S2 subunit of the S protein can induce cross-reactive antibodies against different betacoronaviruses (Chan et al., 2013e). Further studies are required to identify optimally specific recombinant antigens for MERS serological tests.

Many of the classical serological assay formats have been applied to MERS, with modifications in the interest of assay specificity and biosafety. The principles, advantages, and disadvantages of each assay type are summarized in Table 2. Commercial IgG ELISA assays are now available and provide a fast and sensitive screening tool.
Positive results by ELISA require confirmation by a more specific assay, either IFA or a gold-standard neutralization assay. Detailed evaluations of many of the published assays have not been possible because of a lack of well-characterized control sera. While the specificity of most assays can be assessed using sera of patients in non-endemic regions, diagnostic sensitivity and comparison-of-methods data are still difficult to come by. In a recent study, Park et al. compared the plaque reduction neutralization test (PRNT), microneutralization, and pseudoparticle neutralization tests using sera of 17 patients from the South Korean MERS outbreak (Park et al., 2015a). They found that the different neutralization test formats had excellent correlation with each other when testing convalescent clinical specimens.

In the first ten days after symptom onset, anti-MERS antibodies are usually undetectable irrespective of assay format (Park et al., 2015a, 2015b). Therefore, serology testing is not useful in acute MERS, although the WHO includes seroconversion (confirmed by neutralization) in paired sera taken at least 14 days apart as one of the diagnostic criteria for a confirmed case (http://apps.who.int/iris/bitstream/10665/176982/1/WHO_MERS_LAB_15.1_eng.pdf). The role of MERS serology, therefore, is threefold: firstly, to provisionally diagnose mild or asymptomatic MERS cases who present late, with only convalescent sera available; secondly, for serosurveillance of at-risk individuals, either as part of an outbreak investigation or in abattoirs where exposure to zoonotic sources may have occurred (Muller et al., 2015; Kim et al., 2016a); and thirdly, for seroepidemiological studies of zoonotic sources to identify affected camel herds. Indeed, demonstration of MERS-CoV neutralizing antibodies in camel sera is one of the lines of evidence for zoonotic transmission of MERS from camels to humans (Meyer et al., 2014b).

In view of the deficiencies of serological assays outlined in Table 2, most authorities recommend using at least two different assays for specific serodiagnosis of MERS. The WHO recommends using either an ELISA- or IFA-based screening assay followed by confirmatory testing of positive sera using a specific neutralization assay (http://apps.who.int/iris/bitstream/10665/176982/1/WHO_MERS_LAB_15.1_eng.pdf). The CDC also adopts a two-phase approach, with the first test being an IgG ELISA followed by confirmatory testing using IFA (http://www.cdc.gov/coronavirus/mers/lab/lab-testing.html). Microneutralization is performed on ELISA-positive, IFA-indeterminate sera for final resolution.

Conclusions and future perspectives

The experience with MERS and other recent transnational epidemics shows that emerging infectious diseases will continue to be a major challenge in the future. Rising human populations force animals and humans into ever closer proximity, increasing the risk of zoonotic transmission of novel infectious diseases. Overcrowding facilitates human-to-human transmission of these infections, both in the community and in healthcare settings. High volumes of air travel enable rapid transport of infected humans to non-endemic regions, leading to major outbreaks. All these features were clearly illustrated in the recent MERS epidemic. The key to combating these threats is information sharing, constant vigilance, and efficient infection control.
The diagnostic laboratory plays an increasingly important role in the early detection of infected patients, enabling prompt initiation of infection control measures and appropriate patient management. However, this role is a complex one, requiring a battery of tests and diagnostic algorithms, as highlighted in this review. The laboratory also faces several challenges in this role, related to assay validation, reagent shortages, a lack of standard materials, protocol standardization, and quality assurance. The creation of regional and global laboratory networks, under the aegis of the WHO or other organizations, will be crucial to overcome these difficulties. Pooling of positive control material for serological and molecular assays in biobanks would also be valuable. This is particularly important with the increasing introduction of massively multiplexed and point-of-care tests for the diagnosis of emerging infections, a growing trend in the microbiology laboratory.
XMM-Newton observation of the highly magnetised accreting pulsar Swift J045106.8-694803: Evidence of a hot thermal excess

Several persistent, low luminosity (L_X ∼ 10^{34} erg s^{-1}), long spin period (P > 100 s) High Mass X-ray Binaries have been reported with blackbody components with temperatures > 1 keV. These hot thermal excesses have correspondingly small emitting regions (< 1 km^2) and are attributed to the neutron star polar caps. We present a recent XMM-Newton target of opportunity observation of the newest member of this class, Swift J045106.8-694803. The period was determined to be 168.5 ± 0.2 s as of 17 July 2012 (MJD = 56125.0). At L_X ∼ 10^{36} erg s^{-1}, Swift J045106.8-694803 is the brightest member of this new class, as well as the one with the shortest period. The spectral analysis reveals for the first time the presence of a blackbody with temperature kT_{BB} = 1.8^{+0.2}_{-0.3} keV and radius R_{BB} = 0.5 ± 0.2 km. The pulsed fraction decreases with increasing energy and the ratio between the hard (> 2 keV) and soft (< 2 keV) light curves is anticorrelated with the pulse profile. Simulations of the spectrum suggest that this is caused by the pulsations of the blackbody being π out of phase with those of the power law component. Using a simple model for emission from hot spots on the neutron star surface, we fit the pulse profile of the blackbody component to obtain an indication of the geometry of the system.

INTRODUCTION

Be/X-ray binaries (BeXRBs) are stellar systems in which a compact object, almost exclusively a neutron star, orbits a main sequence Be star. These stars rotate rapidly, causing an enhancement of the equatorial material which, in turn, leads to hydrogen emission lines in the spectrum. This is a transient phenomenon, and thus any star that has exhibited hydrogen emission lines in its spectrum at some time is classed as a Be star. The compact object orbits the primary in a highly eccentric orbit and accretes matter from the equatorial outflow. There are two types of outburst associated with the X-ray emission of BeXRBs: Type I outbursts have L_X in the range 10^{36}-10^{37} erg s^{-1} and occur periodically around the time of the periastron passage of the neutron star. Type II outbursts reach higher luminosities, L_X ≳ 10^{37} erg s^{-1}, last much longer and show no correlation with orbital phase (Stella et al. 1986). These are thought to be caused by an enhancement of the circumstellar disc allowing accretion to occur at any phase of the orbit at a much higher rate. For a review of the observational properties of BeXRBs see Reig (2011).

BeXRBs are the most numerous subclass of HMXB and have been predominantly detected via the pulsations of the neutron star (e.g. Galache et al. 2008). The number of known HMXBs has increased dramatically since the launch of satellites such as the Röntgen Satellite (ROSAT, Truemper 1982) and the Rossi X-ray Timing Explorer (RXTE, Bradt, Rothschild, & Swank 1993), particularly in the Magellanic Clouds (Liu, van Paradijs, & van den Heuvel 2000, 2005). Given the large sample of objects now available, we are able to study the properties of these objects on a statistically significant scale. The X-ray spectra of BeXRBs are characterised by intrinsically absorbed power laws with photon indices, Γ, in the range 0.6-1.4 (Haberl et al. 2008), with high energy cut-offs in the range 10-30 keV (Lutovinov et al. 2005; Reig 2011).
Some authors have reported a soft excess in the spectra of HMXB pulsars, with blackbody temperatures kT_BB < 0.5 keV (e.g. see Hickox et al. 2004). Hickox et al. (2004) suggest that a soft excess is in fact present in most, if not all, HMXB spectra, though not always detected due to the high intrinsic absorption and flux of some sources. Table 1. Summary of sources with kT_BB > 1.0 keV. R_BB is the radius of the emitting region implied by L_X and kT_BB. D is the assumed distance to the source in kpc; errors are shown when they are accounted for in the calculation of the blackbody radius. All errors are at the 90% confidence level. For systems with L_X ≳ 10^{38} erg s^{-1}, this excess is thought to originate from the reprocessing of hard X-rays, most likely at the inner radius of the accretion disc surrounding the neutron star. For less luminous sources (L_X ≲ 10^{36} erg s^{-1}), the soft excess is attributed to other processes, e.g. thermal emission from the neutron star's surface. HMXBs of intermediate luminosity can show emission from either or both types of soft excess (Hickox et al. 2004). Recent observations with XMM-Newton have revealed that a handful of BeXRBs have blackbody components with kT_BB in excess of 1 keV and a derived emitting region R < R_NS (see Table 1). Such a small radius indicates emission from a hot spot on the neutron star, possibly from the magnetic polar cap. These sources all have low level X-ray emission (L_X ∼ 10^{34-35} erg s^{-1}) and long pulse periods (P > 100 s). Here we report on an XMM-Newton observation of another possible member of this group of BeXRBs: Swift J045106.8-694803. This source was detected in the Large Magellanic Cloud (LMC) by the Swift/BAT hard X-ray survey (Beardmore et al. 2009) and was followed by a 15.5 ks observation with the Swift XRT instrument. This confirmed the position of the source and revealed a periodic signal at 187 s. From the accretion model of Ghosh & Lamb (1979), Klus et al. (2013) derived a magnetic field B ∼ 1.2 × 10^{14} G from the spin-up rate, indicating that Swift J045106.8-694803 is a highly magnetised accreting pulsar (i.e., a neutron star with a super strong magnetic field B ≳ 10^{14} G). However, there are several interpretations of the high spin down rates observed in these sources which do not require super strong magnetic fields, such as accretion of magnetised material (e.g. Ikhsanov 2012; Ikhsanov & Beskrovnaya 2013) or quasi-spherical subsonic accretion (e.g. Shakura et al. 2012). OBSERVATIONS AND DATA REDUCTION A ∼7 ks XMM-Newton target of opportunity (ToO) observation was performed during satellite revolution #2308, MJD = 56125.0 (2012 July 17). Data from the European Photon Imaging Cameras (EPIC) were processed using the XMM-Newton Science Analysis System v11.0 (SAS) along with software packages from FTOOLS v6.12. Table 2 summarises the details of the EPIC observations. The MOS (Turner et al. 2001) and pn (Strüder et al.
2001) observational data files were processed with emproc and epproc respectively. The data were screened for periods of high background activity by examining the >10 keV count rate. The pn and MOS count rates were below the recommended filtering threshold for the duration of the observation and so no filter was applied. The final cleaned pn image included "single" and "double" (PATTERN ≤ 4) pixel event patterns in the 0.2-10.0 keV energy range. "Single" to "quadruple" (PATTERN ≤ 12) pixel events were selected for the cleaned MOS images in the same energy range. Photon arrival times were converted to barycentric dynamical time, centred at the solar system barycenter, using the SAS task barycen. Images, background maps and exposure maps were created for all detectors in the 0.2-10.0 keV energy range. A sliding box detection was performed simultaneously on all 3 images twice (the first with a locally estimated background, the second using the background map) with the task eboxdetect, followed by maximum likelihood fitting using the task emldetect. This process resulted in a list of sources including their positions, errors and background subtracted counts. Source counts were extracted from a circular region with radius 61″, as recommended by the SAS task eregionanalyse, which calculates the optimal radius for the source extraction by maximising the signal to noise. Background counts were extracted from a region of identical size offset from the source. This region falls on a neighbouring CCD in the pn detector and on the same chip in the MOS1 and MOS2 detectors. The background subtraction was performed using the epiclccorr task, which also corrects for bad pixels, vignetting and quantum efficiency. Source and background spectra were extracted from the same regions. Again, "single" and "double" pixel events (PATTERN ≤ 4) were accepted for the pn detector with all bad pixels and columns disregarded (FLAG=0). For the MOS spectra, "single" to "quadruple" (PATTERN ≤ 12) pixel events were selected with quality flag #XMMEA_EM. The areas of the source and background regions were calculated using the backscal task. Response matrix files were created for each source using the tasks rmfgen and arfgen. Position The simultaneous source detection performed on the three EPIC cameras determined the position of Swift J045106.8-694803 as RA(J2000) = 04h 51m 06.7s, Dec(J2000) = −69° 48′ 04.2″. The 1σ systematic uncertainty was assumed to be 1″ in accordance with the findings of the XMM-Newton Serendipitous Source catalogue (Watson et al. 2009). This is an order of magnitude larger than the statistical error derived in the source detection, and as such is the dominant error on the position. This position is consistent with the Swift positions reported by Beardmore et al. (2009) and Klus et al. (2013), confirming that these three detections are the same source. Figure 1 shows a V-band image with the locations of the Swift and XMM positions with radii equal to the 1σ errors. The image was taken with the ESO Faint Object Spectrograph and Camera (EFOSC2) mounted at the Nasmyth B focus of the 3.6 m New Technology Telescope (NTT), La Silla, Chile on the night of 2011 December 9 (MJD = 55904). The optical counterpart of Swift J045106.8-694803 is [M2002] 9775 (Massey 2002), located at RA(J2000) = 04h 51m 06.96s, Dec(J2000) = −69° 48′ 03.0″.
Timing Analysis Figure 2 shows the light curve of Swift J045106.8-694803. The bottom panel shows the Lomb-Scargle periodogram of the light curve with a bin time of 20 s. A sine wave with the detected period was fit to the 20 s bin light curve and is overlaid for clarity. A period at 168.8 s with a power of 46.1 was detected, rising to 82.7 when the bin size of the light curve is reduced to 0.1 s. Monte Carlo simulations with both red and white noise light curves were performed to determine the significance of this detection. A bin time of 20 s was also employed for the simulations to reduce the processing time. One million white noise light curves were generated by "scrambling" the original light curve (i.e. reassigning the flux values to different time stamps) using a random number generator. This method makes no assumption about the underlying distribution of the light curve. Lomb-Scargle analysis was performed on each of these light curves and the highest power recorded. None of the 1,000,000 light curves generated produced a peak in the periodogram greater than 17.0. This suggests that the period discovered in the light curve has a significance >99.9999% or 4.9σ. One million light curves were generated with a power law slope of -2.0 and the same statistical properties (mean, standard deviation and bin time) as the EPIC-pn light curve, using the method of Timmer & Koenig (1995). The light curves were initially simulated with a duration ten times longer than that of the actual data and were then cut down to the observed duration to minimise the effect of red noise leakage. Gaussian noise was added to each point of this new light curve by drawing a random deviate from a Gaussian distribution with mean and variance equal to each data point, following the method detailed by Uttley et al. (2003). Any bins with a negative count rate were set to zero. Lomb-Scargle analysis was performed on each of the simulated time series. Unlike the white noise simulations, the significance of a peak depends on the frequency. The broken line in Fig. 2 shows the 99.9999% significance contour. Both the white and red noise simulations indicate that this period is significant. The error in the period was estimated by varying the original light curve within the errors on each data point, using a Gaussian random number generator, 10,000 times. As with the simulations to determine the significance of the detection, Lomb-Scargle analysis was performed on each of these light curves. To speed up the processing time of the simulation, the light curve was only searched for periods between 50 s and 2000 s. The resulting histogram is well fit by a Gaussian with mean 168.5 s and a standard deviation of 0.16 s. As such we determine the period of Swift J045106.8-694803 to be 168.5±0.2 s as of MJD = 56125.0. The light curve was split into four energy ranges, 0.2-1.0 keV, 1.0-2.0 keV, 2.0-4.5 keV and 4.5-10.0 keV, with approximately equal count rates (0.12, 0.18, 0.17 and 0.13 counts s^-1 respectively). Figure 3 shows the pulse profiles for each of these light curves and the entire 0.2-10.0 keV light curve, each normalised to the average count rate in the energy range. The zero phase point was determined from the phase shift found from the sine wave fit to the 0.2-10.0 keV light curve (see above). Lomb-Scargle analysis was performed on each of the light curves.
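To make the white-noise significance procedure concrete, the following is a minimal Python sketch of the "scrambling" test described above (numpy and astropy assumed; the light curve here is a synthetic stand-in for the EPIC-pn data, and the number of trials is reduced from the 10^6 used in the paper):

import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)

# Toy stand-in for the 20 s binned EPIC-pn light curve (~7 ks observation).
t = np.arange(0, 7000, 20.0)
rate = 0.6 + 0.2 * np.sin(2 * np.pi * t / 168.5) + rng.normal(0, 0.1, t.size)

freq = np.linspace(1 / 2000, 1 / 50, 2000)       # search periods of 50 s - 2000 s
peak = LombScargle(t, rate).power(freq).max()

# Scramble trials: shuffling reassigns the observed rates to random time
# stamps, so no assumption about the underlying noise statistics is needed.
n_trials = 1000                                   # the paper used 10^6
max_powers = np.empty(n_trials)
for i in range(n_trials):
    shuffled = rng.permutation(rate)
    max_powers[i] = LombScargle(t, shuffled).power(freq).max()

# Fraction of trials whose strongest peak exceeds the observed one.
p_value = (max_powers >= peak).mean()
print(f"peak power = {peak:.1f}, empirical p < {max(p_value, 1 / n_trials):.1e}")

Shuffling preserves the observed flux distribution exactly, which is why no assumption about the underlying noise statistics is required; the red-noise test instead requires simulating light curves with the appropriate power-law spectrum (Timmer & Koenig 1995).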
The strongest detection of the period is in the 1.0-2.0 keV energy range, with a Lomb-Scargle power of 56.7. This does not appear to be an issue relating to photon counting statistics, as the period is barely detected in the 2.0-4.5 keV energy range (Lomb-Scargle power of 15.2), which has an almost identical average count rate. The strongest period identified in the 4.5-10.0 keV range was at 11.3 s with a Lomb-Scargle power of 7.8; this is likely to be noise rather than the detection of a second period. We investigated whether the lack of significant pulsations at higher energies could be due to a change in the shape of the pulse profile, as Lomb-Scargle analysis is more sensitive to sinusoidal variations. We checked for periods using the epoch folding method of Leahy (1987). The light curve is folded on each trial period and tested to see if it is consistent with a constant count rate with a χ² test. This reinforces the results from the Lomb-Scargle analysis, with the strongest detection in the 1.0-2.0 keV range and no detection in the 4.5-10.0 keV range. The first harmonic of the period was the strongest period identified in the 2.0-4.5 keV light curve, suggesting the pulse profile may become double peaked at higher energies. The pulsed fraction of each light curve was calculated by fitting a sine wave with the period fixed at the value found in the full 0.2-10.0 keV energy range. The phase, amplitude and the average value of the light curve were allowed to vary, and the ratio of the amplitude to the average value was taken. This is equivalent to taking the ratio of the difference of the maximum and minimum values of the sine wave to the sum of these values. This parameter can vary between 1 (completely pulsed) and 0 (constant rate). The values range from 0.47±0.05 for the 1.0-2.0 keV energy range down to 0.08±0.06 for the 4.5-10.0 keV range. The epoch folding suggests that the profile becomes double peaked at higher energies. The fit was also performed with the period fixed at the second harmonic for the last two energy bands. The results are summarised in Table 3. We consider the light curves for two energy ranges, 0.2-2.0 keV and 2.0-10.0 keV, with equal count rates (0.293±0.007 and 0.305±0.007 counts s^-1 respectively). The hardness ratio (HR) between the "soft" (<2 keV) and "hard" (>2 keV) light curves was calculated using the formula HR = (C_hard − C_soft)/(C_hard + C_soft), where C_hard and C_soft are the count rates in the hard and soft bands respectively. The hardness ratio can vary between -1.0 (zero counts in the 2.0-10.0 keV band) and 1.0 (zero counts in the 0.2-2.0 keV band). Figure 4 shows how the hardness ratio varies with pulse phase. From a comparison with the left panels of Fig. 3, a clear anti-correlation between the hardness ratio and the pulse profile is evident, with the source getting harder with decreasing luminosity.
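As an illustration of the two quantities just defined, here is a short Python sketch (scipy assumed; the function names and inputs are placeholders, not the EPIC-pn measurements):

import numpy as np
from scipy.optimize import curve_fit

def hardness_ratio(c_hard, c_soft):
    """HR in [-1, 1]: -1 means all counts are soft, +1 all counts are hard."""
    return (c_hard - c_soft) / (c_hard + c_soft)

def pulsed_fraction(t, rate, period):
    """Fit a fixed-period sine; PF = amplitude / mean level."""
    model = lambda t, mean, amp, phi: mean + amp * np.sin(2 * np.pi * t / period + phi)
    (mean, amp, phi), _ = curve_fit(model, t, rate,
                                    p0=[rate.mean(), rate.std(), 0.0])
    return abs(amp) / mean

For a pure sine wave the amplitude-to-mean ratio equals (max − min)/(max + min), which is exactly the definition used in the text.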
Spectral Analysis The spectral analysis discussed here was performed using XSPEC (Arnaud 1998) version 12.7.0. The three spectra from the different EPIC detectors were fit simultaneously with the models reported here plus an additional constant factor to account for the variations between the different detectors. The model parameters were constrained to be identical across the three instruments. The photoelectric absorption was split into two components: one, N_H,Gal, to account for the Galactic foreground extinction, fixed to 8.4 × 10^20 cm^-2 (Dickey & Lockman 1990) with abundances from Wilms et al. (2000), and a separate column density, N_H,i, intrinsic to the LMC, with abundances set to 0.5 for elements heavier than helium (Russell & Dopita 1992) and allowed to vary. All errors are evaluated at the 90% confidence level. Figure 4. Hardness ratio as a function of pulse phase. C_soft is the 0.2-2.0 keV count rate and C_hard is the 2.0-10.0 keV count rate. The phase shown is the same as that of the pulse profiles shown in Fig. 3. The spectra obtained from the three instruments were initially fit with a simple absorbed power law model (phabs*vphabs*powerlaw in XSPEC). This led to an acceptable fit, with a χ² of 270.5 for 236 degrees of freedom (dof), a photon index of Γ = 0.97 ± 0.05 and intrinsic absorption N_H,i = (1.3 ± 0.4) × 10^21 cm^-2. The photon index is typical of those seen in other BeXRBs, particularly in the SMC (Haberl et al. 2008). The possibility of a thermal component was also explored and modelled with both a blackbody and a diskbb model. Including these parameters improved the fit marginally (χ² of 252.8 and 255.7 for 234 dof respectively), but F-tests suggest that these improvements are significant at 99.96% and 99.86% respectively (i.e. >3σ). Fig. 5 shows the 0.2-10.0 keV spectrum along with the best fit model (phabs*vphabs(powerlaw+bbody)). The parameters of all the models discussed here are included in Table 4. The best fit parameters for the phabs*vphabs*(diskbb+powerlaw) model are both unphysical and poorly constrained (e.g. Γ = 4^{+1}_{-4} is extremely soft and kT = 4.2^{+1.0}_{-0.6} keV is too hot for a disc, which typically has kT ∼ 0.1 keV); as such, only the results of the phabs*vphabs*(bbody+powerlaw) model are discussed in detail. The total unabsorbed flux from the blackbody component is 1.3±0.8 × 10^-12 erg cm^-2 s^-1, accounting for approximately 40% of the total emission of the source. An intrinsically narrow Gaussian was added to the model at 6.4 keV to see if any evidence for an Fe-Kα line exists. The upper limit on the equivalent width of this component was derived as 0.3 keV. Allowing the energy or width of this feature to vary does not alter this result. The spectrum was fit with the self consistent Bulk Motion Comptonisation model (e.g. Borozdin et al. 1999); the parameters are included in Table 4. This model has a χ² value of 266.9 for 234 dof, i.e. a reduced χ² value similar to that of the original absorbed power law model. The spectral energy index is abnormally low (0.04^{+0.07}_{-0.04}), where normal is anything between 0.6 and 1.4 (Haberl et al. 2008). When a blackbody was added to the model (phabs*vphabs(bbody+bmc) in XSPEC), the χ² fell to 249.8 for 232 dof. Whilst several parameters are still poorly constrained (e.g. f < −0.3), the spectral energy index is consistent with those reported for BeXRBs (α = 0.6 ± 0.5). The blackbody temperature is identical to that reported for the bbody+powerlaw model (kT_BB = 1.8 ± 0.2 keV). Figure 6 shows a comparison between the bbody+powerlaw model and the bbody+bmc model. Between 0.4-10.0 keV the two models are indistinguishable. Importantly, the phenomenological power law component approximates the physical bmc component at energies ≳ 0.5 keV. Since the bbody+bmc model is poorly constrained, and to allow for easier comparison with previous works, we will use the values determined by the simple, empirical bbody+powerlaw model.
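The F-test quoted above for the additional blackbody component can be reproduced from the χ² values in the text. A sketch using scipy (this mirrors XSPEC's ftest command and inherits the usual caveats of F-tests for added model components):

from scipy.stats import f

chi2_pl, dof_pl = 270.5, 236     # absorbed power law
chi2_bb, dof_bb = 252.8, 234     # power law + blackbody

# F statistic for the improvement gained by the two extra parameters.
F = ((chi2_pl - chi2_bb) / (dof_pl - dof_bb)) / (chi2_bb / dof_bb)
p = f.sf(F, dof_pl - dof_bb, dof_bb)
print(f"F = {F:.2f}, p = {p:.1e}, significance = {(1 - p) * 100:.2f}%")
# -> roughly 99.96%, matching the value reported in the text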
Modelling the Phase Resolved Spectra and Pulse Profiles The anticorrelation seen between the hardness ratio and pulse profile has previously been reported for another persistent BeXRB, RX J0146.9+6121, by La Palombara & Mereghetti (2006). Pulse phased spectroscopy revealed that the change in spectra could be explained with a static blackbody and a variable power law (among other solutions). This possibility was explored by generating 10,000 EPIC-pn spectra with the same absorption, photon index and blackbody temperature as the best fit model using the XSPEC command FAKEIT. These parameters were fixed as they are linked to the physical properties of the system and/or the local environment and so are unlikely to change on a timescale of seconds. The values of the normalisations for the power law and blackbody varied from zero to 5.0 × 10^-4 in steps of 5.0 × 10^-6. The total number of counts as well as the hardness ratio was calculated for each of the simulated spectra. The results of the simulation were searched for the normalisation values which could reproduce the top two panels in Fig. 7 within errors, i.e. the same count rate and hardness ratio as each of the phase bins. The results of the simulations, along with the pulse profile and hardness ratio, are shown in Fig. 7. Interestingly, a constant blackbody could not reproduce the range of hardness ratios seen. A modulation of the blackbody component ∼π out of phase with that of the power law is required to reproduce the data. Figure 8 shows the simulated spectrum and model components of Swift J045106.8-694803 at the hardness ratio maximum (hardest) and minimum (softest). The pulsed fractions of the power law and blackbody components are consistent, at 0.6 ± 0.2 and 0.7 ± 0.3 respectively. Since the blackbody component varies with rotation and can be inferred to have a small (R < R_NS) emitting size from L_X and kT_BB, it is possible that this region is a hot, magnetic polar cap on the neutron star surface. By adopting a model for the hot spot emission and fitting this model to the pulse profile, the geometry of the system can be constrained, in particular the angle between the rotation and magnetic axes, α, and the angle between the rotation axis and line-of-sight, ζ. Proper modelling requires detailed knowledge of the magnetic field and temperature distributions over the neutron star surface and is beyond the scope of this work (see, e.g., Ho 2007). Nevertheless, useful insights can still be easily obtained using a simple model: two antipodal hot spots that emit as a blackbody and have a beam pattern (i.e., angular dependence) which is isotropic and has no energy dependence. For each (α, ζ), the pulse profile is calculated using the analytic approximation of Beloborodov (2002) to the exact relation given in Pechenick, Ftaclas, & Cohen (1983), which accounts for gravitational light-bending. The pulse profiles are degenerate in the two angles, i.e., (α, ζ) and (ζ, α) produce the same profile. A neutron star mass M_NS = 1.4 M⊙ and radius R_NS = 12 km are assumed. These model pulse profiles are then fit to the blackbody pulse profile (see panel (3) of Fig. 7), allowing the phase and amplitude to vary. Shaded regions for reduced χ² (for 8 degrees of freedom) values are shown in Fig. 9.
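A hedged Python sketch of this simple hot-spot model is given below: two antipodal, isotropically emitting spots, with gravitational light bending treated through Beloborodov's (2002) approximation cos(α_em) = u + (1 − u) cos(ψ), where u is the Schwarzschild radius over the stellar radius and ψ is the angle between the spot normal and the line of sight. The point-spot flux is taken proportional to cos(α_em) when the spot is visible; the normalisation is arbitrary, and the numbers follow the assumptions stated above (M_NS = 1.4 M⊙, R_NS = 12 km):

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30
M, R = 1.4 * Msun, 12e3
u = 2 * G * M / (R * c**2)          # Schwarzschild radius / stellar radius

def pulse_profile(alpha, zeta, phases):
    """Relative flux vs. rotational phase for two antipodal hot spots.

    alpha: angle between rotation and magnetic axes (rad)
    zeta:  angle between rotation axis and line of sight (rad)
    """
    phi = 2 * np.pi * phases
    cos_psi = (np.cos(alpha) * np.cos(zeta)
               + np.sin(alpha) * np.sin(zeta) * np.cos(phi))
    flux = np.zeros_like(phi)
    for cp in (cos_psi, -cos_psi):      # primary spot, then its antipode
        cos_em = u + (1 - u) * cp       # Beloborodov (2002) bending relation
        flux += np.where(cos_em > 0, cos_em, 0.0)  # spot visible if cos_em > 0
    return flux

phases = np.linspace(0, 1, 100)
profile = pulse_profile(np.radians(53), np.radians(70), phases)

Note that the profile is unchanged if alpha and zeta are swapped, which is the (α, ζ)–(ζ, α) degeneracy referred to in the text.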
For this emission model, the α-ζ parameter space can be divided into four regions which correspond to the four classes defined in Beloborodov (2002): class I is where one pole is visible all the time and the second pole never; class II is where one pole is visible all the time and the second pole some of the time; class III is where both spots are seen some of the time; and class IV is where both spots are always seen. A geometry in class IV is immediately ruled out as it requires the blackbody flux to be constant. Similarly, the out of phase pulsations of the power law (interpreted as the accretion column) and the blackbody suggest we can also rule out a geometry in class I, if the accretion column is located just above the neutron star surface, since it would always be eclipsed by the neutron star. The results from the fitting suggest that we are seeing both of the neutron star poles during a rotation of the neutron star, with best-fit values for the angles of (α, ζ) = (53°, 70°) and a χ² of 4.32 for 8 dof. DISCUSSION A soft excess is a common feature in the X-ray spectra of BeXRBs. It is hypothesised that a soft excess is in fact present in all BeXRB spectra, though not always detected due to the high intrinsic absorption and flux of some sources. It is thought to originate from the inner radius of an accretion disc surrounding the neutron star (Hickox et al. 2004); however, the majority of the blackbody temperatures reported are a factor ∼10 lower than that found here (e.g. Sturm et al. 2012). The temperature and flux of the blackbody component detected in this observation of Swift J045106.8-694803 (kT_BB = 1.8^{+0.2}_{-0.3} keV, f_X,BB = 1.3 ± 0.8 × 10^-12 erg cm^-2 s^-1), along with a distance to the LMC of 50.6 ± 2.1 kpc (Bonanos et al. 2011), imply a blackbody radius of R_BB = 0.5 ± 0.2 km, calculated using the formula R_BB = [L_X/(4πσT^4)]^{1/2}. All errors represent the 90% confidence limits. This is less than the radius of a neutron star and so emission from the entire accretion disc is clearly not the origin of this excess. In the last 10 years a smaller subset of HMXBs have been discovered which are reported to have a "hot" (kT_BB > 1 keV) thermal excess (see Table 1). These all have R_BB ≲ 1 km, suggesting emission from the neutron star polar cap. Interestingly, these systems are characterised by long pulse periods (P > 100 s) and persistent X-ray emission, much like Swift J045106.8-694803. Could this be a selection effect? The ability to detect pulsations in a given observation decreases with increasing pulse period. This could lead to observers requesting longer observations of the longer pulse period pulsars and thus having a greater number of counts in the source spectrum. This in turn would allow fainter spectral components to be detected. However, there are several pulsars with both short and long periods that have been observed with a similar or greater number of total counts than seen here which have not shown any evidence for this spectral component (see Haberl et al. 2008, Sturm et al. 2012 and Townsend et al. 2011 for recent examples with XMM). As such we conclude that this cannot solely be an observational bias.
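The blackbody radius quoted above follows directly from the Stefan-Boltzmann law, R_BB = [L_X/(4πσT⁴)]^{1/2} with L_X = 4πd²f_X,BB. A quick Python check with the values from the text (error propagation omitted for brevity):

import numpy as np

sigma_sb = 5.6704e-5                 # erg cm^-2 s^-1 K^-4
kpc = 3.0857e21                      # cm
keV_to_K = 1.1605e7                  # kT = 1 keV corresponds to T ~ 1.16e7 K

f_bb = 1.3e-12                       # erg cm^-2 s^-1 (unabsorbed BB flux)
kT = 1.8                             # keV
d = 50.6 * kpc                       # LMC distance (Bonanos et al. 2011)

L_bb = 4 * np.pi * d**2 * f_bb
T = kT * keV_to_K
R_bb = np.sqrt(L_bb / (4 * np.pi * sigma_sb * T**4))
print(f"L_BB ~ {L_bb:.1e} erg/s, R_BB ~ {R_bb / 1e5:.2f} km")
# -> R_BB ~ 0.5 km, reproducing the value reported in the text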
Thermal emission is often observed to have the greatest contribution in the soft energy band. This does not appear to be the case in the spectrum of Swift J045106.8-694803, as the blackbody component contributes ∼50% of the total flux of the source at energies ≳4 keV. Lomb-Scargle analysis of the 4.5-10.0 keV light curve shows no evidence for pulsations, which seems to contradict the hypothesis that the thermal emission originates from the polar cap. However, simulations of the spectrum's model components suggest that the lack of pulsations is the result of the two components pulsating out of phase. Figure 8 shows the "hard" and "soft" spectra of Swift J045106.8-694803. Despite similar levels of variation in both components, the overall spectrum varies very little above ∼3 keV. This reflects the results of the Lomb-Scargle analysis of the higher energy light curves and also explains the reduction in the pulsed fraction at higher energies, which is usually observed to increase with both increasing energy and decreasing source flux as the regions emitting the X-rays become more compact (Lutovinov & Tsygankov 2008). Above ∼10 keV, the non-thermal component once again dominates the X-ray spectrum. If this hypothesis is correct, then pulsations should once more be detectable at higher energies. The decomposition of the spectral components has allowed us to demonstrate how the geometry of the neutron star could be constrained should better data become available. The current values of α and ζ are approximate, since we assumed a simple blackbody emission model. More sophisticated modelling is not warranted at this time given the relatively large uncertainties of the pulse profile. Detailed modelling of deeper observations, with better signal-to-noise, could provide an independent measurement of the magnetic field; furthermore, future polarization studies could even break the degeneracy between α and ζ. Klus et al. (2013) use the pulse period determined in this work along with data from Swift and RXTE to show that Swift J045106.8-694803 has a magnetic field above the quantum critical value. Two other accreting pulsars are known with Ṗ values consistent with a super strong magnetic field, 4U 2206+54 (Reig et al. 2012) and SXP 1062 (Popov & Turolla 2012; Hénault-Brunet et al. 2012). Intriguingly, both of these sources are members of the hot thermal excess population (see Table 1), which suggests a possible link between these two phenomena (although SXP 1062 is also surrounded by a supernova remnant with kT_BB = 0.23 ± 0.05 keV, possibly adding to the thermal excess; Haberl et al. 2012a). The link between the magnetic field and spin period of a neutron star is well known (e.g. Shapiro & Teukolsky 1983). For isolated pulsars, the relationship is B ∝ (Ṗ P)^{1/2} and is due to the emission of magnetic dipole radiation. For accretion powered pulsars, the torque experienced due to accretion is much stronger than the dipole spin down torque and the relationship is B ∝ P^{7/6} if the neutron star is spinning at its equilibrium period (i.e. Ṗ = 0). If the neutron stars are not in spin equilibrium, the relationship is more complex (e.g. Ghosh & Lamb 1979: B ∝ (−Ṗ/P^2)^{7/2}). The "standard" relation for radio pulsars also links the neutron star polar cap size to its spin period, Θ ∝ P^{-1/2}, i.e. longer period pulsars have smaller caps. If this is extended to the accretion powered pulsars, the relationships above suggest that higher magnetic fields would be found in pulsars with longer periods and imply smaller polar caps.
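For reference, the isolated-pulsar relation B ∝ (Ṗ P)^{1/2} mentioned above carries the standard magnetic-dipole normalisation B ≈ 3.2 × 10^{19} (P Ṗ)^{1/2} G. A quick sanity check in Python against the Crab pulsar (values taken from the literature, not from this paper); note this relation does not apply to accretion-powered pulsars such as Swift J045106.8-694803, where accretion torques dominate:

# Dipole spin-down field estimate, valid for isolated (rotation-powered) pulsars.
P, Pdot = 0.0334, 4.2e-13          # Crab pulsar period (s) and period derivative
B = 3.2e19 * (P * Pdot) ** 0.5
print(f"B ~ {B:.1e} G")            # ~3.8e12 G, the canonical Crab field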
CONCLUSION We have presented detailed analysis of a recent XMM-Newton ToO observation of the BeXRB Swift J045106.8-694803. The period was determined to be 168.5±0.2 s as of MJD = 56125.0. The pulsed fraction decreases with increasing energy, with no detection of the period at energies >4.5 keV. The X-ray spectrum is adequately represented by two models, a single component continuum model (an absorbed power law) and a two component continuum model (an absorbed power law and blackbody). The extra blackbody component, though just formally significant, is not necessary for an acceptable fit to the spectrum, and the parameters of the phabs*vphabs*powerlaw model are consistent with those reported for other BeXRBs. However, it is difficult to explain the dramatic decrease in the pulsations with increasing energy with a single component model. A two component model, with anticorrelated pulsations, can account for this behaviour and for the anticorrelation of the hardness ratio and pulse profile, and implies α and ζ values of ∼53° and ∼70°. The high temperature of the blackbody (kT_BB = 1.8^{+0.2}_{-0.3} keV) implies a blackbody radius of R_BB = 0.5 ± 0.2 km, and is attributed to the polar cap of the neutron star. This is not the first detection of a hot thermal excess in the X-ray spectra of HMXBs (see Table 1 for recent examples) and, interestingly, Swift J045106.8-694803 shares common characteristics with these sources, including persistent low level X-ray emission and a long pulse period (P > 100 s). If confirmed to be the latest member of this emerging class, it would be the brightest and shortest period source. Interestingly, two of the other sources listed in Table 1 also have high Ṗ values, indicating a strong magnetic field. Whilst based on a small sample, this could suggest that there is a link between a hot thermal excess and the magnetic field strength. Further monitoring of the pulse periods of these sources as well as the temperatures of their thermal components could reveal whether this is causal or coincidental. Should the blackbody in Swift J045106.8-694803 exist, we predict that above 10 keV the pulse period should once more be detectable as the power law again becomes the dominant component in the X-ray spectrum. Most of the known X-ray pulsars in the Small and Large Magellanic Clouds have been detected with RXTE during a ∼10 yr monitoring campaign (Galache et al. 2008). RXTE has a limited response below 2 keV and this particular pulsar, with its soft pulses and low level emission, would not have been detected unless it went into outburst. Detailed analysis of the XMM survey of the SMC (Haberl et al. 2012b) could reveal a further population of these softly pulsating HMXBs. Figure 1. V-band image of Swift J045106.8-694803, taken with EFOSC2 on the NTT at La Silla, Chile, with the XMM-Newton (solid red) and Swift (broken blue) 1σ error circles. The two Swift observation IDs are labeled. Figure 2. Top panel shows the EPIC-pn light curve of Swift J045106.8-694803 with 50 s bins. Bottom panel shows the Lomb-Scargle periodogram with the detected period marked along with the 99.9999% significance levels determined by white and red noise simulations. Figure 3. Left panels show the background subtracted pulse profiles from the EPIC-pn detector, folded on the 168.5 s period detected. Right panels show the Lomb-Scargle periodogram in the same energy range.
Figure 6. Comparison between the bbody+powerlaw and the bbody+bmc models over the energy range 0.3-10.0 keV. Figure 7. Panels from top to bottom show (1) the 0.2-10.0 keV pulse profile, (2) the hardness ratio, (3) the normalisation of the blackbody (BB) required to reproduce the hardness ratio and count rate of the given phase bin, and (4) the normalisation of the power law (Γ) required for the given phase bin. The broken line in panel (3) is the best-fit pulse profile (see text). Figure 9. Reduced χ² contours of the fit to the blackbody pulse profile for the angle between rotation and magnetic axes, α, and the angle between rotation axis and line-of-sight, ζ. The 90% confidence contour corresponds to a reduced χ² of 1.1. Crosses indicate the best-fit values. The four classes (I-IV) are defined in the text. Table 3. Summary of timing results. Pulsed fractions for the 4.5-10.0 keV energy range are the 3σ upper limit.
2013-09-10T20:22:19.000Z
2013-09-10T00:00:00.000
{ "year": 2013, "sha1": "0095653bb2aaf0bc64ee27ba776e33a7f8bd45db", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/436/3/2054/4072281/stt1711.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "8b9208f66ce22b1e9b688922a2ba30f5eafe77c9", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257403012
pes2o/s2orc
v3-fos-license
Segmentectomy versus lobectomy for small‐sized pure solid non–small cell lung cancer Abstract Background Segmentectomy has been recommended for ground glass opacity (GGO)‐dominant small‐sized non–small cell lung cancer (NSCLC) or those with a GGO component. Pure solid NSCLC is a special sub‐type and has an inferior prognosis. Whether segmentectomy could achieve long‐term outcomes comparable to lobectomy for pure solid small‐sized NSCLC remained controversial. This study aimed to compare the prognosis of segmentectomy and lobectomy for pure solid NSCLC. Methods NSCLC patients with a pure solid nodule (≤2 cm) who received segmentectomy or lobectomy between January 2010 and June 2019 were retrospectively screened. The log‐rank test and univariable and multivariable Cox regression analyses were used for prognostic comparison. Further, propensity score matching analysis was adopted to yield a matched cohort. Results After screening, 344 pure solid NSCLC patients with a median follow‐up time of 56 months were retained. Among them, 98 patients underwent segmentectomy and the other 246 subjects received lobectomy. The lobectomy group had a larger tumor size and a higher rate of lymph node metastasis than the segmentectomy arm. Overall, patients with segmentectomy had better disease‐free survival (DFS) (p = 0.011) and overall survival (OS) (p = 0.028) than those with lobectomy. However, the multivariable Cox regression analysis indicated that no significant survival difference existed between segmentectomy and lobectomy after adjusting for potential confounding factors (DFS: hazard ratio [HR], 0.72; 95% confidence interval [CI], 0.30–1.77, p = 0.476; OS: HR, 0.36; 95% CI, 0.08–1.59, p = 0.178). Consistently, in the propensity score matched cohort, segmentectomy (n = 74) yielded DFS (p = 0.960) and OS (p = 0.320) similar to lobectomy (n = 74). Conclusions Segmentectomy could achieve oncological outcomes comparable to lobectomy for pure solid small‐sized NSCLC. INTRODUCTION Although lobectomy remained the standard surgical treatment for early-stage non-small cell lung cancer (NSCLC), over the past two decades more and more studies have demonstrated the excellent outcomes of segmentectomy for small-sized NSCLC. [1][2][3] Therefore, many guidelines recommended segmentectomy for peripheral small-sized NSCLC. However, the indication was restricted to ground glass opacity (GGO)-dominant NSCLC or those with GGO components. NSCLC with a pure solid nodule is a special subtype and shows a poorer prognosis than those with GGO components. [4][5][6] Whether segmentectomy could achieve long-term outcomes comparable to lobectomy for small-sized NSCLC with pure solid nodules remained controversial. [7][8][9][10] Koike et al. 7 reported that segmentectomy achieved overall survival (OS) and disease-free survival (DFS) similar to lobectomy for pure solid small-sized NSCLC (≤2 cm). Consistently, Tsubokawa and colleagues 8 observed that no significant recurrence-free survival difference existed between segmentectomy and lobectomy for clinical T1a-bN0M0 lung cancer. Similarly, Soh et al. 11 found that segmentectomy could achieve a prognosis equivalent to lobectomy for cT1b (or less) pure solid NSCLC, but not for cT1c. In contrast, Hattori et al. 9 demonstrated that segmentectomy yielded a worse 3-year locoregional recurrence-free survival than lobectomy for ≤2 cm NSCLC with pure solid nodules.
In a meta-analysis performed by Rao et al., 10 segmentectomy showed an inferior recurrence-free survival (RFS) compared with lobectomy for pure solid or solid-dominant NSCLC (≤2 cm). Not long ago, the randomized controlled trial (RCT) JCOG0802 reported that segmentectomy achieved a better OS and similar DFS compared to lobectomy for peripheral small-sized NSCLC (≤2 cm). 3 Half of the NSCLC patients enrolled in JCOG0802 had a pure solid nodule. Notably, subgroup analysis showed that the OS advantage existed in patients with pure solid NSCLC, but not in sub-solid NSCLC. However, the DFS and tumor recurrence features in patients with pure solid nodules were not well demonstrated. Therefore, more studies were warranted to demonstrate the outcomes of segmentectomy and lobectomy for pure solid NSCLC. In this study, we retrospectively analyzed the outcomes of segmentectomy and lobectomy for pure solid NSCLC to provide more evidence for the prognostic evaluation of segmentectomy for small-sized pure solid NSCLC. Patients screening We retrospectively screened patients with lung cancer who underwent segmentectomy or lobectomy between January 2010 and June 2019 in our department. Patients with: (1) histopathologically confirmed primary NSCLC; (2) radiologically pure solid nodules; and (3) tumors with a maximum diameter ≤2 cm were initially enrolled. Further, subjects with: (1) a small cell carcinoma component; (2) a history of other malignant tumors in the last 5 years; or (3) loss to follow-up within 6 months after surgery were excluded. This study was approved by the Ethics Committee of the First Affiliated Hospital of Nanjing Medical University, and individual informed consent was waived for this retrospective study. Radiologic measurement Chest computed tomography (CT) scans were performed using a 64-slice multidetector scanner (Siemens Somatom, Germany). The maximum diameters of the whole tumor and the solid component were measured in the lung window (window width, 1500 Hounsfield units; window level, −700 Hounsfield units) by W.X. and X.P. The solid component was defined as an increased opaque area that completely obscured the underlying vascular markings. Areas with a slight, homogeneous increase in density that did not obscure the underlying vascular markings were defined as GGO. Nodules with no GGO component were defined as pure solid nodules. Surgical procedure We performed preoperative three-dimensional CT bronchography and angiography (3D-CTBA) reconstruction with a virtual margin of 2 cm using "DeepInsight" for all patients who were potentially eligible for segmentectomy (Figure 1). 12 Next, segmentectomy was conducted under the guidance of 3D-CTBA as described previously. [12][13][14][15] The "Modified Inflation-deflation Method" was used to identify intersegmental interfaces, and the "Stapler Tailoring Technique" was adopted to anatomically separate intersegmental interfaces. Systematic lymph node dissection or sampling was performed. If there was evidence of lymph node involvement, lobectomy with systematic lymphadenectomy would be performed. The surgical margin had to be larger than 2 cm or than the maximum tumor diameter in segmentectomy; otherwise, segmentectomy would be converted to lobectomy. Follow-up During the first 2 years, patients received a physical examination and chest thin-section CT every 6 months. Two years later, subjects were followed up every year.
Positron emission tomography-CT, emission computed tomography (ECT), and brain magnetic resonance imaging were not routinely performed and were only recommended when there was any symptom of recurrence. OS was defined as the time from the date of surgery to the date of death (any cause). DFS represented the duration between the surgical date and the date of recurrence or death. Tumor recurrences in the ipsilateral thoracic cavity were regarded as locoregional recurrences, whereas recurrences in the contralateral thoracic cavity or other distant organs were defined as distant metastasis. Statistical analysis Student's t-test was used for the comparison of continuous variables, whereas the χ² or Fisher's exact test was adopted for categorical variables. The log-rank test and univariable and multivariable Cox regression analyses were applied for prognostic evaluation. The 5-year OS and 5-year DFS were estimated based on the Kaplan-Meier method. Propensity score matching (PSM) analysis with 1:1 matching was performed to yield the matched dataset (caliper = 0.2). Age, sex, tumor size, lymph node metastasis, pleural invasion, and histological types were matched. The standardized mean difference (SMD) was calculated to evaluate the imbalance of variables between segmentectomy and lobectomy. All the analyses were performed using R V4.1. The statistical significance level was set at 0.05. Figure 1. Preoperative three-dimensional computed tomography bronchography and angiography (3D-CTBA) reconstruction. (a) A patient had a pure solid nodule with a maximum diameter of 1.5 cm. (b) Preoperative 3D-CTBA reconstruction showed that the virtual surgical margin (2 cm) involved the V8a and the Intra. V (S8b), but not V8b. Therefore, S8 resection was performed. Table 1. Characteristics of patients undergoing segmentectomy or lobectomy. Characteristics of subjects in this study A total of 344 NSCLC patients with pure solid nodules were enrolled in this study. Segmentectomy was planned in 103 patients, and five of them were converted to lobectomy intraoperatively: four because of positive lymph nodes on fast frozen sections, and the other one because of an insufficient surgical margin. Finally, segmentectomy and lobectomy were performed in 98 and 246 patients, respectively. All the patients had R0 resection, and the mean surgical margin in segmentectomy was 2.33 cm. As shown in Table 1, the lobectomy group had a larger tumor size and a higher rate of lymph node metastasis than the segmentectomy group. Prognostic comparison of patients undergoing segmentectomy and lobectomy As shown in Figure 2, patients with segmentectomy had better OS (p = 0.028) and DFS (p = 0.011) than those with lobectomy. The 5-year OS and 5-year DFS were 97.9% and 92.4% in the segmentectomy group, versus 90.0% and 81.1% in the lobectomy group. Figure 2. Prognostic comparison of segmentectomy and lobectomy for small-sized pure solid non-small cell lung cancer (before matching). (a) Patients with segmentectomy had a better disease-free survival than those with lobectomy before matching (p = 0.011). (b) Patients with segmentectomy had a better overall survival than those with lobectomy before matching (p = 0.028). Table 2. Univariate and multivariable Cox regression analysis (before matching). However, after adjustment, no significant survival difference existed between patients with segmentectomy and lobectomy (OS: HR, 0.36; 95% CI, 0.08-1.59, p = 0.178; DFS: HR, 0.72; 95% CI, 0.30-1.77, p = 0.476) (Table 2), suggesting that the survival differences between segmentectomy and lobectomy might be attributable to the confounding of other factors.
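As an aside, the 1:1 propensity score matching described in the statistical analysis section can be sketched as follows in Python (the authors used R; this greedy nearest-neighbour implementation with a caliper of 0.2 on the logit scale is one common convention, and the column names are assumptions, not the study's actual variable names):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def psm_1to1(df, treatment="segmentectomy", caliper=0.2,
             covariates=("age", "sex", "tumor_size", "ln_metastasis",
                         "pleural_invasion", "histology")):
    # Propensity score: probability of receiving segmentectomy given covariates.
    X = pd.get_dummies(df[list(covariates)], drop_first=True)
    ps = LogisticRegression(max_iter=1000).fit(X, df[treatment]).predict_proba(X)[:, 1]
    logit = pd.Series(np.log(ps / (1 - ps)), index=df.index)
    width = caliper * logit.std()                 # caliper on the logit scale

    treated = df.index[df[treatment] == 1]
    controls = set(df.index[df[treatment] == 0])
    pairs = []
    for i in treated:                             # greedy nearest neighbour
        if not controls:
            break
        j = min(controls, key=lambda k: abs(logit[i] - logit[k]))
        if abs(logit[i] - logit[j]) <= width:     # enforce the caliper
            pairs.append((i, j))
            controls.remove(j)                    # matching without replacement
    return pairs

Matching without replacement and then checking the post-match standardized mean differences, as the authors report (SMD < 0.2), guards against residual imbalance.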
Two deaths were observed in patients undergoing segmentectomy: one died of lung cancer, and the other of another disease. There were 24 deaths in the lobectomy group. Among them, 18 died from lung cancer, and the other six deaths were attributed to other diseases or accidents. Moreover, four locoregional recurrences (4/98, 4.08%) and one distant metastasis (1/98, 1.02%, bone metastasis) were found in patients with segmentectomy. Specifically, tumor recurrences after segmentectomy occurred in the right hilar lymph node, right node station 2, right node station 1, and right pleura, respectively. In the lobectomy group, 7 (7/246, 2.85%) locoregional recurrences and 26 (26/246, 10.57%) distant metastases were observed. Patients with segmentectomy were more likely to encounter locoregional recurrences, whereas those with lobectomy had a higher risk of distant recurrences (p = 0.019). Propensity score matching analysis To further reduce the confounding of other factors between segmentectomy and lobectomy, a 1:1 PSM analysis was performed. Age, gender, tumor size, lymph node metastasis, pleural invasion, and histological types were matched. As a result, 74 patients with segmentectomy and 74 subjects with lobectomy were retained (Table 3). After matching, the baseline and main clinicopathological characteristics were comparable between these two groups (p > 0.05 and SMD < 0.2). As shown in Figure 3, no significant DFS (p = 0.960) or OS (p = 0.320) difference was observed in the matched cohort. Consistently, the univariable (HR, 1.03; 95% CI, 0.30-3.55, p = 0.965) and multivariable Cox regression analyses (HR, 2.44; 95% CI, 0.47-12.69, p = 0.287) also showed that no significant DFS difference existed between patients with segmentectomy and lobectomy (Table S1). All these findings suggested that segmentectomy could achieve long-term outcomes equivalent to lobectomy for pure solid small-sized NSCLC. DISCUSSION Although numerous studies have compared the outcomes of segmentectomy and lobectomy for small-sized NSCLC, limited evidence was available about the long-term outcomes of segmentectomy for pure solid NSCLC. Hence, most of the current clinical guidelines restricted the indications of segmentectomy to GGO-dominant NSCLC or NSCLC with GGO components. Recent studies indicated that segmentectomy could reach outcomes comparable to lobectomy for small-sized NSCLC, even for tumors with more malignant radiologic or pathological features, such as high metabolism, invasive characteristics (including lymphatic invasion, vascular invasion, pleural invasion, and/or lymph node metastasis), micropapillary and solid subtypes, and tumor spread through air spaces. 2,[16][17][18][19] Generally, pure solid NSCLC was significantly correlated with these invasive features. In this study, we found that the 5-year OS and 5-year DFS were 92.0% and 84.1% for pure solid small-sized (≤2 cm) NSCLC. Hattori et al. 20 reported that the 5-year OS and RFS for clinical stage IA radiologic pure solid lung adenocarcinoma (≤3 cm) were 83.4% and 67.5%. Similarly, Suh and colleagues 21 observed that the 5-year RFS was 70% for clinical stage IA pure solid nodule lung cancer. Patients in the present study had a better prognosis than previously reported, which could be attributed to the smaller tumor size, younger patients, and lower proportion of squamous cell carcinoma and other histological subtypes. Notably, patients in the lobectomy group had a tumor size (1.6 cm) similar to those enrolled in Koike's study.
7 Likewise, the 5-year DFS was 81.1% in the current study, equivalent to the 80.0% that Koike et al. 7 reported. Further, we found that patients with segmentectomy had a better 5-year DFS (92.4% vs. 81.1%) and 5-year OS (97.9% vs. 90.0%) than those with lobectomy. The lobectomy group had a larger tumor size than the segmentectomy group, which could account for the poorer prognosis. As expected, the multivariable Cox regression showed no significant survival difference between patients with segmentectomy and lobectomy after adjusting for other confounding factors. Further, the PSM analysis indicated that segmentectomy achieved OS and DFS comparable to lobectomy. All these findings suggested that segmentectomy could reach long-term outcomes equivalent to lobectomy for pure solid small-sized NSCLC. Interestingly, in the prospective multicenter RCT JCOG0802, segmentectomy yielded a better OS than lobectomy in NSCLC patients with pure solid nodules, but not in those with sub-solid nodules. 3 Similarly, Isaka et al. 22 reported that segmentectomy reached a higher 5-year OS than lobectomy for stage 0-I lung cancer patients with high comorbidities, but not for those with low comorbidities. NSCLC patients with pure solid nodules usually had older age, a higher smoking rate, and a higher proportion of SCC and other histologic subtypes than those with a GGO component. 6,23,24 Whether segmentectomy could achieve a better OS than lobectomy for pure solid NSCLC with poor conditions needed more studies. A few studies (including JCOG0802) showed that patients with segmentectomy had a higher total or locoregional recurrence risk than those undergoing lobectomy. 3,9 However, the detailed recurrences in patients with pure solid nodules were not well demonstrated in JCOG0802. In this study, we found that subjects with lobectomy had a higher recurrence risk than those undergoing segmentectomy before matching. Patients in the lobectomy group were more likely to undergo distant metastasis. After matching, the segmentectomy group had a recurrence risk similar to lobectomy. Consistent with our findings, Landreneau et al. observed that segmentectomy achieved locoregional, distant, and overall recurrence rates similar to lobectomy for clinical stage I NSCLC. 1,17,25 Recently, the CALGB140503 project group reported that sub-lobar resection (of which segmentectomy accounted for 41.2%) yielded OS and DFS comparable to lobectomy for N0 NSCLC (≤2 cm). Moreover, CALGB140503 reported similar locoregional and distant recurrences between sub-lobar resection and lobar resection. However, the proportion of patients with pure solid nodules was not demonstrated. The surgical margin was one of the fundamental factors influencing the long-term prognosis of NSCLC patients. In this study, all the patients who received segmentectomy had R0 resection under the guidance of 3D-CTBA. The mean surgical margin was 2.33 cm for patients in the segmentectomy group. However, the surgical margin was not recorded in some patients. Although JCOG0802 and CALGB140503 provided high-level evidence for the non-inferior oncologic outcomes of segmentectomy (or sub-lobar resection) compared to lobectomy for small-sized NSCLC, the surgical margin was not reported. Therefore, associations between tumor size, surgical margin, and tumor recurrences were not demonstrated. In addition, it was possible that tumor metastasis occurred through the lymph nodes, air spread, or blood circulation at an early stage.
26,27 In this study, we found that one patient (tumor size, 1.4 cm; surgical margin, 2 cm) who underwent segmentectomy encountered a locoregional recurrence 3 years after surgery. For such patients, a larger surgical margin might still be of no benefit. More studies with detailed surgical margins were warranted to reveal the association between surgical margins and NSCLC recurrences. This study had some limitations. First, this was a single-center retrospective study and the characteristics of patients with segmentectomy and lobectomy were not comparable. To control for this bias, multivariable Cox regression analysis and PSM analysis were adopted. Second, although all the patients had an R0 resection, the surgical margin was not recorded in some patients. Therefore, the association between surgical margin and tumor recurrence in pure solid NSCLC was unclear. Third, the sample size was still limited. More studies with large sample sizes, especially prospective RCT studies, were needed to validate our findings. CONCLUSIONS Segmentectomy achieved oncological outcomes similar to lobectomy for small-sized pure solid NSCLC. Segmentectomy could be an alternative to lobectomy not only for small-sized NSCLC with GGO components but also for those with pure solid nodules.
2023-03-09T06:16:35.021Z
2023-03-07T00:00:00.000
{ "year": 2023, "sha1": "131f483aa49bbbda8792c9932a2fe98836fffd85", "oa_license": "CCBYNCND", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.14840", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "b0e16d237e7d7b36efcaf1948cd8ab45455c12a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259703975
pes2o/s2orc
v3-fos-license
Effects of Audit Fee, Audit Delay, Financial Distress, Audit Opinion and Audit Tenure on Auditor Switching This study aims to analyze the effects of audit fee, audit delay, financial distress, audit opinion and audit tenure on auditor switching (an empirical study of food and beverage sub-sector manufacturing companies listed on the Indonesia Stock Exchange in 2019-2021), both partially and simultaneously. This research uses secondary data from companies' annual reports on the Indonesia Stock Exchange (IDX) and the official websites of the related companies. The sample of this research consists of food and beverage sub-sector manufacturing companies, selected using a purposive sampling technique, yielding 21 sample companies with 3 years of observation (63 observations). The analysis technique used is logistic regression analysis using the SPSS 25 software application. The results of the R² test show that the independent variables explain 71.9% of the variation in the dependent variable, with the remaining 28.1% explained by other variables. Based on the test results, the variables audit fee, audit delay, financial distress, audit opinion and audit tenure simultaneously have a significant effect on auditor switching. The results also show that the variables financial distress, audit opinion and audit tenure partially have a significant effect on auditor switching, while the variables audit fee and audit delay partially have no significant effect on auditor switching. Companies listed on the stock exchange are mandated to submit audited financial reports (Ilhami, 2018). Financial statements aim to provide information related to the financial condition, performance, and changes in the company's financial structure that is useful for users of financial statements in making economic decisions. Therefore, financial reports must be presented fairly and reliably, so that they can be used as material for consideration in making a decision. Public accountants or auditors are independent parties tasked with examining and providing opinions on the fairness of a company's financial statements (Ruroh & Rahmawati, 2016). Given the urgency of their duties, auditors must maintain the quality of the resulting audit by preserving an attitude of independence. Auditors must maintain their independence and avoid things that can reduce it. Many companies implement auditor switching to reduce the possibility of a close relationship between the company and the auditor, in order to maintain independence. Agoes (2018) states that auditor switching is something every company is required to do in order to maintain auditor independence and preserve stakeholder trust in the credibility of a company's financial reports. Auditor switching is defined as the process of changing auditors, carried out either at the company's own initiative or because of the government-regulated obligation to change auditors. Government Regulation No. 20/2015, paragraph 11, states that the use of the same public accountant is restricted to five consecutive financial years. This is done to avoid an overly close relationship between the client company and the auditor, which would have an impact on the audit results of the company's financial statements. Concerns about diminishing auditor independence caused by long working relationships were strengthened by the collapse of Enron, involving the Public Accounting Firm Arthur Andersen, in 2001.
This case had an impact on global financial markets and caused a drastic decline in stock prices in various parts of the world, from Europe and America to Asia. It made many people from all over the world question the independence of auditors (Maulana, 2015). One instance of the phenomenon of auditor switching in Indonesia, among the food and beverage sub-sector manufacturing companies listed on the Indonesia Stock Exchange (IDX), can be seen at PT FKS Food Sejahtera Tbk, which used the same auditor for 4 years and whose audit delay reached the 120-day limit (set in regulation No. KEP-431/BL/2012). Based on the data obtained, it is known that there were audit delays with reporting exceeding 120 days in 2017-2019, during which time the auditor should have been able to recognize the condition of the company (Marisa et al., 2022). Auditor switching in a company is influenced by several factors. Nasser et al. (2006) revealed that auditor switching will lead to increased audit fees. This increase is due to the large start-up audit costs needed to get to know and study the client's business environment. The policy regarding the determination of the audit fee is stated in the Decree of the Chairperson of the Indonesian Institute of Public Accountants (IAPI) No. KEP.024/IAPI/VII/2008 (updated by Management Regulation No. 2 of 2016 concerning the Determination of Fees for Financial Statement Audit Services). This letter serves as a guide for all IAPI members who practice as public accountants in determining the amount of fair compensation for the professional services rendered. Public accountants, in determining audit fees, must pay attention to the stages of audit work and consider client needs, independence, level of expertise, and the agreed basis for determining fees (Gultom, 2019). Another factor that influences auditor switching is audit delay. Audit delay is the period of time required by the auditor to produce an audit report on the company's financial statements, starting from the closing date of the financial year until the date the audit opinion is submitted and signed. Audit delays that exceed the predetermined time limit will result in delays in the publication of financial reports. Such a delay indicates a problem with the issuer's financial statements, requiring more time to complete the audit (Verawati & Wirakusuma, 2016). Another factor is financial distress, a condition in which a company is going through a financial crisis and is on the verge of bankruptcy. A company in financial distress will affect the views of several interested parties, both internal and external. Therefore, such companies will tend to rely on subjective evaluations and be careful when disclosing the company's actual financial condition. This condition will cause the company to conduct auditor switching in order to avoid an audit opinion revealing the actual condition of the company's financial statements, namely that the value of the company's liabilities is greater than the value of its assets (Schwartz & Menon, 1985). Another factor is the audit opinion given by the auditor. The audit opinion is very important in the audit process because it is the main source of information that can be provided to users about what the auditor has done and the conclusions drawn.
An unqualified opinion is not only an indicator of the successful performance of a business entity, government, or particular group, but also a standard for public reputation in efforts to enhance a positive image in financial management and accountability. Companies that receive unqualified opinions tend not to change their auditors (Lutfi & Sari, 2019). Audit tenure is the length of time for which a relationship exists between the client and the auditor. The longer the auditor audits the financial statements of the same client company, the less independent the auditor will be. This occurs when a close relationship arises between the auditor and the company, which can lead to a decrease in the quality of the resulting audit. Long audit tenure can cause the quality of the auditor's work competence to decrease significantly over time, which can lead to the perception that it is difficult for the auditor to act independently. This is what makes audit tenure a factor that also influences auditor switching (Astrini & Muid, 2013). Agency Theory Agency theory is a theory that explains the relationship between principal and agent. Jensen & Meckling (1976) define an agency relationship as a condition in which one or more people (the principal) engage another person (the agent) to perform some services on their behalf, involving the delegation of decision-making authority to the agent, bound by a contract. The principal in this case is the party that gives the mandate, while the agent is the party carrying it out. The main objective is to explain how the parties to a contractual relationship can design a contract to minimize the costs resulting from asymmetric information and uncertainty. Agency theory states that independent auditors should be able to minimize the differences in interests that occur between principals and agents by monitoring and examining the activities carried out by the interested parties (Hartadi, 2009). Auditor Switching Auditor switching is the act of replacing the old Public Accounting Firm or public accountant with a new one to conduct the audit of a company in the following year. According to Sumarwoto (2006), auditor switching has two kinds of drivers: factors internal and external to the company. Internal factors (voluntary) are management decisions to replace the auditor before the audit rotation obligation applies, or an auditor's resignation; external factors are the audit rotation obligation under government regulations (mandatory). Changing auditors encourages companies to present financial reports better than when the company was still being audited by the previous auditor (Tifanny et al., 2020). The replacement of the auditor is carried out to maintain the independence and objectivity of the auditor. Audit Fee According to Agoes (2018), an audit fee is the reward received by a public accountant after completing audit services, the amount of which depends on the risk involved in the assignment, the complexity of the services provided, the level of expertise required, the fee structure of the Public Accounting Firm concerned, and other professional considerations. It can be concluded that the audit fee is the amount of fees received by public accountants after completing audit services, measured here through information on estimated audit fees based on the working hours of the audit staff concerned.
Schwartz & Menon (1985) stated that what prompts companies to change auditors can be relatively high audit fees, that is, the two parties failing to agree on the amount of the audit fee.

Audit Delay
Audit delay is the delay in the publication of financial reports to the public caused by a long audit process; it is calculated as the number of days between the date of the financial statements for the period issued by the company and the date the independent auditor's report is issued (Carslaw & Kaplan, 1991). During the audit process, which tends to take quite a long time, the auditor will often encounter problems that later affect the time of audit completion, so that the audit report is delayed. Audit delay is considered one of the factors influencing auditor switching because, if the submission of financial reports to the capital market is delayed, the capital market becomes suspicious and forms a negative assessment of the company, which is feared to affect the decisions of stakeholders.

Financial Distress
Financial distress is a condition in which a company experiences financial difficulties and is on the verge of bankruptcy. It is marked by terminations of employment and by obligations in the financial statements that are greater than the assets owned by the company. Financial distress occurs if the company cannot meet its payment schedule or when the company's cash flow projections indicate that payments will not be met in the near future (Sembiring, 2016). Companies experiencing financial distress carry a negative value in operating their business, so the likelihood that they change auditors is much greater, in order to regain the trust of stakeholders.

Audit Opinion
According to Ardiyos (2007), an audit opinion is a report on the results of an assessment of the fairness of the financial statements presented by the company, issued by a registered public accountant. According to Mulyadi (2013), an audit opinion is the auditor's opinion regarding the fairness of the audited financial statements, in all material respects, based on the conformity of the preparation of the financial statements with generally accepted accounting principles. An audit opinion other than an unqualified opinion creates the perception that the company is closed and has problems in its operating financial system. Therefore, the company will try to avoid a disclaimer or other modified opinion from the auditor.

Audit Tenure
Audit tenure is the period of time during which a Public Accounting Firm provides audit services to a given client (Shockley, 1981). A long audit tenure can cause the quality of the auditor's performance competence to decrease significantly over time and lead to the perception that it is difficult for the auditor to act independently. This is due to the possibility of a personal relationship arising, which is considered to interfere with auditor independence (Astrini & Muid, 2013). Audit tenure is also related to signalling theory: if financial reports are submitted on time, they constitute good news and give a positive signal to the public, and vice versa. This makes audit tenure one of the factors that influence auditor switching.

Data Types and Sources
The type of data used in this research is secondary data.
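To make the variable definitions above concrete, the sketch below computes audit delay in days, flags filings beyond the 120-day limit, and builds a crude financial-distress indicator echoing Schwartz & Menon's liabilities-exceed-assets description. The column names, the sample figures, and the distress proxy are illustrative assumptions; the paper does not spell out its measurement formulas at this point.

```python
import pandas as pd

# Hypothetical input: one row per company-year with reporting dates and
# balance-sheet figures (column names and values are assumed, not from the paper).
df = pd.DataFrame({
    "fiscal_year_end":   pd.to_datetime(["2019-12-31", "2020-12-31"]),
    "audit_report_date": pd.to_datetime(["2020-05-15", "2021-03-20"]),
    "total_liabilities": [1_200.0, 800.0],   # in billions of rupiah (assumed)
    "total_assets":      [1_000.0, 1_500.0],
})

# Audit delay: days from the fiscal year end to the signed audit report
# (following the Carslaw & Kaplan, 1991 definition quoted above).
df["audit_delay"] = (df["audit_report_date"] - df["fiscal_year_end"]).dt.days

# Flag delays beyond the 120-day regulatory limit (KEP-431/BL/2012).
df["late_filing"] = (df["audit_delay"] > 120).astype(int)

# A crude financial-distress proxy: liabilities exceeding assets.
df["financial_distress"] = (df["total_liabilities"] > df["total_assets"]).astype(int)

print(df[["audit_delay", "late_filing", "financial_distress"]])
```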
The research data were obtained from the audited financial reports in the annual reports of the Food and Beverage Manufacturing Companies listed on the Indonesia Stock Exchange for the period 2019-2021, retrieved from the website www.idx.co.id and the websites of the companies concerned. Additional data sources were books, literary media, company documents, articles, scientific journals, and other reading media related to the topic of this study.

Population and Sample
The population of this research consists of the Food and Beverage Sub-Sector Manufacturing Companies listed on the Indonesia Stock Exchange in 2019-2021, comprising 30 companies. The sampling technique used was purposive sampling, in which the sample is selected on the basis of certain criteria; 21 companies met the established criteria over the 3-year research period, yielding a sample of 63 observations.

Data Collection Techniques and Data Analysis Techniques
The data collection technique in this study is the documentation study technique. This technique makes it possible to retrieve the data again when needed for analysis or other comparisons, and the documentation method also serves as a complement to the research data. The data were analyzed with the SPSS Statistics Version 25.0 software program, using logistic regression as the analysis technique (see the sketch after this section for an equivalent pipeline).

Result/Findings
The data in this study were tested in several stages: descriptive statistics, the classical assumption tests, logistic regression analysis, and hypothesis testing. The Durbin-Watson value of 2.088 is greater than the upper limit (du) of 1.7671 from the table values (significance 5%, n = 63, k = 5) and smaller than 4 − 1.7671 = 2.2329, so it is concluded that there is no autocorrelation in this research. The multicollinearity test shows that each independent variable has an acceptable value: no variable violates the tolerance > 0.1 and VIF < 10 thresholds, so it is concluded that there is no multicollinearity in the research.

The Nagelkerke R Square value from the model summary is 0.719, which means that 71.9% of the variability of the dependent variable is explained by the independent variables. The regression model is declared good because there is a decrease from the initial −2LL value (block number 0) to the final −2LL value (block number 1). The Hosmer and Lemeshow test yields a chi-square of 6.450 with 8 degrees of freedom and a significance of .597 (SPSS Version 25 output, 2023); since the significance value is much greater than 0.05, the research model is able to predict the observed values, that is, it fits the observation data.

Classification Matrix
The results show that the overall ability of the regression model to predict a company's decision regarding auditor switching is 85.7%. Of the 31 companies in the sample that carried out auditor switching, 30 are correctly classified by the model. For companies that did not switch auditors, the regression model achieves a prediction accuracy of 75.0%.
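The paper runs its logistic regression in SPSS 25. For readers who want to reproduce the analysis pipeline, the sketch below shows an equivalent estimation in Python with statsmodels. The variable names and the data file are placeholders, not from the paper; the small helper computes Nagelkerke's R² from the null and full log-likelihoods, mirroring the 0.719 value reported from SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: 63 company-year observations with the study's
# five predictors and the binary outcome (column names are assumed).
df = pd.read_csv("auditor_switching.csv")  # placeholder file name

X = sm.add_constant(df[["audit_fee", "audit_delay", "financial_distress",
                        "audit_opinion", "audit_tenure"]])
y = df["auditor_switching"]  # 1 = switched auditor, 0 = did not

model = sm.Logit(y, X).fit(disp=False)
print(model.summary())  # coefficients, z-statistics (Wald-type), p-values

# Nagelkerke R^2 from the null and full log-likelihoods.
n = len(y)
cox_snell = 1 - np.exp((2 / n) * (model.llnull - model.llf))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * model.llnull))
print(f"Nagelkerke R^2: {nagelkerke:.3f}")

# Classification table at the usual 0.5 cut-off, as in the paper's matrix.
pred = (model.predict(X) >= 0.5).astype(int)
print(pd.crosstab(y, pred, rownames=["observed"], colnames=["predicted"]))
```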
These results state that, with the regression model used, a total of 24 companies are correctly predicted not to carry out auditor switching out of the 32 companies that did not switch.

Discussion
Based on the tests that have been carried out, the research results are as follows.

Effect of Audit Fee, Audit Delay, Financial Distress, Audit Opinion, and Audit Tenure on Auditor Switching
Based on the omnibus test of model coefficients, the independent variables jointly influence the dependent variable. The simultaneous test of all variables on auditor switching yields a significance value of 0.000 < 0.05 (α = 5%); this indicates that the variables audit fee, audit delay, financial distress, audit opinion, and audit tenure together have an effect on auditor switching. In other words, all the variables are interrelated and play a role in predicting the occurrence of auditor changes in a company.

Effect of Audit Fee on Auditor Switching
The analysis of the audit fee variable on auditor switching yields a coefficient of −0.024 with a significance value of 0.193 > 0.05 (α = 5%), greater than the significance level. This indicates that the audit fee variable has no significant effect on auditor switching: an increase in the amount of audit fees does not trigger auditor switching in the company. This research shows that the company retains auditors who deliver results in accordance with the company's expectations, even when audit fees have risen from the previous period. High audit fees are accompanied by quality and performance capabilities that benefit the company and provide reliable audit results for all interested parties. The failure of audit fees to influence auditor switching arises because companies believe that the Public Accounting Firm or Public Accountant auditing them understands and knows the company's performance. The company also considers that the auditor used in the previous year fulfilled their duties properly and in accordance with the company's expectations (Subiyanto et al., 2022). These results support previous research by Stevani & Siagian (2020), which stated that the audit fee variable has no significant effect on auditor switching, and differ from Widnyani & Muliartha (2018), who state that audit fees have a significant effect on auditor switching.

Effect of Audit Delay on Auditor Switching
The analysis of the audit delay variable on auditor switching yields a coefficient of 0.000 with a significance value of 0.656 > 0.05 (α = 5%), greater than the significance level. This indicates that the audit delay variable has no significant effect on auditor switching: delays in completing the audit, or late audit reports issued by the auditor, do not trigger auditor switching in the company. The company will try to explain such a delay by conveying acceptable reasons so that it does not affect the decision-making of stakeholders with investment interests. The company will retain its auditor if it obtains results in line with what it wants, even if there is a delay in the audit process, and will continue to use audit services that match the company's capabilities and quality expectations.
The failure of audit delay to influence auditor switching was probably due to the fact that the companies in this research sample generally had not exceeded BAPEPAM's 120-day limit. If the company changes the auditor, it takes quite a while for the new auditor to understand the client's business situation. Delays by the client company in submitting the files and papers needed by the auditor also slow down the auditor's work on the company audit (Subiyanto et al., 2022). These results support previous research by Rohmah et al. (2018), which stated that the audit delay variable has no significant effect on auditor switching, and differ from Pawitri & Yadnyana (2015), who state that audit delay has a significant effect on auditor switching.

Effect of Financial Distress on Auditor Switching
The analysis of the financial distress variable on auditor switching yields a coefficient of 0.006 with a significance value of 0.010 < 0.05 (α = 5%), smaller than the significance level. This indicates that the financial distress variable has a significant effect on auditor switching: the financial difficulties a company is experiencing significantly affect the occurrence of auditor switching. Financial difficulties drive a change of auditors in order to obtain the desired opinion and audit fees in line with the company's capabilities. Companies near bankruptcy are more likely to engage an auditor with high independence in order to restore the confidence of shareholders and creditors and to avoid legal problems (Nasser et al., 2006). Therefore, companies in a position of financial crisis tend to change their auditors to obtain audit results that fit the company's wishes. The more difficult the company's financial condition, the more often the auditor is changed, in order to avoid a bad public view of the company. These results support previous research by Sima & Badera (2018), which stated that financial distress has a significant effect on auditor switching, and differ from Fauziyyah et al. (2019), who state that the financial distress variable has no significant effect on auditor switching.

Effect of Audit Opinion on Auditor Switching
The analysis of the audit opinion variable on auditor switching yields a coefficient of 0.263 with a significance value of 0.010 < 0.05 (α = 5%), smaller than the significance level. This indicates that the audit opinion variable has a significant effect on auditor switching: the audit opinion issued by the auditor significantly affects the occurrence of auditor switching. The unqualified opinion is known to be the opinion desired by all companies whose financial statements are audited, because it conveys a good image of the company's operations, both financially and in terms of performance. Audit opinions other than unqualified give the impression that the company has problems in its operational system and send negative signals to stakeholders considering an investment. Therefore, the company will try to stay away from opinions other than the auditor's unqualified opinion.
These results support previous research by Wijaya & Rasmini (2015), which stated that audit opinions have a significant effect on auditor switching, and differ from Widnyani & Muliartha (2018), who state that the audit opinion variable has no significant effect on auditor switching.

Effect of Audit Tenure on Auditor Switching
The analysis of the audit tenure variable on auditor switching yields a coefficient of −0.527 with a significance value of 0.000 < 0.05 (α = 5%), smaller than the significance level. This indicates that the audit tenure variable has a significant effect on auditor switching: the length of the engagement relationship between the company and its auditor significantly affects the occurrence of auditor switching. Restrictions on the relationship are imposed in order to maintain the professionalism of both parties, the client and the auditor, and to prevent personal interests from being served without regard for the quality of the audit results. This is why changing auditors is often assumed to neutralize a company after a long relationship: a relationship that lasts too long creates sympathy between the auditor and the client, who will try to accommodate each other's personal interests without considering the quality of the audit results obtained. These results support previous research by Rohmah et al. (2018), which states that audit tenure has a significant effect on auditor switching, and differ from Gultom (2019), who states that the audit tenure variable has no significant effect on auditor switching.

Conclusion
Based on the results of the research and analysis that have been carried out, several conclusions can be drawn, as follows:
1. The variables audit fee, audit delay, financial distress, audit opinion, and audit tenure jointly have an effect on auditor switching in Manufacturing Companies in the Food and Beverage sub-sector listed on the Indonesia Stock Exchange in 2019-2021.
2. The variable audit fee has no effect on auditor switching in Manufacturing Companies in the Food and Beverage sub-sector listed on the Indonesia Stock Exchange in 2019-2021.
3. The variable audit delay has no effect on auditor switching in Manufacturing Companies in the Food and Beverage sub-sector listed on the Indonesia Stock Exchange in 2019-2021.
4. The variable financial distress has an effect on auditor switching in Manufacturing Companies in the Food and Beverage sub-sector listed on the Indonesia Stock Exchange in 2019-2021.
5. The variable audit opinion has an effect on auditor switching in Manufacturing Companies in the Food and Beverage sub-sector listed on the Indonesia Stock Exchange in 2019-2021.
6. The variable audit tenure has an effect on auditor switching in Manufacturing Companies in the Food and Beverage sub-sector listed on the Indonesia Stock Exchange in 2019-2021.

Suggestion
Based on the research conclusions, the authors offer the following input and considerations as suggestions for future research:
1. Future research can use research objects from other corporate sectors, such as energy, property and real estate, or financial-sector companies, or it can use all companies listed on the Indonesia Stock Exchange to obtain more valid results. This would reveal the determinants of auditor switching across companies more broadly.
2. Future research is expected to add other variables that theoretically affect auditor switching, for example KAP (audit firm) size, company growth, and company reputation. This would show which factors can influence the occurrence of auditor switching from both the KAP side and the company side.
3. Future research is expected to expand the period of research conducted.
Contract theory in a VUCA world

In this paper we investigate a Principal-Agent problem with moral hazard under Knightian uncertainty. We extend the seminal framework of Holmström and Milgrom by combining a Stackelberg equilibrium with a worst-case approach. We investigate a general model in the spirit of Cvitanić, Possamaï and Touzi (2018). We show that optimal contracts depend on the output and its quadratic variation, extending the works of Mastrolia and Possamaï (2016) (by dropping all the restrictive assumptions) and Sung (2015) (by considering a general class of admissible contracts). We characterize the best-reaction effort of the Agent through the solution to a second-order BSDE, and we show that the value of the problem of the Principal is the viscosity solution of a Hamilton-Jacobi-Bellman-Isaacs equation, without needing a dynamic programming principle, by using the stochastic Perron's method.

The main difficulty is that the Principal has to design the contract without directly observing the effort provided by her Agent. This situation is commonly identified with a Stackelberg equilibrium between the Principal and the Agent: first the Principal anticipates the best-reaction effort of the Agent for any given fixed salary; then, taking into account the optimal efforts, she maximizes her utility and computes the optimal contract satisfying the reservation utility constraint. This paradigm appeared in the 1970s in discrete-time models and was then reformulated by Holmström and Milgrom in [24] in a continuous-time version of the problem, in which the work of the Agent is to control the drift of a Brownian diffusion. We refer to the monographs [43] and [15] for more explanations, general overviews and mathematical treatments of this theory. Several extensions of the work of Holmström and Milgrom have recently surfaced. A first noticeable extension is the study of Sannikov [38], who considers a Principal-Agent problem with a retiring random time chosen by the Principal. In particular, Sannikov emphasized that the problem of the Principal has to be seen as a stochastic control problem in which the continuation utility of the Agent is a state variable. This idea was rigorously extended later in the works of Cvitanić, Possamaï and Touzi [13, 14], who investigate a Principal-Agent problem in which the Agent can control both the drift and the volatility of the wealth of the Principal. More precisely, they show that when the Agent also takes a supremum over the possible volatilities, his value function is the solution to a second-order BSDE (2BSDE for short), a theory introduced by Soner, Touzi and Zhang in [40] and improved by Possamaï, Tan and Zhou in [35]. The so-called dynamic programming approach of Cvitanić, Possamaï and Touzi consists in restricting the set of contracts offered to the Agent to a suitable class, so that the problem of the Principal is reduced to a standard stochastic control problem associated with a Hamilton-Jacobi-Bellman equation. The main difference between the unrestricted and the restricted class of contracts lies in the absolute continuity of the increasing process appearing in the 2BSDE associated with the problem of the Agent. By building absolutely continuous approximations of the increasing process, they show that the restricted and the unrestricted problems have the same value. In this paper we incorporate the last component of VUCA, Ambiguity, into the standard Principal-Agent problem.
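Before entering the model, it may help to record the Stackelberg structure just described as a generic bilevel program. This is an illustrative formalization, not the paper's notation (which is set up in Section 2): the placeholder $\mathrm{cost}(\alpha)$ and the plain expectations stand in for the worst-case functionals defined later.

```latex
\[
\sup_{\xi}\ \mathbb{E}\big[U_P\big(L(X_T)-\xi\big)\big]
\quad\text{subject to}\quad
\alpha^{\star}\in\arg\max_{\alpha}\ \mathbb{E}\big[U_A(\xi)-\mathrm{cost}(\alpha)\big],
\qquad
\max_{\alpha}\ \mathbb{E}\big[U_A(\xi)-\mathrm{cost}(\alpha)\big]\ \ge\ R_0,
\]
```

where $\xi$ is the contract, $\alpha$ the Agent's effort, $U_A$ and $U_P$ the utilities, $L$ the liquidation function, and $R_0$ the reservation utility.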
To quote Bennett and Lemoine, "Ambiguity characterizes situations where there is doubt about the nature of cause-and-effect relationships". We model ambiguity by introducing a third player in the system, named the Nature, which randomly modifies the volatility of the project. As usual, the Agent is hired by the Principal to control the drift of an output process, and the Principal cannot observe the actions of the Agent. However, neither the Principal nor the Agent is informed about the volatility of the project; they merely hold beliefs about it. Since we work under the weak formulation, the uncertainty on the volatility is represented by assigning to the Principal and the Agent different sets of probability measures under which they make their decisions. We adopt a worst-case approach to this scenario, so that both individuals display an extreme ambiguity aversion: they act as if the third player, the Nature, were playing against them and choosing the worst possible volatility. As a consequence, both the Principal and the Agent play zero-sum stochastic differential games against the Nature. This work extends the models proposed by Mastrolia and Possamaï [31] and Sung [44] to more general frameworks, dropping several explicit and implicit assumptions made in these papers: we do not consider only exponential utilities, and we do not restrict a priori the class of admissible contracts.

Since the seminal work of Isaacs [25], differential games, and more particularly zero-sum games, have received growing interest. As an overview of the works related to this theory, let us recall some noticeable and relevant studies inspiring the present paper and the mathematical tools they used. Lions and Souganidis investigated stochastic differential games in [30] using the theory of viscosity solutions, introduced in the works of Lions [27, 29, 28]. Hamadène and Lepeltier then proved in [22] the existence of a saddle point when the so-called Isaacs condition is satisfied, with the help of classical BSDEs. These works were then generalized to more general dynamics by Buckdahn and Li in [9], allowing both the drift and the diffusion terms of the output process to be impacted by control processes. Cardaliaguet and Rainer then introduced a new notion of strategies, called path-wise strategies, to solve a differential game in [12, 11]. This was extended by Bayraktar and Yao in [4] by considering unbounded controls and using a weak dynamic programming approach. Notice that all of these works mainly deal with the Isaacs condition. Buckdahn, Li and Quincampoix then successfully characterized the value of a stochastic differential game without assuming the Isaacs condition in [10], using viscosity solutions theory together with a randomization procedure for the stochastic control processes. All these works frame zero-sum games under the strong formulation. The work of Pham and Zhang [34] investigates a non-Markovian zero-sum game in the weak formulation, more suitable for Principal-Agent problems with moral hazard, by using path-dependent PDEs. All the previous papers treat stochastic differential games with a dynamic programming principle approach (DPP for short). Although El Karoui and Tan proved in [18, 19] that a DPP holds for very general stochastic control problems, this is not the case for stochastic differential games.
As explained in the paper of Hamadène, Lepeltier and Peng [23], a game of type "control against control" may not lead to a DPP. A central point of our work is to avoid the DPP by following the stochastic Perron's method developed by Bayraktar and Sîrbu in [1, 2, 3] and applied to stochastic differential games by Sîrbu in [39]. In our problem, we aim at mixing zero-sum differential games with a Stackelberg equilibrium by partially following the dynamic programming approach introduced in [14]. The Agent does not choose the volatility of the outcome process, but his worst-case approach leads to reducing his problem to the solution of a 2BSDE, seen as an infimum of BSDEs over a set of probability measures. Unlike [14], the problem of the Principal becomes a non-standard stochastic differential game, because the worst probability measures for the Agent and the Principal do not necessarily coincide. The main contribution of this paper is to prove that the value function of the Principal is a viscosity solution to the Hamilton-Jacobi-Bellman-Isaacs (HJBI for short) equation associated with a restricted problem, as soon as a comparison result holds. The method we use is based on the stochastic Perron's method of Bayraktar and Sîrbu [1, 2, 3, 39] and circumvents the need for a dynamic programming principle for our problem, which may be quite hard to check in practice. The stochastic Perron's method amounts to a verification result: it consists in proving that the value function of the Principal lies between a viscosity super-solution (the supremum of the stochastic sub-solutions) and a viscosity sub-solution (the infimum of the stochastic super-solutions) of the HJBI equation. Thus, as soon as a comparison theorem holds for such a PDE, and the sets of stochastic semi-solutions are non-empty, it follows that the value function of the Principal coincides with the unique viscosity solution. Moreover, the DPP also follows from the definition of the stochastic semi-solutions. The only restriction we make in our work is to deal with piecewise controls for the problem of the Principal. Although this assumption is restrictive, it is very common in stochastic control theory and meaningful, as explained in [39]. Moreover, in view of [34, 41], we expect the value function associated with this restricted problem to coincide with that of the general problem. The structure of the paper is the following: in Section 2 we define the framework and the model; the problem of the Agent is solved in Section 3; Section 4 is at the heart of our study and is the main contribution of our paper. After having studied the degeneracies of our problem, we prove that the value function of the Principal is a viscosity solution to the HJBI equation associated with a restricted problem, as soon as a comparison result holds, without assuming that a dynamic programming principle holds, by using the stochastic Perron's method. We also discuss examples in which we can expect the comparison result to be satisfied. To ease the reading of the paper, some technical definitions and the proofs of the two main results are postponed to the appendix.

We denote by $X$ the canonical process on $\Omega$, i.e. $X_t(x) = x_t$ for all $x \in \Omega$ and $t \in [0,T]$. We set $\mathbb{G} := (\mathcal{G}_t)_{t\in[0,T]}$ the filtration generated by $X$ and $\mathbb{G}^+ := (\mathcal{G}^+_t)_{t\in[0,T]}$ its right limit, where $\mathcal{G}^+_t := \bigcap_{s>t} \mathcal{G}_s$ for $t \in [0,T)$ and $\mathcal{G}^+_T := \mathcal{G}_T$. We denote by $\mathbb{P}_0$ the Wiener measure on $(\Omega, \mathcal{G}_T)$. Let $\mathcal{M}(\Omega)$ be the set of all probability measures on $(\Omega, \mathcal{G}_T)$.
Recall the so-called universal filtration $(\bigcap_{\mathbb{P}\in\mathcal{P}} \mathcal{G}^{\mathbb{P}}_t)_{t\in[0,T]}$, where $\mathcal{G}^{\mathbb{P}}_t$ is the usual completion of $\mathcal{G}_t$ under $\mathbb{P}$. For any subset $\mathcal{P} \subset \mathcal{M}(\Omega)$, a $\mathcal{P}$-polar set is a $\mathbb{P}$-negligible set for all $\mathbb{P} \in \mathcal{P}$, and we say that a property holds $\mathcal{P}$-quasi-surely if it holds outside some $\mathcal{P}$-polar set. We also introduce the filtration $\mathbb{F}^{\mathcal{P}} := \{\mathcal{F}^{\mathcal{P}}_t\}_{0\le t\le T}$, obtained by augmenting $\mathbb{G}$ with the collection $\mathcal{T}^{\mathcal{P}}$ of $\mathcal{P}$-polar sets, and its right-continuous limit, denoted $\mathbb{F}^{\mathcal{P},+} := (\mathcal{F}^{\mathcal{P},+}_t)_{t\in[0,T]}$; we omit the indexation with respect to $\mathcal{P}$ when there is no ambiguity. For any subset $\mathcal{P} \subset \mathcal{M}(\Omega)$ and any $(t,\mathbb{P}) \in [0,T]\times\mathcal{P}$, we denote by $\mathcal{P}(t,\mathbb{P})$ the corresponding set of measures coinciding with $\mathbb{P}$ up to time $t$. We also recall that for every probability measure $\mathbb{P}$ on $\Omega$ and every $\mathbb{F}$-stopping time $\tau$ taking values in $[0,T]$, there exists a family of regular conditional probability distributions (r.c.p.d. for short) $(\mathbb{P}^{\tau}_x)_{x\in\Omega}$ (see e.g. [42]) satisfying Properties (i)-(iv) of [35]; we refer to [35, Section 2.1.3] for more details. We say that $\mathbb{P} \in \mathcal{M}(\Omega)$ is a semi-martingale measure if $X$ is a semi-martingale under $\mathbb{P}$, and we denote by $\mathcal{P}_S$ the set of all semi-martingale measures. We write $\mathcal{M}_{d,n}(\mathbb{R})$ for the space of matrices with $d$ rows and $n$ columns with real entries. It is well known (see for instance the result of [26]) that there exists an $\mathbb{F}$-progressively measurable process $\langle X\rangle := (\langle X\rangle_t)_{t\in[0,T]}$ coinciding with the quadratic variation of $X$, $\mathbb{P}$-a.s. for any $\mathbb{P} \in \mathcal{P}_S$, whose density with respect to the Lebesgue measure at time $t \in [0,T]$ is a non-negative symmetric matrix $\widehat{\sigma}^2_t \in \mathcal{M}_{d,d}(\mathbb{R})$ defined by
$$\widehat{\sigma}^2_t := \limsup_{\varepsilon \searrow 0} \frac{\langle X\rangle_t - \langle X\rangle_{t-\varepsilon}}{\varepsilon}.$$
The formal definition of all the functional spaces mentioned in this paper can be found in Appendix A.

Weak formulation of the output process
We start by defining $\mathcal{A}$ and $\mathcal{N}$ as the sets of $\mathbb{F}$-adapted processes taking values in $A$ and $N$ respectively, where $A$ and $N$ are compact subsets of some finite-dimensional space. We call a control process every pair $(\alpha,\nu) \in \mathcal{A}\times\mathcal{N}$. To clarify the notations for the rest of the paper, $\alpha$ has to be understood as the control of the Agent and $\nu$ as the control of the Nature. Consider next the volatility coefficient $\sigma$ for the controlled process, which is assumed to be uniformly bounded and such that $\sigma\sigma^{\top}(\cdot, n)$ is an invertible $\mathbb{F}$-progressively measurable process for any $n \in N$. For every $(t,x) \in [0,T]\times\Omega$ and $\nu \in \mathcal{N}$, we consider the SDE (2.1) driven by an $n$-dimensional Brownian motion $W$ (a schematic reconstruction is given below). Similarly to [14], we build a control model through the weak solutions of SDE (2.1). We say that $(\mathbb{P},\nu)$ is a weak solution of (2.1) if the law of $X^{t,x,\nu}_t$ under $\mathbb{P}$ is $\delta_{x(t)}$ and there exists a $\mathbb{P}$-Brownian motion, denoted by $W^{\mathbb{P}}$, driving the dynamics. We denote by $\mathcal{N}(t,x)$ the set of weak solutions to SDE (2.1), and by $\mathcal{P}(t,x)$ the set of probability measures which are components of weak solutions. We conclude this section by showing that the set $\mathcal{P}(t,x)$ satisfies an important property which is essential to deal with the well-posedness of 2BSDEs, the main tool we will use later to solve the problem of the Agent. We recall the definition of a saturated set of probability measures (see [35, Definition 5.1]).

Definition 2.1 (Saturated set of probability measures). A set $\mathcal{P} \subset \mathcal{M}(\Omega)$ is said to be saturated if, for an arbitrary $\mathbb{P} \in \mathcal{P}$, any probability $\mathbb{Q} \in \mathcal{M}(\Omega)$ which is equivalent to $\mathbb{P}$ and under which $X$ is a local martingale belongs to $\mathcal{P}$.
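Since the displayed equation (2.1) does not survive in this extraction, the following is a schematic reconstruction in the spirit of the driftless weak formulation of [14]; the exact arguments of $\sigma$ and the initial-condition convention are assumptions.

```latex
\[
X^{t,x,\nu}_s \;=\; x(t) \;+\; \int_t^s \sigma\big(r, X^{t,x,\nu}_{\cdot}, \nu_r\big)\, dW_r,
\qquad s\in[t,T].
\]
```

Under this reading, a weak solution $(\mathbb{P},\nu)$ carries a $\mathbb{P}$-Brownian motion $W^{\mathbb{P}}$ such that the same identity holds with $W^{\mathbb{P}}$ in place of $W$, $\mathbb{P}$-a.s.; the drift generated by the Agent's effort is introduced afterwards by a change of measure.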
We thus have the following lemma, whose proof follows the same lines as [14, Proof of Proposition 5.3, Step (i)].

Estimate sets of volatility
The beliefs of the Agent and the Principal about the volatility of the project will be summed up in the families of measures $(\mathcal{P}_A(t,x))_{(t,x)\in[0,T]\times\Omega}$ and $(\mathcal{P}_P(t,x))_{(t,x)\in[0,T]\times\Omega}$ respectively, which satisfy $\mathcal{P}_A(t,x) \cup \mathcal{P}_P(t,x) \subset \mathcal{P}(t,x)$ for every $(t,x) \in [0,T]\times\Omega$. We emphasize that the families $\mathcal{P}_A$ and $\mathcal{P}_P$ cannot be chosen completely arbitrarily: they have to satisfy a certain number of stability and measurability properties, classical in stochastic control theory, in order to use the theory of 2BSDEs developed in [35]. The following assumption (Assumption 2.1) guarantees the well-posedness of 2BSDEs defined on the sets of beliefs of the Principal and the Agent. In particular, property (iii) implies that the sets $\mathcal{P}_A(t,x)$ and $\mathcal{P}_P(t,x)$ at time $t = 0$ are independent of $x$. We thus define $\mathcal{P}_A := \mathcal{P}_A(0,x)$ and $\mathcal{P}_P := \mathcal{P}_P(0,x)$ for every $x \in \Omega$. An example of estimate sets of volatility satisfying Assumption 2.1 is the learning model presented in [31]. In the context of that example, [31] studies the case determined by certain processes $(\underline{\sigma}^P, \underline{\sigma}^A, \overline{\sigma}^P, \overline{\sigma}^A) \in \mathbb{H}^0(\mathbb{R}^*_+, \mathbb{F})^4$; we refer to their paper for an interpretation of this model. To conclude this section, we define the set of weak solutions to the SDE (2.1) associated with the beliefs of the Principal and of the Agent, and we define the sets $\mathcal{N}_A$ and $\mathcal{N}_P$ accordingly. The importance of these sets is that, as explained in the next section, both the Principal and the Agent consider that the volatility of the outcome process is chosen from one of them, according to their beliefs.

The contracting problem
We study a generalization of both the classical problem of Holmström and Milgrom [24] and the problem of moral hazard under volatility uncertainty studied in [31, 44]. In our model, the Agent is hired by the Principal to control the drift of the outcome process $X$, but neither of them is certain about the volatility of the project. Both sides observe $X$ and take a "worst-case" approach to the contract, in the sense that they act as if a third player, the "Nature", were playing against them by choosing the worst possible volatility.

Admissible efforts
As usual in the literature, we work under the weak formulation of the Principal-Agent problem. Therefore, the set of controls of the Agent is restricted to those for which an appropriate change of measure can be applied to the weak solutions of SDE (2.1). In this section we make precise the condition required for a control to be an admissible effort, as well as the impact of the actions of the Agent on the outcome process. The Agent exerts an effort $\alpha \in \mathcal{A}$ to manage the project, unobservable by the Principal, impacting the outcome process through the drift coefficient $b : [0,T]\times\Omega\times A\times N \longrightarrow \mathbb{R}^n$, which satisfies that $b(\cdot, a, n)$ is an $\mathbb{F}$-progressively measurable process for every $(a,n) \in A\times N$. The actions of the Agent are costly for him, so his benefits are penalized by a cost function $c : [0,T]\times\Omega\times A \longrightarrow \mathbb{R}$ such that for every $a \in A$, $c(\cdot, a)$ is an $\mathbb{F}$-progressively measurable process, and we assume an integrability condition on $c$ for some $p > 1$ and $\kappa \in (1,p]$. The Agent discounts the future through a map $k$, and we impose the following conditions on the maps $b$, $c$ and $k$.

Assumption $(\mathrm{H}_{\ell,m,\bar m})$. There exists $0 < \underline{\kappa} < \kappa$ such that, for any $(t,x,a,n)$: (i)-(ii) the coefficients satisfy growth bounds of the form $\le \kappa\,(1 + |a|^{\ell-1})$; (iii) the discount factor $k$ is uniformly bounded by $\kappa$.
We finally present the definition of the admissible efforts of the Agent.

Definition 2.2 (Admissible efforts). A control process $\alpha \in \mathcal{A}$ is said to be admissible if, for every $(\mathbb{P},\nu) \in \mathcal{N}_A$, the density process defined in (2.4) is an $(\mathbb{F},\mathbb{P})$-martingale (a schematic form is given below). We denote by $\mathcal{A}$ the set of admissible efforts.

Finally, we present the impact of the actions of the Agent on the outcome process. Consider an admissible effort $\alpha \in \mathcal{A}$ and a weak solution. Under Assumption $(\mathrm{H}_{\ell,m,\bar m})$, by Girsanov's theorem, for any $\alpha \in \mathcal{A}$ there is a measure $\mathbb{P}^{\alpha}$, equivalent to some $\mathbb{P} \in \mathcal{P}$, under which the output acquires the drift induced by $\alpha$, where $W^{\alpha}$ is a $\mathbb{P}^{\alpha}$-Brownian motion.

Admissible contracts
The Principal offers the Agent a terminal salary paid at the horizon $T$. Since the Principal can observe merely the outcome process $X$, a contract corresponds to an $\mathcal{F}_T$-measurable random variable $\xi$. The Agent benefits from the payments of the Principal through his utility function $U_A : \mathbb{R} \to \mathbb{R}$, which depends on his terminal remuneration and is a continuous, increasing and concave map. The Principal benefits from her wealth, penalized by the salary given to the Agent, through her utility function $U_P : \mathbb{R} \to \mathbb{R}$, which is a continuous, increasing and concave map. The outcome process is not necessarily monetary, so the Principal possesses a liquidation function $L : \mathbb{R} \to \mathbb{R}$, assumed to be continuous with linear growth. The following (classical) notion of admissibility for the contracts proposed by the Principal reflects the fact that we will later reduce the problem of the Agent to solving a 2BSDE. We denote by $\mathcal{C}$ the class of admissible contracts.

The problem of the Agent
For a given contract $\xi \in \mathcal{C}$ offered by the Principal, the utility of the Agent at time $t = 0$, if he performs the action $\alpha \in \mathcal{A}$, is given by his worst-case approach over the set $\mathcal{N}^{\alpha}_A$ of weak solutions to (2.1) associated with his beliefs (see the schematic formulation below). The problem of the Agent, which consists in finding the action maximizing his utility, is therefore the corresponding supremum over $\alpha \in \mathcal{A}$, denoted (2.6). We will denote by $\mathcal{A}^{\star}(\xi)$ the set of optimal $\alpha \in \mathcal{A}$ when $\xi$ is offered, and define accordingly the set of optimal weak solutions.

The problem of the Principal
Since the strategy of the Principal is to anticipate the response of the Agent to the offered contracts, she is restricted to offering contracts such that the Agent can optimally choose his actions. Moreover, the Agent accepts only contracts under which he obtains at least his reservation utility $R_0$. The set of admissible contracts is therefore restricted to a set $\Xi$. Notice that for any $\xi \in \Xi$, the set $\mathcal{A}^{\star}(\xi)$ is not necessarily reduced to a singleton. As is common in the literature, we will assume that when there is more than one optimal strategy for the Agent, he chooses one which is best for the Principal; we denote such a strategy by $\alpha^{\star}(x,\xi)$. Thus, the problem of the Principal is to find the contract which maximizes her worst-case utility (under her own beliefs), denoted (2.7).

Remark 2.2. For the sake of simplicity, we do not add any discount factor to the Principal's problem (2.7). A model dealing with a discount factor $k_P : [0,T]\times\Omega \to \mathbb{R}$ could easily be studied and does not add any difficulty, as soon as $k_P$ is sufficiently integrable, by modifying the HJBI equation (4.12) below.

Solving the Agent problem via 2BSDE
In this section we study the Agent's problem (2.6). We follow both the study made in Section 4.1 of [31], extending it to a more general framework, and [14], adding uncertainty on the volatility. We mention also that another approach, which does not use the theory of 2BSDEs, has been proposed in [44].
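The displays (2.4) and (2.6) are garbled in this extraction; the following is a schematic reconstruction of the standard weak-formulation objects. The exact composition of the drift with $\sigma$ in the density, and the normalization of the discount factor, are assumptions of the sketch.

```latex
\[
\frac{d\mathbb{P}^{\alpha}}{d\mathbb{P}}
 := \mathcal{E}\!\Big(\int_0^{\cdot} b\big(s, X_{\cdot}, \alpha_s, \nu_s\big)\cdot dW^{\mathbb{P}}_s\Big)_{T},
\qquad
U^A_0(\xi,\alpha)
 := \inf_{(\mathbb{P},\nu)\in\mathcal{N}^{\alpha}_A}
    \mathbb{E}^{\mathbb{P}^{\alpha}}\!\Big[\mathcal{K}_T\, U_A(\xi)
    - \int_0^T \mathcal{K}_s\, c\big(s, X_{\cdot}, \alpha_s\big)\, ds\Big],
\]
```

where $\mathcal{E}$ denotes the stochastic exponential and $\mathcal{K}_s := e^{-\int_0^s k(r, X_{\cdot}, \alpha_r, \nu_r)\,dr}$ the discount factor, so that the Agent's problem reads $V^A(\xi) := \sup_{\alpha\in\mathcal{A}} U^A_0(\xi,\alpha)$.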
Definition of the Hamiltonian
Define the function $F$ as the generator associated with the problem of the Agent (2.6), and define also, for every $(t,x)$, the corresponding Hamiltonian (see [9]). Notice that the infimum with respect to $n \in N$ in the Hamiltonian has been taken in two stages, with the introduction of the sets $V_t(x,\Sigma)$. We assume that the following assertion is enforced. We then state a fundamental lemma on the growth of any control $\alpha^{\star}$ which is a saddle point in (3.1); we refer to the proof of [20, Lemma 4.1], which fits our setting.

2BSDE representation of the Agent's problem
Consider the 2BSDE (3.2), and recall the notion of a solution to this 2BSDE, introduced in [40] and extended in [35]: a triplet $(Y,Z,K)$ satisfying (3.2), in which $K$ satisfies the minimality condition (3.3) (a schematic form of the equation is given below).

Remark 3.1. Similarly to [14], we use here the result of [32] for stochastic integrals, by considering the aggregated version of the non-decreasing process $K$.

From now on, we set the standing assumption (S) to be used in all the following results. We have the following result, which ensures that the 2BSDE (3.2) is well-posed; its proof is postponed to the Appendix. The next theorem is the main result of this section: it provides an equivalence between solving the Agent's problem (2.6) and the 2BSDE (3.2). Its proof is postponed to the Appendix and is similar to the proof of [14, Proposition 5.4], of which it is the extension to the worst-case volatility setting. The value function of the Agent is then given by the representation (3.4). To conclude the section, let us comment on the intuition behind this result and the limitations of our model.

Remark 3.2. If the volatility of the outcome process is fixed and the Agent controls only the drift, it is well known that his value function is the solution to a BSDE. The worst-case approach of the Agent makes his value function an infimum of BSDEs, and therefore the solution to a 2BSDE. This reasoning works because the Agent controls only the drift and not the volatility of the outcome. Indeed, with a controlled volatility coefficient $\sigma(t,x,\alpha,\nu)$, the worst-case approach of the Agent would induce a first 2BSDE, and the control $\alpha$ would induce a second 2BSDE on top of it. Currently, this kind of 2BSDE has not been studied in the literature.
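Since the display of (3.2) and the minimality condition (3.3) are lost in this extraction, here is a schematic form following the standard shape of 2BSDEs in [35]; the precise arguments of the generator and the filtration conventions are assumptions.

```latex
\[
Y_t = U_A(\xi) + \int_t^T F\big(s, X_{\cdot}, Y_s, Z_s, \widehat{\sigma}_s\big)\,ds
      - \int_t^T Z_s\cdot dX_s + K_T - K_t,
\qquad \mathcal{P}_A\text{-q.s.},
\]
```

with the non-decreasing process $K$ satisfying a minimality condition of the form

```latex
\[
K_t = \operatorname*{ess\,inf}_{\mathbb{P}'\in\mathcal{P}_A(t,\mathbb{P})}
      \mathbb{E}^{\mathbb{P}'}\big[K_T \mid \mathcal{F}^{+}_t\big],
\qquad 0 \le t \le T,\ \ \mathbb{P}\text{-a.s. for every } \mathbb{P}\in\mathcal{P}_A .
\]
```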
In the following, we suppose that (S) and the next assumption are enforced.

Assumption 4.1 (Markovian case). All the objects considered are Markovian, i.e. they depend on $(t, X_{\cdot})$ only through $(t, X_t)$.

Remark 4.1. Assumption 4.1 may be removed if we deal with the theory of path-dependent PDEs (see among others [16, 37]). Here, we assume that it holds for the sake of simplicity and to focus on the procedure to solve the Principal's problem.

The problem and a remark on the set of admissible contracts
The solution to the problem of the Agent provides a very particular form for $U_A(\xi)$. More precisely, let $(Y,Z,K)$ be the solution of the 2BSDE (3.2); then the process $K$ satisfies the minimality condition (3.3) and the decomposition (4.1) holds. Let us define the set $\mathcal{Y}_0$ of $\mathcal{F}_0$-measurable random variables. Then, for any contract $\xi \in \Xi$, there exists a triplet for which (3.3) and (4.1) hold. Since such a triplet is unique, we can establish a one-to-one correspondence between the set of admissible contracts $\Xi$ and an appropriate subset of $\mathcal{Y}_0 \times \cdots$. However, as explained in [31], the decomposition (4.1) only holds $\mathcal{P}_A$-quasi-surely, and we have to take this fact into account in order to provide a suitable characterization of the set of admissible contracts by means of this formula. The definition is independent of the probability $\mathbb{P}$ because the stochastic integrals can be defined pathwise (see [14, Definition 3.2] and the paragraph which follows it). The Principal thus has to propose a contract of the form (4.1) under every probability measure in the space $\mathcal{P}_A$; outside of the support of this space, the Principal is completely free on the salary given to the Agent. We denote by $\mathcal{D}$ the set of $\mathcal{F}_T$-measurable random variables $\xi$ satisfying the characterization (4.3). The integrability conditions imposed on $Z$, $K$ and $\xi$ ensure that $\mathcal{D} \subset \Xi$. In fact, from the reasoning given in the paragraphs above, $\mathcal{D}$ coincides with $\Xi$, and (4.3) corresponds to a characterization of the set of admissible contracts. Therefore, the problem of the Principal (2.7) becomes (4.4), with a slight abuse of notation.

Degeneracies for disjoint beliefs
Similarly to the study made in [31, Section 4.3.1], if the beliefs of the Agent and the Principal are disjoint, we face a pathological case caused by the fact that the Agent and the Principal do not, in a sense, live in the same world. Indeed, if $\mathcal{P}_A \cap \mathcal{P}_P = \emptyset$, we have the degeneracy stated in the following proposition; in its proof, we conclude by letting $M \to \infty$, since the other inequality is trivial.

Interpretation. This result is the same as in [31, Proposition 4.2]. Since the Agent does not see the random variables defined outside of his set of beliefs $\mathcal{P}_A$, the Principal is completely free on the design of the contract on $\mathcal{P}_P$. Thus, the Principal can offer a contract which satisfies the reservation utility constraint on $\mathcal{P}_A$ and which attains asymptotically her maximal utility on $\mathcal{P}_P$. By doing this, the Principal cancels all her risk. This situation is not realistic, since a Principal should not hire an Agent with a completely different view of the market behaviour.

The Principal's problem with common beliefs. We now turn to a more realistic situation and study the problem when $\mathcal{P}_A \cap \mathcal{P}_P \neq \emptyset$.
In this case, as shown in [31, Proposition 4.3], (4.4) becomes (4.6), with the corresponding abuse of notation.

A natural restriction to piece-wise constant controls
As explained in [38], and then in [13, 14], the problem (4.7) coincides with the weak formulation of a (non-standard) zero-sum stochastic differential game with the following characteristics (a schematic version of the state dynamics is sketched below):
• control variables: $(Z,K) \in \mathcal{K}_{Y_0}$ for the Principal and $(\mathbb{P},\nu)$ for the Nature;
• state variables: the output process $X^{x,\Theta}$ and the continuation utility of the Agent $Y^{y,\Theta}$, with dynamics given, for any $t \le s \le T$, $\mathbb{P}$-a.s., by the system (4.8), with $\Theta \equiv (Z,K,\nu)$.

We now fix an arbitrary $Y_0 \in \mathcal{Y}_0$ and turn to the procedure to solve (4.7). The main issue is that the class of controls $\mathcal{K}_{Y_0}$ is too general since, as explained in [31, Section 4.3.2] and [13, 14], the non-decreasing process $K$ impacts the dynamics of $Y^{Y_0,Z,K}$ only through the minimality condition (3.3), and more information on this process is required to solve the problem. As emphasized in [39, Remark 3.4], we need to consider piecewise controls and restrict our investigation to elementary strategies. This issue is intrinsically linked to the fact that we are looking for a zero-sum game between the Principal and the Nature. We thus consider a restricted set of piece-wise constant controls included in $\mathcal{K}_{Y_0}$. We say that an $\mathbb{R}^d\times\mathbb{R}_+$-valued process $(Z,K)$ (resp. $\nu \in \mathcal{N}$) is an elementary control starting at $\tau$ for the Principal (resp. the Nature) if there exists a finite sequence $(\tau_i)_{0\le i\le n}$ of $\mathcal{F}_t$-adapted stopping times between which the control is constant. We denote by $\mathcal{U}(t,\tau)$ (resp. $\mathcal{V}(t,\tau)$) the set of elementary controls of the Principal (resp. the Nature); if $\tau = t = 0$, we just write $\mathcal{U}$ (resp. $\mathcal{V}$). We thus consider the restricted problem (4.9), with the corresponding abuse of notation. The literature, and more particularly [39, 34, 41], leads us to expect that $U^P_0 = V^P_0$ in particular cases, in view of the related papers dealing with this kind of problem. In other words, in some cases the value of the general problem (4.6) coincides with its restriction (4.9) to piecewise-defined controls. We will thus focus on the restricted problem (4.9) in the following, which we solve completely.

The intuitive HJBI equation
Assumption (PPD) in [31] seems too complicated to prove for a general class of processes $K$, since it requires a deep study of the measurability of the dynamic version of the value function associated with the problem (4.9). To avoid this difficulty, linked directly to the ambiguity on the volatility of the model, we will deal with the so-called Perron's method, following the same ideas as in [1, 3, 2, 39]. Recall that if one aims at associating (4.7) with an HJBI equation, as usual in stochastic control theory, the problem seems to be ill-posed, and we need more information on the process $K$. We thus expect to have an optimal contract $\xi$ for which the process $K$ is absolutely continuous. More exactly, following [14, Remark 5.1], we expect to get an optimal contract in the subspace of contracts for which there exists a $\mathbb{G}^{\mathcal{N}_A}$-predictable process $\Gamma$ with values in $\mathcal{M}_{d,d}(\mathbb{R})$ such that the decomposition (4.11) holds. This intuition leads us to set the Hamiltonian function $G$, defined through a function $g(t,x,y,p,\bar p,q,\bar q,r,z,\gamma,n)$ involving the terms $\mathrm{Tr}[\sigma(t,x,n)\sigma(t,x,n)^{\top}\gamma] - H(t,x,y,z,\gamma)$, $\bar p\, b\big(t,x,\alpha^{\star}(t,x,y,z,\widehat{\sigma}_t),n\big)\cdot z$, and traces of the form $\mathrm{Tr}[z^{\top}\sigma(t,x,n)\sigma(t,x,n)^{\top} r]$ and $\mathrm{Tr}[z^{\top}\sigma(t,x,n)\sigma(t,x,n)^{\top} z]$.
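The displayed system (4.8) is lost in this extraction; the following is a schematic reconstruction, patterned on the analogous controlled system in [14]. The exact drift term, the discounting, and the placement of the processes $Z$ and $K$ are assumptions of the sketch.

```latex
\[
dX^{x,\Theta}_s = \sigma\big(s, X^{x,\Theta}_s, \nu_s\big)\, dW^{\mathbb{P}}_s,
\qquad
dY^{y,\Theta}_s = -F\big(s, X^{x,\Theta}_s, Y^{y,\Theta}_s, Z_s, \widehat{\sigma}_s\big)\, ds
                  + Z_s\cdot dX^{x,\Theta}_s + dK_s,
\]
```

with $X^{x,\Theta}_t = x$, $Y^{y,\Theta}_t = y$ and $\Theta \equiv (Z,K,\nu)$; schematically, the Principal then evaluates her worst-case utility of the pair $(X^{x,\Theta}_T, Y^{y,\Theta}_T)$ over such controls.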
We can now set the HJBI equation, which is hopefully strongly connected to the problem of the Principal (4.10):
$$-\partial_t u(t,x,y) - G\big(t, x, y, \nabla_x u, \partial_y u, \Delta_{xx} u, \partial_{yy} u, \nabla_{xy} u\big) = 0, \quad (t,x,y). \tag{4.12}$$

Reduction to bounded controls
In this section, we study a fundamental property of the Hamiltonian $G$ appearing in the HJBI equation (4.12), in order to simplify the study of the stochastic control problem (4.10). The main difficulty is that the set of controls is unbounded, which can be quite hard to handle in practice. We thus set an assumption ensuring that the supremum over $(z,\gamma)$ in the definition of $G$ can be reduced to a supremum over a compact set. We show that this assumption holds both in a risk-neutral setting and in the one-dimensional case, i.e. assuming that $d = n = 1$, with additional growth conditions on the data $b$ and $\sigma$. The reduction of the set of controls is fundamental in the proof of Theorem 4.1 below.

Example 1: Risk-neutral Principal and risk-neutral Agent without discount factor. Assume that both the Principal and the Agent are risk-neutral, i.e. $U_P(x) = U_A(x) = x$, and moreover that $k \equiv 0$. In this setting, the worst-case measures of both parties coincide, and the problem reduces to a classical drift-control model in the Principal-Agent literature. In fact, the continuation utilities at time $t$ are functions of $x \in \mathbb{R}^d$ and $y \in \mathbb{R}$ respectively. Since $k \equiv 0$, it is clear from the system (4.8) that $u_t$ is linear with respect to the variable $y$. Roughly speaking, the Hamiltonian $G$ in the HJBI equation (4.12) is evaluated at $\bar p = -1$ and $\bar q = r = 0$, and it is then dominated by $H(t,x,y,p,q)$. It follows that the optimal controls are $z^{\star} = p$ and $\gamma^{\star} = q$, and that the infimum is attained for both $H$ and $G$ at the same optimal $n$, denoted by $\nu^{\star}$. The continuous radius in this case is given by $R(t,x,y,p,q) = \max\{|p|, |q|\}$.

Remark 4.2. In the risk-neutral problem, an important consequence of the Principal and the Agent having the same worst-case measures is that the first-best value can be attained. This is a direct consequence, and an extension, of the well-known result in the classical drift-control problem.

Example 2: One-dimensional case. Assume that $d = n = 1$. The following assumption on the relative growth of the drift $b$, the volatility $\sigma$ and the discount factor $k$ of the output ensures the existence of the continuous radius: 2. for every $(t,x,a) \in [0,T]\times\mathbb{R}\times A$ and for every global maximum $\overline{\nu}$ of $\sigma(t,x,\cdot)$, the limit of a quotient with numerator built from $b(t,x,a,\nu) - k(t,x,a,\nu)$ and denominator $\sigma(t,x,n) - \sigma(t,x,\overline{\nu})$ is finite. The proof of the lemma below is postponed to Appendix C.

Lemma 4.1. Let Assumption 4.3 be satisfied. In addition, assume that $n \mapsto \sigma(t,x,n)$ has a unique minimizer for every $(t,x)$ and that $a \mapsto F(t,x,y,z,a,n)$ has a unique maximizer for every $(t,x,y,z,n)$. Then, for any $\bar q < 0$ and every $(t,x,y,u,p,\bar p,q,r) \in [0,T]\times\mathbb{R}^7$, Assumption 4.2 holds.
Perron's method to solve the Principal's problem
We now focus on a deep study of the PDE (4.12). We assume that $b$ and $\sigma$ are continuous functions and that Assumption 4.2 holds. In this section we drop the assumptions made in [31], and we prove a verification result for a non-smooth value function, following the stochastic Perron's method introduced by Bayraktar and Sîrbu [1, 3, 2, 39]. More precisely, we show that the value function of the Principal associated with the problem (4.10) is a viscosity solution to the HJBI equation (4.12). The approach we follow avoids proving (or assuming) a dynamic programming principle and only relies on comparison results; moreover, the dynamic programming principle is a consequence of the method used. We now adapt the definition of stochastic semi-solutions for stochastic differential games [39] to our framework under the weak formulation; in particular, $\tau \in C([t,T], \mathbb{R}^{d+1})$ is a stopping rule starting at $t$ if it is a stopping time with respect to $\mathbb{B}^t$. We denote by $\mathcal{V}^-$ the set of stochastic sub-solutions of (4.12) and by $\mathcal{V}^+$ the set of stochastic super-solutions of (4.12). To apply Perron's method we need the following assumption, ensuring the existence of stochastic semi-solutions to the HJBI equation. As explained in [1, 3], the set $\mathcal{V}^+$ is trivially non-empty if the function $U_P$ is bounded from above, whereas $\mathcal{V}^-$ is non-empty if $U_P$ is bounded from below. Now we follow the stochastic Perron's method proposed in [39]. Let us define $v^- := \sup_{w\in\mathcal{V}^-} w$ and $v^+ := \inf_{w\in\mathcal{V}^+} w$, and notice from Definition 4.3 that $v^- \le V^P_0 \le v^+$ (4.15). We thus get the main theorem of this section, and we refer to Appendix D for the proof.

4.4.5 On comparison results in the one-dimensional case for bounded diffusions with quadratic cost. In general, it seems hard to get a comparison result for the HJBI equation (4.12) in a very general model, and we are convinced that only a case-by-case approach can be considered. In this section we focus on the one-dimensional case, and we illustrate why we can expect a comparison result for the HJBI equation (4.12). Notice that the vertex property, together with the definition of $A$, yields a quadratic bound on the generator. Inspired by [36, Section 4], we introduce purely quadratic 2BSDEs, for $\varsigma \in \{-1,1\}$. Since $U_A$ is bounded, it follows from [36, Proposition 4.1] that these 2BSDEs admit a unique solution and that there exists a positive constant $\kappa_Y > 0$ such that $|Y^{\varsigma}_t|$ is uniformly bounded by $\kappa_Y$ for all $t \in [0,T]$. Hence, we deduce from a comparison principle for 2BSDEs with quadratic growth (see for instance [36, Proposition 3.1]) that
$$|Y_t| \le \max_{\varsigma\in\{-1,1\}} |Y^{\varsigma}_t| \le \kappa_Y, \quad \forall t \in [0,T], \ \mathbb{P}\text{-a.s.}$$
Thus, the continuation utility of the Agent, being a state variable of the problem of the Principal, is bounded, so we can restrict the domain of $y$ in (4.12) to a bounded set $O_Y$. We have thus shown that we can restrict the domain of the HJBI equation (4.12) to a bounded domain $O_X \times O_Y$. To obtain a comparison principle in the sense of Theorem 4.1, we refer to the proof of Lemma 4.3 in Sîrbu [39]. Indeed, noticing that the required conditions on the parameters are evaluated at the test functions (see Step 3 of the proof of Lemma 4.3 in [39]), the continuity of the radius $R$ in Lemma 4.1 ensures that we can reproduce the proof by choosing a penalisation function of the form $\varphi(t,x,y) = e^{-\lambda t}(1 + |x| + \psi(|y|))$, where $\psi$ is concave, twice continuously differentiable and positive on $O_Y$. Therefore, a comparison theorem for the HJBI equation (4.12) can be obtained, and the last part of Theorem 4.1 holds.

Conclusion
In this work we provide a comprehensive general methodology for Principal-Agent problems with volatility uncertainty and a worst-case approach from both sides, in a general framework. We characterize the value function of the Agent as the solution to a second-order BSDE. Concerning the problem of the Principal, we rewrite it as a non-standard stochastic differential game that we solve using Perron's method, inspired by the work of Sîrbu [39].
To provide a path for future research, we would like to point out that [31] solved a particular non-learning model explicitly. Although that assumption allows one to obtain nice closed-form optimal contracts, it is clearly not realistic that the ambiguity set is fixed. In the present paper, we assume that the ambiguity set can evolve over time, but we do not specify how it evolves. An interesting extension of this work would be to add an adaptive method to the problem, inspired by the recent paper [6], to update the ambiguity set. Indeed, we are convinced that a learning procedure would lead to sharper estimates of the unknown volatility process and that the ambiguity may become negligible after some time.

A Functional spaces
We finally introduce the spaces used in this paper, following [35]. Let $t \in [0,T]$, $x \in \Omega$, and let $(\mathcal{P}(t,x))_{(t,x)\in[0,T]\times\Omega}$ be a family of sets of probability measures on $(\Omega, \mathcal{F}_T)$. In this section, we denote by $\mathbb{X} := (\mathcal{X}_s)_{s\in[0,T]}$ a generic filtration on $(\Omega, \mathcal{F}_T)$. For any $\mathcal{X}_T$-measurable real-valued random variable $\xi$ such that $\sup_{\mathbb{P}\in\mathcal{P}(t,x)} \mathbb{E}^{\mathbb{P}}[|\xi|] < +\infty$, we define the associated conditional quantities for any $s \in [t,T]$. Let $p \ge 1$, $\mathbb{P} \in \mathcal{P}(t,x)$, and let $\mathbb{X}^{\mathbb{P}}$ be the usual $\mathbb{P}$-augmented filtration associated with $\mathbb{X}$.
• For $\kappa \in [1,p]$, $\mathbb{L}^{p,\kappa}_{t,x}(\mathbb{X},\mathbb{P})$ denotes the space of $\mathcal{X}_T$-measurable $\mathbb{R}$-valued random variables $\xi$ with finite associated norm.
• $\mathbb{H}^{p}_{t,x}(\mathbb{X},\mathbb{P})$ denotes the space of $\mathbb{X}$-predictable $\mathbb{R}^d$-valued processes $Z$ with finite associated norm, and $\mathbb{H}^{p}_{t,x}(\mathbb{X},\mathcal{P})$ its analogue under the family $\mathcal{P}$.
• $\mathbb{S}^{p}_{t,x}(\mathbb{X},\mathbb{P})$ denotes the space of $\mathbb{X}$-progressively measurable $\mathbb{R}$-valued processes $Y$ with finite associated norm, and $\mathbb{S}^{p}_{t,x}(\mathbb{X},\mathcal{P})$ its analogue under the family $\mathcal{P}$.
• $\mathbb{K}^{p}_{t,x}(\mathbb{X},\mathbb{P})$ denotes the space of $\mathbb{X}$-optional $\mathbb{R}$-valued processes $K$ with $\mathbb{P}$-a.s. càdlàg and non-decreasing paths on $[t,T]$, with $K_t = 0$, $\mathbb{P}$-a.s., and finite associated norm. We denote by $\mathbb{K}^{p}_{t,x}((\mathbb{X}^{\mathbb{P}})_{\mathbb{P}\in\mathcal{P}(t,x)})$ the set of families of processes $(K^{\mathbb{P}})_{\mathbb{P}\in\mathcal{P}(t,x)}$ such that for any $\mathbb{P} \in \mathcal{P}(t,x)$, $K^{\mathbb{P}} \in \mathbb{K}^{p}_{t,x}(\mathbb{X}^{\mathbb{P}},\mathbb{P})$ and $\sup_{\mathbb{P}\in\mathcal{P}(t,x)} \|K^{\mathbb{P}}\|_{\mathbb{K}^{p}_{t,x}(\mathbb{P})} < +\infty$.
• $\mathbb{M}^{p}_{t,x}(\mathbb{X},\mathbb{P})$ denotes the space of $\mathbb{X}$-optional $\mathbb{R}$-valued martingales $M$ with $\mathbb{P}$-a.s. càdlàg paths on $[t,T]$, with $M_t = 0$, $\mathbb{P}$-a.s., and finite associated norm. We denote by $\mathbb{M}^{p}_{t,x}((\mathbb{X}^{\mathbb{P}})_{\mathbb{P}\in\mathcal{P}(t,x)})$ the corresponding set of families of processes $(M^{\mathbb{P}})_{\mathbb{P}\in\mathcal{P}(t,x)}$.
When $t = 0$, we simplify the previous notations by omitting the dependence on $x$.

B Proofs for the Agent's problem
Proof of Lemma 3.2. Since $\frac{\ell+m}{m+1-\ell} \le 2$, we have from Lemma 3.1 that the 2BSDE (3.2) has quadratic growth with respect to $z$ and fits the framework of [36]. In view of Remark 4.2 in [35], we aim at applying Theorem 4.1 in [35] by slightly changing its assumptions. More precisely, we replace (i) of Assumption 2.1 in [35] by Assumption 2.1 in [36], except part (iii). Condition (iv) in [36] is a consequence of Lemma 3.1. Conditions (v)-(vi) in [36] hold in our setting because $k$ is bounded. Therefore, Assumption 2.1 in [36] is satisfied. Finally, we turn to parts (ii)-(v) of Assumption 2.1 in [35]. The terminal condition $U_A(\xi)$ belongs to $\mathbb{L}^{p,\kappa}_{0,x}(\mathcal{P}_A)$ by definition of the admissible contracts, and the conditions imposed on $c$ in $(\mathrm{H}_{\ell,m,\bar m})$ ensure that (ii) holds. Parts (iii), (iv) and (v) correspond exactly to our Assumption 2.1.

Proof of Theorem 3.1. We first prove that (3.4) holds, with a characterization of the optimal effort of the Agent as a maximizer in the 2BSDE (3.2). The proof is divided into 4 steps.
Using comparison theorems for 2BSDEs (which are inherited from the classical comparison results for BSDEs), we follow the idea of Theorem 4.2 in [35] to prove that, for every (α, ν) ∈ A × V(σ²), the solution of the 2BSDE (B.1) satisfies the representation (B.5). First, notice that since K^{α,ν} is non-decreasing, we have Y^{α,ν}_0 ≥ Y^{P′,α,ν}_0, P-a.s., for every P ∈ P_A and P′ ∈ P_A. For the reverse inequality, we compute, for every P ∈ P_A, the difference Y^{P,α,ν}_0 − Y^{α,ν}_0. Using a linearization (see for instance [17]) together with Assumption (H_{ℓ,m,m}) (iii), and since K^{α,ν} satisfies the minimality condition (3.3), we deduce that Y^{P,α,ν}_0 − Y^{α,ν}_0 ≥ 0, P-a.s., for every P ∈ P_A, and (B.5) holds. The linearized equation can be solved explicitly via a change of measure, where the measure P^{α,ν} is equivalent to P.

Step 4: We have from the previous steps that the measure P^{α,ν} ∈ P^α_A, and for every measure P ∈ P_A the corresponding inequality holds P-a.s. By arguments similar to the ones used in the proofs of Lemma 3.5 and Theorem 5.2 of [35], it follows that (3.4) holds. We now turn to the second part of the theorem, with the characterization of an optimal triplet (α, P, ν) for the optimization problem (3.4). From the proof of the first part, it is clear that a control (α*, P*, ν*) is optimal if and only if it attains all the essential suprema and infima above.

Lemma C.1. 1. Let x̄ be a minimizer of σ which maximizes q among all the minimizers of σ. If the corresponding limit is finite, then there exists M > 0 such that f_γ attains its minimum over [c, d] at x̄ for every γ > M. 2. Let x̄ be a maximizer of σ which minimizes q among all the maximizers of σ. If the corresponding limit is finite, then there exists m < 0 such that f_γ attains its minimum over [c, d] at x̄ for every γ < m.

Proof. 1. We suppose without loss of generality that σ attains its minimum over [c, d] at a unique point x̄. Define g : [c, d] → R as the corresponding ratio; g is continuous on [c, d], and therefore there exists M_g bounding it. The claim then follows for every γ > M, with M determined by M_g. 2. We suppose without loss of generality that σ attains its maximum over [c, d] at a unique point. Defining G analogously, G is continuous on [c, d] and therefore bounded by some M_G, and the claim follows for every γ < m.

Proof of Lemma 4.1. (a) If q̄ < 0, the boundedness of b and σ, together with Lemma 3.1, makes g coercive in z, and the supremum in this variable can be restricted to a compact set. The property in γ is independent of the sign of q̄ and is presented next. Recall the Hamiltonian H. It follows from Lemma C.1 that there exist m, M ∈ R such that if γ > M then the infimum in H is attained at the minimizer ν of σ(t, x, ·), and if γ < m then the infimum in H is attained at the maximizer ν̄ of σ(t, x, ·). Suppose now that p̄ > 0. It follows again from Lemma C.1 that, for γ large enough, the infimum in G is attained at the minimizer ν, while for γ negative enough, the infimum in G is attained at the maximizer ν̄. This means that there exists some R := R(t, x, y, u, p, p̄, q, q̄, r) such that G and H attain their infima at the same value ν ∈ N for |γ| > R. Therefore G does not depend on γ there, and the supremum over γ can be restricted to the set |γ| ≤ R. Suppose finally that p̄ < 0. Then, for γ large enough, the infimum in G is attained at the maximizer ν̄, while for γ small enough, the infimum in G is attained at the minimizer ν. In both cases, the dependence of G on γ is given by the term p̄|γ|(σ(t, x, ν̄)² − σ(t, x, ν)²), so it follows that g is coercive in γ. (b) R is continuous as a consequence of the maximum theorem [5].
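For the reader's convenience, the version of the maximum theorem invoked here can be stated as follows (a standard formulation of Berge's theorem, see [5]; notation adapted to our setting): let $f : X \times \Theta \to \mathbb{R}$ be continuous and let $C : \Theta \twoheadrightarrow X$ be a continuous correspondence with non-empty compact values; then

$$\theta \longmapsto \sup_{x \in C(\theta)} f(x, \theta) \ \text{is continuous}, \qquad \theta \longmapsto \operatorname*{arg\,max}_{x \in C(\theta)} f(x, \theta) \ \text{is upper hemicontinuous with non-empty compact values.}$$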
Indeed, the minimizer and maximizer correspondences are upper hemicontinuous and single-valued; they are therefore continuous functions, and R is continuous.

D Proof of Theorem 4.1

The following lemma is used in the proof of Theorem 4.1. Its proof, being a path-wise approximation of deterministic Lebesgue integrals, is omitted.

Lemma D.1. Define the process K(Z, Γ) accordingly. Then, for any bounded map ψ, there exists a sequence (k_p)_p of elementary controls such that the required approximation holds for any ε > 0 and any p large enough.

Proof of Theorem 4.1. The proof follows the ideas of [39]. Intuitively, V_0 has to be greater than v^-, since v^- is, roughly speaking, the value associated with the HJBI equation for the problem of the Principal when K has the particular decomposition (4.11). In other words, the value of the unrestricted problem for the Principal has to be a super-solution of such an HJBI equation.

Step 1. v^- is a viscosity super-solution of (4.12). We prove that v^- is a viscosity super-solution of (4.12) by contradiction. 2. The viscosity super-solution property at time T. We now aim at proving the super-solution property of v^- at the terminal time T. This proof follows the same lines as Step 3 of the proof of Theorem 3.1 in [2], or the proof of Theorem 3.5, 1.2 in [39]. We assume by contradiction that the property fails at some point (x_0, y_0). Since U_P is continuous, there exists ε > 0 with the required separation. Thus, using exactly the same Dini-type arguments as in [39,3], there exists n_0 large enough such that for some w_{n_0} ∈ V^- we have v^-(T, x_0, y_0) + ε < ε²/(4η) + inf_{(t,x,y) ∈ T_ε} w_{n_0}(t, x, y).

Step 2. v^+ is a viscosity sub-solution of (4.12). We now prove that v^+ is a viscosity sub-solution of (4.12) by contradiction. b. Building the elementary strategy and Property (ii+). We consider the following strategy, denoted by ν̂:
* if φ − δ < w_{n_0} at time τ, we choose the strategy (P̂, ν̂(Z, 0)), where P̂ ∈ P_A is such that the minimality condition (ii) in Theorem 3.1 holds with control K;
* otherwise we follow the elementary strategy (P̂, ν̂_1).

General conclusion. In Step 1 (resp. Step 2) we have proved that v^- is a viscosity super-solution (resp. v^+ is a viscosity sub-solution) of the HJBI equation (4.12). If a comparison theorem in the viscosity sense holds, then we deduce from (4.15) that v^- = V_0 = v^+, which proves the theorem.
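Schematically, and assuming that (4.15) denotes the envelope inequalities v^- ≤ V_0 ≤ v^+ obtained from Definition 4.3, the concluding argument reads

$$v^- \le V_0 \le v^+ \quad \text{(4.15)}, \qquad v^+ \le v^- \quad \text{(comparison)} \qquad \Longrightarrow \qquad v^- = V_0 = v^+,$$

identifying V_0 as the unique continuous viscosity solution of the HJBI equation (4.12).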
2018-03-23T19:27:55.000Z
2018-03-23T00:00:00.000
{ "year": 2019, "sha1": "16333c524500e2ce1cc2839154c7d51009595bd3", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1803.08951", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "16333c524500e2ce1cc2839154c7d51009595bd3", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
10588705
pes2o/s2orc
v3-fos-license
A Link No Longer Missing: New Evidence for the Cetotheriid Affinities of Caperea

The origins of the enigmatic pygmy right whale Caperea marginata, the only living member of its subfamily (Neobalaeninae), are an outstanding mystery of cetacean evolution. Its strikingly disparate morphology sets Caperea apart from all other whales, and has turned it into a wildcard taxon that holds the key to understanding modern baleen whale diversity. Morphological cladistics generally ally this species with right whales, whereas molecular analyses consistently cluster it with rorquals and grey whales (Balaenopteroidea). A recent study potentially resolved this conflict by proposing that Caperea belongs with the otherwise extinct Cetotheriidae, but has been strongly criticised on morphological grounds. Evidence from the neobalaenine fossil record could potentially give direct insights into morphological transitions, but is currently limited to just a single species: the Late Miocene Miocaperea pulchra, from Peru. We show that Miocaperea has a highly unusual morphology of the auditory region, resulting from a strengthening, presumably feeding-related, of the articulation of the hyoid apparatus with the skull. This distinctive arrangement is otherwise only found in the extinct Cetotheriidae, which makes Miocaperea a "missing link" that demonstrates the origin of pygmy right whales from cetotheriids, and confirms the latter's resurrection from the dead.

Introduction

Caperea marginata is the most enigmatic and disparate of all living baleen whales (Mysticeti) in terms of its morphology, behaviour and even sensory abilities [1][2][3][4]. The evolutionary origins of Caperea remain highly controversial, which, given the status of this species as the sole representative of one of the four extant mysticete (sub)families, represents the greatest obstacle to understanding the true scope and structure of modern baleen whale biodiversity. Morphological cladistics generally ally Caperea with right whales [5][6][7], whereas molecular analyses consistently cluster it with rorquals and grey whales (Balaenopteroidea) [8][9][10]. A recent study [4,10] potentially resolved this conflict by proposing that pygmy right whales belong with the otherwise extinct Cetotheriidae, but has been criticised for allegedly not considering ontogenetic change, and for making wrong assumptions about character homology [6,11,12]. Fossil neobalaenines could potentially give direct insights into morphological transitions, but the only available material, Miocaperea, is phenetically close to Caperea [13] and, consequently, has so far remained largely uninformative in this regard. Here, we re-examine Miocaperea with a particular focus on the phylogenetically informative ear region, and show that it shares with cetotheriids a previously overlooked, yet distinctive and highly unusual auditory morphology to the exclusion of all other mysticetes (including Caperea).

Material and Methods

The re-description of the auditory region of Miocaperea pulchra is based on the holotype and only known specimen, permanently housed at the Staatliches Museum für Naturkunde Stuttgart, Germany (specimen number 46978). No permits were required for the described study. The phylogenetic analysis uses the total evidence matrix of Marx and Fordyce [10], with all subsequent additions and corrections [14,15].
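For reproducibility, the Bayesian run described below (MrBayes 3.2.6, no clock model, 20 million generations, first 25% discarded as burn-in, Char. 182 ordered) could be encoded in a command block of roughly the following form. This is a sketch assuming standard MrBayes 3.2 syntax, not the authors' actual script; the NEXUS file holding the matrix is implied.

    begin mrbayes;
        ctype ordered: 182;                           [ Char. 182 treated as ordered ]
        mcmc ngen=20000000 nchains=4 samplefreq=1000; [ no clock model is invoked ]
        sump burninfrac=0.25;                         [ discard first 25% as burn-in ]
        sumt burninfrac=0.25;
    end;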
As a result of our new observations on Miocaperea, we amended our data by: (i) rewording and recoding characters 127 ("Lateral lamina of pterygoid"), 162 ("Anteromedial corner of pars cochlearis in ventral view"), 182 ("Facial sulcus on compound posterior process") and 183 ("Position of facial sulcus on compound posterior process in ventral view"); (ii) ordering Char. 182; and (iii) adjusting the scorings of chars 157 ("Position of lateral tuberosity") and 198 ("Dorsomedial corner of sigmoid process in anterior view"). The cladistic matrix including all new and amended scorings is available as Supplementary Material (S1 File), and from MorphoBank (http://www.morphobank.org/), project 2331: the full matrix is stored in the "Documents" section. The analysis was carried out without any clock assumptions in MrBayes 3.2.6 [16], on the Cyberinfrastructure for Phylogenetic Research (CIPRES) Science Gateway [17] (20 million generations, first 25% of generations discarded as burn-in). To determine the distribution of an anteriorly expanded paroccipital concavity, we traced the phylogenetic history of Char. 182 using both parsimony-based and likelihood-based ancestral state reconstruction methods in Mesquite 3.04 [18]. Because we are primarily interested in the presence of the paroccipital cavity and attendant posteroventral flange on the posterior process, we treated states 1 (posteroventral flange present) and 2 (posteroventral flange present and markedly enlarged) as the same for the purpose of state reconstruction.

Results and Discussion

Re-description of the auditory region of Miocaperea pulchra

Both the left and right periotics of SMNS 46978 are preserved in situ, but the right is heavily eroded and broken. The following description will therefore be based on the left periotic (Fig 1), unless stated otherwise. Unlike in Caperea, the anterior process is firmly attached to the body of the periotic. In ventral view, the anterior process is relatively wide transversely and about as long anteroposteriorly as the pars cochlearis. There is a well-developed lateral tuberosity, which extends anteriorly just beyond the level of the anterior pedicle of the tympanic bulla. The shape of the lateral tuberosity is slightly obscured, but it appears to be blunt and relatively robust. In medial view, the dorsal portion of the anterior process is anteroposteriorly narrow and notably rises towards the cranial hiatus. Anteromedially, the anterior process is broadly underlain by the lateral lamina of the pterygoid. Because of this, the shape of the anterior border of the periotic can only be surmised, but appears to be markedly concave. Anterior to the pars cochlearis, the medial surface of the anterior process is somewhat concave, presumably marking the origin of the tensor tympani muscle; however, there is no associated ridge or shelf. The mallear fossa is mostly obscured by sediment, but its anteriormost portion at least appears to be poorly defined and shallow. In ventral view, the pars cochlearis is bulbous, with a slightly angular, dorsally displaced anteromedial corner. Unlike in balaenopterids, the rim of the fenestra rotunda is flush with the posterior border of the pars cochlearis. The caudal tympanic process is short and oriented posteriorly. The fenestra ovalis, distal opening of the facial canal and fossa for the stapedius muscle are mostly or entirely obscured by sediment and/or the auditory ossicles (see below).
In medial view, the anteriormost portion of the pars cochlearis is confluent with the dorsal border of the anterior process and thus somewhat rises towards the cranial hiatus. The rim of the internal acoustic meatus consists of a series of spike-like, cranially oriented projections, thus giving the dorsal margin of the pars cochlearis a jagged appearance. Ventrally, these spikes are offset from the remainder of the pars cochlearis by a step-like promontorial groove. The fenestra rotunda is relatively large and extends from the ventral border of the pars cochlearis almost to the level of the promontorial groove. Dorsally, the rim of the fenestra rotunda is interrupted by two sulci rising towards the dorsal surface of the pars cochlearis. Unlike in grey whales, the fenestra rotunda is not confluent with the aperture of the cochlear aqueduct. The caudal tympanic process is a small, rounded and somewhat posterodorsally directed plate, and is confluent with a small shelf forming the lateral border of the fenestra rotunda. As far as can be told, the caudal tympanic process is clearly separated from the crista parotica. In ventral view, the distal end of the compound posterior process of the tympanoperiotic is markedly expanded both anteroposteriorly and dorsoventrally, and widely exposed on the lateral skull wall. The distalmost portions of both posterior processes are eroded. The broken base of the posterior pedicle of the bulla is robust and U-shaped, reflecting the internal excavation of the pedicle by the tympanic cavity. Laterally, the posterior pedicle is continuous with a transversely oriented anteroventral flange (sensu [14]). Posteriorly, this flange is bordered by a second, posteriorly oriented posteroventral flange (sensu [14]), which almost completely floors the facial sulcus. Together, the anteroventral and posteroventral flanges delimit a fossa on the ventral surface of the posterior process that is aligned with, and therefore forms part of, the paroccipital concavity. On the right, the paroccipital concavity has been largely obliterated by erosion, but the facial sulcus is preserved and ventrally floored, in the same manner and position as on the left (S1 Fig). Note that the paroccipital concavity was previously misidentified as the facial sulcus (p. 891 and Fig 13D in [19]). The posterior border of the paroccipital concavity is eroded, but the excavation of the exoccipital is clearly more pronounced than in Caperea. The external acoustic meatus has been largely obliterated by erosion. Contrary to what is stated in the original description [19], the holotype of Miocaperea pulchra preserves both stapes, both incudes, the left malleus and the dorsal portion of the sigmoid process of the left tympanic bulla. On both sides of the skull, the auditory ossicles are naturally articulated and virtually in situ (Figs 1 and 2). The head of the malleus is situated immediately beside the dorsomedial apex of the sigmoid process in exactly the position as seen in other mysticetes, without, however, being fused to it as in balaenopterids (Fig 2). The articular facets for the incus are obscured; nevertheless, judging from the shape of the head, the vertical facet appears to be considerably larger than the horizontal one. As in Caperea, the tubercule is transversely short and stocky. The anterior process of the malleus is missing, but the anteroventral side of the head shows a large excavation corresponding to the dorsalmost portion of the sulcus for the chorda tympani. 
The incus is robust, with a well-developed body and crus longum. The lenticular process is mostly hidden from view, but can be surmised to be relatively large, based on both the shape of the crus longum and the head of the stapes. The crus breve of the incus and most of the stapes remain covered by matrix.

Phylogenetic implications

We identified several new features of the auditory region that, contrary to the traditional morphological interpretation of relationships, ally Miocaperea with Caperea, cetotheriids and, in some cases, balaenopteroids, but not balaenids. Specifically, Miocaperea shares with (i) Caperea and the cetotheriid Herpetocetus the presence of an angled, medially projecting anteromedial corner of the pars cochlearis (Fig 1) [4]; with (ii) Caperea, Herpetocetus and certain (stem) balaenopteroids a lateral tuberosity of the periotic that extends anteriorly past the level of the anterior pedicle of the tympanic bulla [4]; with (iii) Caperea, the cetotheriids Herentalia, Metopocetus and Piscobalaena, and some balaenopteroids the extension of the lateral lamina of the pterygoid on to the anterior process of the periotic (Figs 1 and 3); and with (iv) cetotheriids, but not Caperea, the presence of an enlarged paroccipital concavity on both the anteroventral surface of the exoccipital and the posteroventral surface of the compound posterior process of the tympanoperiotic (hereafter, "posterior process"). The enlargement of the paroccipital concavity, previously misidentified as the facial sulcus [19], is particularly striking, and accompanied by the development of a posteroventral flange (sensu [14]) flooring the facial sulcus (Fig 1). The anterior expansion of the paroccipital concavity on to the posterior process is a highly distinctive feature, yet has been mentioned only twice in the cetacean literature, namely, for the cetotheriids Piscobalaena nana [20] and Metopocetus hunteri [14]. The same structure, including an attendant posteroventral flange, occurs in other species of Metopocetus (previously misidentified as either the external acoustic meatus [21] or the facial sulcus [22]), as well as Herentalia, "Cetotherium" megalophysum and "Metopocetus" vandelli (Figs 3 and 4). A less developed version occurs in Brandtocetus, Kurdalagonus and Herpetocetus. In the latter, a marked anterior shift of the facial sulcus has led to a corresponding reduction in the size of the posteroventral flange. Nevertheless, in some species (e.g. H. morrowi; UCMP 124950; SDNHM 34155) the flange is still developed well enough to close the facial sulcus in ventral view. No other extinct or extant mysticetes we examined show evidence of a posteroventral flange. An anteriorly extended paroccipital concavity does occur in some balaenopterids, e.g. certain specimens of Megaptera novaeangliae, but it is generally narrow and does not floor the facial sulcus (Fig 5). Similarly, the paroccipital concavity is enlarged in the extant grey whale (Eschrichtius robustus) [14] but is, in lieu of a posteroventral flange, roofed by an anterior extension of the ventral surface of the exoccipital (Fig 5). The distinctively enlarged paroccipital concavity and associated posteroventral flange represent a previously unrecognised, taxonomically restricted feature of cetotheriids and neobalaenines, later lost again in Caperea.
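The state-merging step used when tracing Char. 182 (see Material and Methods) can be illustrated with a minimal parsimony sketch. The topology and scorings below are hypothetical toy inputs chosen only to mirror the states discussed above; they are not the study's matrix or tree:

    # Toy Fitch parsimony pass for a single character, with states 1 (flange
    # present) and 2 (flange present and enlarged) merged into one state.
    def fitch_up(node, tips):
        """Return the Fitch state set for `node` in a nested-tuple tree."""
        if isinstance(node, str):                  # a tip: look up its scoring
            return {tips[node]}
        left, right = (fitch_up(child, tips) for child in node)
        common = left & right
        return common if common else left | right

    # Hypothetical scorings: 0 = concavity not expanded onto the posterior
    # process, 1/2 = posteroventral flange present (2 = markedly enlarged).
    raw = {"Caperea": 0, "Miocaperea": 2, "Herpetocetus": 1,
           "Piscobalaena": 2, "Balaenoptera": 0}
    merged = {taxon: min(state, 1) for taxon, state in raw.items()}

    # Hypothetical topology, nested as (outgroup, (ingroup ...)).
    tree = ("Balaenoptera", ("Piscobalaena", ("Herpetocetus",
            ("Miocaperea", "Caperea"))))
    print(fitch_up(tree, merged))                  # state set at the root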
The presence of this enlarged paroccipital concavity and posteroventral flange in Miocaperea pulchra makes this species a rare 'missing link' uniting features otherwise unequivocally associated with cetotheriids and Caperea, respectively, and thus strongly supports an evolutionary relationship between the two (see below). This interpretation is borne out by our phylogenetic analysis, which groups all cetotheriids and neobalaenines into a monophyletic clade and reconstructs the expansion of the concavity on to the posterior process as a shared feature (Fig 6).

Functional implications

Across Cetacea, the paroccipital concavity has long been interpreted as an osteological correlate of the posterior sinus and/or the secondary, ligamentous articulation of the stylohyal with the basicranium [14,23,24]. A posterior sinus is generally thought to be present in mysticetes [24], but its existence is poorly established and inferred primarily based on (i) its occurrence in odontocetes, where it occupies some or all of the paroccipital concavity; (ii) the consistent presence of the paroccipital concavity in all mysticetes; and (iii) a study by Beauregard [25], which described the presence of a small "posterior" sinus in Balaenoptera acutorostrata. As far as we can tell, all other referrals to a mysticete posterior sinus in the literature are ultimately based on these points, with little or no direct data to confirm the occurrence of this structure in the living species. In odontocetes, the posterior sinus emerges from the tympanic cavity via the elliptical foramen, which in turn separates the inner and outer posterior pedicles of the tympanic bulla [24]. Inner and outer pedicles are also present in archaic mysticetes, including all of the toothed species and eomysticetids, but this paired arrangement is absent in all extant taxa. Previous interpretations of this situation implied the loss of the outer posterior pedicle [5], which would leave the posterior sinus wrapped around the remaining pedicle to extend, as in odontocetes, into the paroccipital concavity. However, new fossil mysticetes from New Zealand (OU 22705, 22732) now show that the outer posterior pedicle was not lost (Fig 7). Instead, the elliptical foramen was gradually closed along the lineage leading to crown mysticetes, as shown by a transformation series ranging from taxa with a well-developed elliptical foramen (e.g. the eomysticetid Tokarahia kauaeroa, OU 22235), to taxa with a partially closed, circular foramen (OU 22732), to taxa with a single posterior pedicle that is deeply excavated anteriorly and extends far onto the dorsal surface of both the involucrum and the outer lip of the tympanic bulla (OU 22705). In modern mysticetes, the remnant of this excavation can still generally be seen on the inside of the posterior pedicle, where it appears as a relatively small, dorsally directed lobe of the tympanic cavity. In some specimens (e.g. OM VT3075, a juvenile Balaenoptera bonaerensis), the bony wall behind this lobe is extremely thin and translucent (Fig 7), which likely marks the ancestral position of the formerly open elliptical foramen. A second line of evidence for the closure of the elliptical foramen comes from the position of the tympanic sulcus, which marks the attachment of the tympanic membrane. In the bulla of archaic mysticetes with an elliptical foramen, the tympanic sulcus runs from the posterior surface of the sigmoid process on to the inside of the conical process, thence rising on to the inside of the outer posterior pedicle.
The sulcus in living mysticetes essentially follows the same course, and posteriorly rises up on the inside of the single remaining pedicle. We regard the path of the tympanic sulcus as a phylogenetically and functionally conservative indicator of the outer posterior pedicle. At the same time, however, the single pedicle of extant mysticetes closely resembles the inner posterior pedicle of more archaic species in both size and position, thus suggesting that the two pedicles fused, as opposed to one of them being lost. If the elliptical foramen in crown mysticetes is closed, then it follows that the posterior sinus, which ancestrally exited through it, must have either disappeared or dramatically changed its course. Given the gradual reduction of the internal excavation of the posterior pedicle, we propose that the posterior sinus is absent in crown mysticetes, and that the small sinus described by Beauregard [25] probably was a small diverticulum of the peribullary sinus. This appears to be confirmed by the recent dissection of a minke whale, Balaenoptera acutorostrata (USNM 593594), by REF, which did not reveal any evidence of a posterior sinus. In contrast to the posterior sinus, the association of the paroccipital concavity with the stylohyal is supported by the same recent dissection, which showed that the anteroventral flange of the posterior process houses a robust cartilaginous structure equivalent to either a portion of the tympanohyal or the connection between the tympanohyal and the stylohyal (Fig 8). We therefore propose that the size and/or shape of the paroccipital concavity may be related to feeding, which suggests a similar strategy of prey acquisition in cetotheriids and early neobalaenines. Previous studies independently argued for a suction-based feeding strategy in cetotheriids, based mainly on the morphology of the mandible and the craniomandibular joint [12,26]. No features of the cetotheriid hyoid apparatus [20,26] are correlated with suction feeding, but a strengthened articulation of the latter with the basicranium could plausibly have played a role. Support for this idea may come from Eschrichtius robustus, which is the only living mysticete with an enlarged paroccipital concavity and, concurrently, the only one known to use suction [27]. Whether Miocaperea itself was a suction feeder or simply retained an ancestral, suction-related morphology remains uncertain. Further features uniting neobalaenines and cetotheriids include, among others, keeled nasals (in Miocaperea, Caperea, Herpetocetus and Piscobalaena) and the presence of a distally expanded compound posterior process that is clearly exposed on the outer skull wall (in Miocaperea, Caperea and most cetotheriids). Other features previously suggested to ally neobalaenines with cetotheriids and balaenopterids cannot currently be assessed for Miocaperea, but include characters as diverse as the anteroposterior elongation of the scapula (especially in Caperea, Piscobalaena and Tranatocetus) [20,28], the loss of the first digit from the flipper (in all extant mysticetes except balaenids), the triangular shape of the coronoid process of the mandible (in Caperea, cetotheriids and both extinct and extant balaenopteroids) [29] and, potentially, details of the morphology of the tympanic bulla [4].
Nevertheless, the idea that neobalaenines and cetotheriids form part of a single clade has been strongly criticised by some recent studies [11,12], primarily on two grounds: perceived dissimilarities between Caperea and cetotheriids (especially Herpetocetus), thought to preclude a close relationship [12]; and proposed similarities between Caperea and balaenids (together forming the Balaenoidea of some studies), thought to outweigh any resemblances of neobalaenines with cetotheriids [11]. What, if anything, can Miocaperea reveal about the evolution of these features in the neobalaenine lineage?

Status of neobalaenines as cetotheriids

Perceived dissimilarities. Apparent dissimilarities cited in the literature include (i) the shape and orientation of the postglenoid process of the squamosal, described as twisted and vertically oriented in Herpetocetus, but transversely oriented and posteriorly reclined in Caperea [12]; (ii) the size and shape of the pterygoid hamulus, thought to be broadly triangular in Herpetocetus but small and almost indistinct in Caperea and balaenids [12]; (iii) the size of the pterygoid exposure on the ventral side of the skull, which is relatively small in cetotheriids, but large in Caperea [12]; (iv) the attachment of the anterior process of the periotic to the body of the periotic, which is strong in cetotheriids but tenuous in Caperea [12]; (v) the shape and location of the lateral tuberosity of the periotic, thought to be small, shelf-like and variably positioned in Herpetocetus, but massive and projecting far anteriorly in Caperea [12]; (vi) the shape of the anterior process of the periotic in medial view, described as polymorphic in Herpetocetus, but L-shaped in Caperea [12]; and (vii) the shape of the ascending process of the maxilla, described as short and parallel-sided in Caperea, Balaena and Balaenella, but not cetotheriids [11]. Below, each of these points is discussed in turn.

(i) The postglenoid process of Caperea is more transversely oriented in ventral view than that of Herpetocetus, but it is not perpendicular to the sagittal plane. Instead, adult individuals of Caperea consistently show a slight twisting of the postglenoid process in ventral view: clockwise on the left, anticlockwise on the right [4]. This twisting also occurs in somewhat more pronounced form in Miocaperea, where it is clearly evident despite substantial erosion of both postglenoid processes (Fig 9). Though still not as marked as in Herpetocetus, the orientation of the postglenoid process in Miocaperea is therefore consistent with a twisted ancestral postglenoid morphology. We agree that the articular surface in Caperea is more inclined than in Herpetocetus in lateral view, but note that the degree of inclination is relatively slight and exaggerated by the natural anterior slant of the Caperea skull (as judged from the orientation of the orbit) when resting on a horizontal surface. This is in stark contrast to balaenids, in which the articular surface is markedly more horizontal in lateral view. Erosion of the articular surface and uncertainty about the in vivo orientation of the skull in Miocaperea (owing to the apparent posterior orientation of the orbits in lateral view) currently hinder a confident assessment of the slope of the postglenoid process in this species.

(ii) Caperea and balaenids strikingly differ from most other mysticetes in the shape of the pterygoid hamulus.
Thus, instead of being finger-like, the hamulus of Caperea is developed as a small, triangular, somewhat hook-shaped projection (Fig 10); by contrast, that of balaenids is expanded laterally into a broad, robust horizontal blade [24]. Given these differences, we disagree that the condition of the hamulus in Caperea and balaenids represents a shared state, but note that Miocaperea does not preserve the hamuli, thus leaving open the question of character homology. It is interesting to note that Herpetocetus also shows a relatively unusual morphology of the hamulus, with the latter being triangular and somewhat more confluent with the remainder of the pterygoid than in other mysticetes [12]. This loss of "distinctiveness" could plausibly be interpreted as a first step towards a state similar to Caperea. (iii) Caperea is unusual in having an extremely large ventral exposure of the pterygoid, with the latter, uniquely among mysticetes, entirely surrounding the foramen pseudovale [19]. In this arrangement, Caperea differs from all other described mysticetes (including Miocaperea), in which the foramen pseudovale instead appears to be at least partially surrounded by the squamosal [19]. The extremely enlarged exposure of the pterygoid in Caperea therefore represents a phylogenetically uninformative autapomorphy. (iv) Like the ventral exposure of the pterygoid, the narrow connection between the anterior process and the body of the periotic in Caperea [12] is autapomorphic, and thus phylogenetically uninformative. Miocaperea instead shows the widespread plesiomorphic condition of a solidly attached anterior process (Fig 1). (v) The position of the lateral tuberosity in cetotheriids is variable, both inter- and intraspecifically, which has led to the phylogenetic value of this feature being questioned [12]. This is particularly true for Herpetocetus, in which the lateral tuberosity can occur posterolateral, lateral or anterolateral to the anterior pedicle of the tympanic bulla, depending on the specimen and species. Nevertheless, it appears that one of the extremes of this continuum, the plesiomorphic position of the tuberosity posterolateral to the anterior pedicle, only occurs in extremely juvenile individuals (e.g. SDNHM 38689), suggesting that change in position may be ontogenetic [12] (Fig 11). In Caperea, the lateral tuberosity is developed as a broad, relatively massive shelf extending all along the anterior process, well past the level of the anterior pedicle [4]. By contrast, Miocaperea preserves a more Herpetocetus-like state, in which the lateral tuberosity is located further posteriorly and approximately lateral to the anterior pedicle (Fig 1). The question of whether the shape of the lateral tuberosity is comparable between neobalaenines and Herpetocetus is more problematic. The lateral tuberosity of Herpetocetus is relatively variable in both size and shape, ranging from small and triangular (e.g. H. transatlanticus: USNM 182962) to proportionally large and rounded (e.g. H. bramblei: UCMP 82465; Herpetocetus sp.: NMNS PV19540). In all cases, however, the tuberosity is bent laterally, articulates with the adjacent squamosal and, in general, is oriented anterolaterally in ventral view, relative to the long axis of the anterior process (Fig 11). The lateral tuberosity of Caperea provides the closest match for that of Herpetocetus in also being oriented anterolaterally and in articulating with the squamosal [4], yet at the same time clearly differs in being considerably more massive.
Miocaperea currently offers little to clarify this situation: partly because the anterior face of the lateral tuberosity remains covered in matrix, and partly because the rim of the squamosal surrounding the anterior process is crushed. (vi) The shape of the anterior process is demonstrably variable among cetotheriids: two-bladed, or L-shaped, in Kurdalagonus mchedlidzei, Herpetocetus transatlanticus, H. bramblei and, possibly, Brandtocetus chongulek (Fig 11); and triangular or squared in Piscobalaena nana, H. morrowi and Metopocetus durinasus. In Caperea, the anterior process is also L-shaped [4]. In Miocaperea, the outline of the anterior process is partially obscured by the lateral lamina of the pterygoid; it appears, however, that the anterior border of the process, if not L-shaped, is at least concave (Fig 1). Such polymorphism can be a challenge to phylogenetics, but does not in itself invalidate a particular character. In this case, the outline of the anterior process is admittedly not clear-cut, but we note that an irregularly shaped anterior process is relatively rare among mysticetes, and currently confined to cetotheriids, neobalaenines and eomysticetids [6,30]. One previous study questioned the homology of the L-shaped anterior process in Caperea and cetotheriids, pointing out that the irregular anterior border in Caperea is formed entirely within the process, whereas in H. transatlanticus it arises from an interaction of the anterior process and the pars cochlearis [12]. This distinction is, however, somewhat arbitrary, and in itself variable. Thus, the L-shape is clearly formed entirely within the anterior process in K. mchedlidzei (NMRA 10476/1), B. chongulek (TNU skull 2), H. bramblei (UCMP 82465; Fig 11) and a periotic of Herpetocetus sp. from the Lee Creek Mine, North Carolina, USA (USNM 360765). The condition in the holotypes of H. transatlanticus (USNM 182962) and Miocaperea is less clear, but largely depends on where one draws the line between the pars cochlearis and the anterior process. In our view, it is entirely reasonable to argue that the L-shape, or concavity, in both is also entirely formed by the anterior process. (vii) The shape of the ascending process of the maxilla in adult Caperea and balaenids is difficult to determine because its size is restricted by the anterior telescoping of the supraoccipital (see below). Nevertheless, a clearly parallel-sided ascending process is evident in several neonate (NMNZ MM002262, MM002898) and juvenile Caperea (NMNZ MM002254; Fig 12), as well as in Miocaperea; irrespective of the length of the process, these taxa therefore seem to share this condition with most cetotheriids and balaenopterids [29]. A parallel-sided ascending process of the maxilla also occurs in some individuals of Balaena mysticetus [11] (e.g. LACM 54464), but in other specimens, including neonates (e.g. LACM 54485), it is much more triangular (Fig 12). Other balaenids, including Balaenula astensis (MSNTUP I12555), neonate specimens of Eubalaena spp. (e.g. CNPMAMM 746 and LACM 54763), Morenocetus parvus [29] and, contrary to previous claims [11], Balaenella brachyrhynus (NMB 42001), also have a triangular or rounded ascending process with posteriorly convergent medial and lateral borders, and thus clearly differ from balaenopterids, most cetotheriids and neobalaenines in this regard (Fig 12).

Proposed balaenoid synapomorphies.
In terms of proposed similarities between Caperea and balaenids, one recent study listed 15 morphological features which it argued were shared by both, and concluded that the weight of the available evidence therefore spoke against a relationship of Caperea with cetotheriids [11]. Specifically, these features included: "(1) massive elongation of supraoccipital; (2) supraoccipital covering the parietal and excluding the parietal to be exposed in dorsal view; (3) anterior end of supraoccipital covering the posterior portion of the interorbital region of the frontal; (4) parietal not extending anteriorly to the posteromedial elements of the rostrum; (5) short ascending process of the maxilla that may be squared in some individuals; (6) premaxilla evident laterally to the nasal; (7) lack of parietal exposure at cranial vertex; (8) development of a concave posterior wall of the temporal fossa; (9) zygomatic process of the squamosal strongly shortened; (10) low tympanic cavity; (11) low conical process of the tympanic bulla; (12) dorsoventrally oriented mandibular condyle; (13) presence of a depression or a groove for mylohyoidal muscle on the medial side of the dentary; (14) fusion of cervical vertebrae; and (15) long baleen." [11: 15] Of these 15 purported balaenoid synapomorphies, seven directly or indirectly describe the same feature, namely, the anterior extension of the supraoccipital shield. Thus, as the supraoccipital extends anteriorly (1), it excludes the parietal both from the vertex (7) and from dorsal view (2), and covers the interorbital region of the frontal (3). As it extends forward, the supraoccipital leaves no room for the rostral bones (maxilla, premaxilla and nasal) to project posteriorly on to the vertex, thus precluding overlap of the rostral bones with the parietal (4) and resulting in a short ascending process of the maxilla (5), with the latter, like the nasal, also being unable to extend posterior to the premaxilla (6). All seven of these characters (in particular chars 1-5 and 7) are therefore linked by a simple, reciprocal geometrical relationship: the more the supraoccipital extends anteriorly, the less room there is for the rostral bones to telescope backwards, and vice versa (Fig 13). Functionally, this relationship is presumably constrained by the relatively short intertemporal region of crown mysticetes, which reduces the space available for telescoping; the need to maintain enough of an attachment surface for the semispinalis capitis muscle to support the skull; and the need to position the external nares as far posteriorly and/or dorsally as possible to facilitate breathing. Balaenids, neobalaenines and cetotheriids provide perfect case studies: in the former two, the supraoccipital occupies much of the intertemporal portion of the cranium at the expense of the rostral bones; by contrast, exactly the opposite is true in cetotheriids, where the pronounced posterior elongation of the rostral bones confines the supraoccipital shield to the posteriormost portion of the cranium (Fig 13). Note, however, that in balaenids the ascending process of the maxilla may be genuinely (i.e. ancestrally) short, as judged from the often relatively large exposure of the frontal on the vertex (e.g. in Balaenella brachyrhynus and neonate Eubalaena glacialis, LACM 54763).
This condition is especially obvious in the oldest described balaenid, Morenocetus parvus [31], thus making it the only right whale in which the ascending process of the maxilla can be coded without its relative position being obviously compromised by the supraoccipital. In extant balaenopterids, the anterior telescoping of the supraoccipital and the concurrent posterior shift of the rostral bones are more balanced, resulting in an anteriorly truncated supraoccipital shield that meets the equally truncated, squared ascending processes of the maxillae roughly halfway along the vertex. Overall, the above examples demonstrate that the first seven features cited in support of grouping Caperea with balaenids are interdependent, and thus, for taxa showing pronounced telescoping, should be coded only once to avoid incidental character weighting. In the present analysis, all of these features are therefore subsumed in char. 90, "Anteriormost point of supraoccipital in dorsal view", which we accept as a potential balaenoid synapomorphy. Consider, however, that neobalaenines differ from balaenids in the detailed arrangement of their skull vertex: whereas the frontal has virtually disappeared from view behind the nasals in both Caperea and Miocaperea (Fig 14), it has escaped obliteration by the supraoccipital in balaenids by insertion between the posterior portions of the rostral bones (Fig 12) (pl. 42 in [32]). This difference in vertex architecture suggests that the pronounced telescoping of the supraoccipital and attendant changes in balaenids and neobalaenines are a result of evolutionary convergence. Miocaperea further differs in having posteriorly convergent nasals, accompanied by equally convergent ascending processes of the maxilla and premaxilla (Fig 14). By contrast, the lateral margins of the nasals in balaenids, and indeed Caperea, are nearly parallel (unclear in the holotype of Balaenella brachyrhynus, in which the nasals are missing, contra [33]; Fig 12). This condition is consistent with the common ancestor of Caperea and Miocaperea having posteriorly convergent maxillae (as seen in other cetotheriids), which were later shortened and made less convergent by the anterior telescoping of the supraoccipital. Out of the remaining characters, we concur that a short zygomatic process of the squamosal (9) and long baleen (15) could be seen to unite Caperea and balaenids, assuming that the presence of long baleen is coded in lieu of the presence of an arched rostrum. Nevertheless, the case for a homologous reduction in the size of the zygomatic process is somewhat speculative, given the rather disparate morphologies of neobalaenines and balaenids in this regard (Fig S4 in [4]). In Caperea and, as far as can be told, Miocaperea, the zygomatic process is reduced to the point of obliteration, but seems to be oriented anteriorly judging from the position of the (usually juxtaposed) postorbital process of the frontal. By contrast, the zygomatic process of balaenids is typically much better developed, twisted in anterior view, and oriented anterolaterally. Further evidence is needed to demonstrate that these conditions are plausibly homologous. It is unclear how to interpret the statements that Caperea and balaenids share (8) a "concave posterior wall of the temporal fossa" (p. 15 in [11]), as the direction of the concavity was not specified. Confusingly, the corresponding character (number 76, "Strong concavity in temporal fossa posterior to emergence of supraorbital process of frontal"; p.
10 of the suppl. material of [11]) appears to be coded as "present" (state 1) for Caperea, Zygorhiza kochii, Aetiocetus weltoni and balaenopterids (including grey whales), but not a single balaenid, thus invalidating this character as a potential balaenoid synapomorphy. Likewise, we are unsure about the correct interpretation of the presence of a "low tympanic cavity" (10). We agree that the tympanic bulla of Caperea is dorsoventrally flattened in medial view, but the same cannot necessarily be said for balaenids, in which this compression appears to be limited to the main ridge (Fig 15). Peripolocetus vexillifer is an exception, but even in this case the dorsoventral compression of the bulla appears less marked than in Caperea. Additional data on the bulla morphology of Miocaperea and archaic balaenids, such as Morenocetus and Peripolocetus, are necessary to test whether a flattened bulla might indeed represent a shared feature. We disagree with the claim that the presence of a low conical process of the tympanic bulla is demonstrably homologous in balaenids and Caperea, but not Herpetocetus (11) [11]. All three of these taxa appear to show a comparable degree of reduction and dorsal flattening of the conical process, and thus do not allow an a priori distinction into different character states. A more stringent test of homology might be whether the conical process of fossil neobalaenines is taller than in Caperea and, if so, whether its morphology more closely resembles that of cetotheriids or balaenids. At present, there is no material that could provide such insights. We also disagree that Caperea and balaenids share a dorsally oriented articular condyle of the mandible (12); rather, the condyle in Caperea appears to point posterodorsally (Fig 16), but we note that its mandible is difficult to orient (owing to the pronounced curvature of the mandibular body), and that the in situ position of the condyle needs to be ascertained via dissection. Finally, we agree that the presence of a mylohyoid groove or depression (13) and fusion of the cervical vertebrae (14) characterise both Caperea and balaenids. We also note, however, that the attachment of the mylohyoid is less clearly developed in Caperea, and that an apparent mylohyoid groove also occurs in at least some cetotheriids (e.g. Herpetocetus morrowi [12], and likely also Piscobalaena nana). Likewise, incipient fusion of the cervical vertebrae is present in some specimens of Herpetocetus, with C2 and C3 being incipiently fused in the type specimen of H. morrowi (UCMP 124950) [12], and C2-4 being partially fused in Herpetocetus sp. from Japan (NMNS PV-19540). Both characters could thus potentially also unite Caperea with (certain) cetotheriids. In summary, only three of the 15 characters cited as balaenoid synapomorphies, namely numbers 1, 9 and 15, unequivocally support a neobalaenine-balaenid clade. The remainder either code for the same feature (2-7), are equivocal or do not apply (8, 10 and 12), or are shared by neobalaenines, balaenids and at least some cetotheriids (11, 13 and 14). This relatively weak morphological support for Balaenoidea is outweighed by the morphological evidence supporting a neobalaenine-cetotheriid clade, as well as by the fact that molecular analyses consistently group Caperea with extant balaenopteroids [8,9]. We therefore strongly reaffirm our previous referral of neobalaenines to Cetotheriidae [4], and our identification of Caperea as the last survivor of this once diverse family.
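The character-by-character tally in the preceding paragraph can be summarized with a short bookkeeping sketch; this merely restates the counts as classified in the text:

    # Re-count of the 15 proposed balaenoid synapomorphies, grouped as in
    # the text above.
    verdict = {
        "unequivocally support a balaenoid clade": [1, 9, 15],
        "redundant with char. 1 (supraoccipital telescoping)": [2, 3, 4, 5, 6, 7],
        "equivocal or inapplicable": [8, 10, 12],
        "also present in at least some cetotheriids": [11, 13, 14],
    }
    assert sorted(n for group in verdict.values() for n in group) == list(range(1, 16))
    for label, chars in verdict.items():
        print(f"{len(chars):2d} characters {label}: {chars}")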
2018-04-03T02:26:27.798Z
2016-10-06T00:00:00.000
{ "year": 2016, "sha1": "d357209cb0956202a1112d502f105d99a0b2b192", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0164059&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d357209cb0956202a1112d502f105d99a0b2b192", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
270699796
pes2o/s2orc
v3-fos-license
Classification Criteria for ANCA Associated Vasculitis – Ready for Prime Time?

Purpose of Review
This review aims to summarize the evolution and recent developments in the classification of ANCA associated vasculitis (AAV) and to summarize evaluations of the 2022 ACR/EULAR classification criteria of AAV in several cohorts.

Recent Findings
The classification of AAV has been a field of controversy for some time. The parallel existence of classification criteria and disease definitions produced some overlap in classification, leading to challenges when comparing different cohorts. The 2022 ACR/EULAR classification criteria, derived from the largest study ever conducted in vasculitis, account for significant changes in vasculitis classification with the integration of ANCA and modern imaging. These criteria show good performance compared to previous ones but also raise questions, as ANCA serotypes have a substantial impact on classification. In addition, there are some discrepancies with earlier agreed histopathological features of the AAV disease phenotypes.

Summary
During the last 35 years, several sets of classification criteria have evolved to facilitate epidemiologic studies and clinical trials in AAV. While some of these criteria have been in use for many years, they were criticized due to either not using ANCA or not integrating surrogate markers for vasculitis, but also due to overlapping when used in parallel. The long-awaited new ACR/EULAR criteria for AAV were published in 2022 and are the result of a large international study, introducing for the first time ANCA and modern imaging in the classification of AAV. Though the criteria show good performance, they bring several other challenges with practical application.

Introduction
The systemic vasculitides (SV) comprise several diseases with considerable variation in clinical presentation depending on the type and size of blood vessel involved and the organ system affected. Vasculitides are defined as inflammation and necrosis within blood vessel walls with infiltration of inflammatory cells, resulting in blood vessel destruction and impairment of blood supply to the affected territory. Clinical presentations are heterogeneous, from single-organ involvement to severe multi-systemic, life-threatening disease. The antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) are a subgroup of SV, characterized by vasculitis in small and medium blood vessels. The AAV comprise granulomatosis with polyangiitis (GPA), microscopic polyangiitis (MPA), as well as eosinophilic GPA (EGPA). The three AAV phenotypes are differentiated according to clinicopathological characteristics defined by the CHCC [1]. ANCA are found in most cases of GPA (mainly proteinase 3, PR3) and MPA (mainly myeloperoxidase, MPO) but only in 25%-40% of EGPA (mainly MPO-ANCA). On biopsy, granuloma is a key feature of GPA and EGPA (eosinophil-rich) but not MPA [1]. Acute kidney disease and pauci-immune glomerulonephritis are encountered in all three AAV phenotypes, though they are much more prevalent in MPA. GPA commonly affects the upper and lower respiratory tract, with symptoms from the ENT region as well as primarily nodular lung disease; EGPA can exhibit similar symptoms with a less destructive and more allergic component, but it can also exhibit symptoms such as asthma, cardiomyopathy and polyneuropathy. MPA usually involves the kidney and the lungs; the latter involvement can present as interstitial lung disease even before symptoms of vasculitis appear in other organ systems. Studies have shown
considerable geographic differences in distribution, with MPO-positivity and MPA being more frequent in Asia, and PR3-positivity and GPA more frequent in Western countries [2]. AAV are severe diseases that carry a high risk of mortality if untreated [3]. The prognosis of AAV has greatly improved since the introduction of glucocorticosteroids (GCs) and cytotoxic/immunosuppressive agents into the treatment arsenal in the 1970s [4]. The rarity of AAV poses a challenge to epidemiologic and clinical studies due to the small number of cases and the paucity of large, well-validated cohorts [5]. An important prerequisite when studying these rare and heterogeneous diseases is a common nomenclature of agreed definitions and classification criteria. These are essential to compare results from different regions, cohorts and time periods. Unlike diagnostic criteria, which aim to differentiate vasculitis from other diseases, classification criteria aim to differentiate one type of vasculitis from another. Classification can be based on different parameters. Histological parameters have been widely used, describing the size of affected vessels or the type of inflammatory process (granulomatous, necrotizing, eosinophilic etc.). Other parameters include the underlying mechanism (for example, disease secondary to infectious agents, or the formation of antibodies or immune complexes), organ tropism, localized versus systemic involvement, and classification according to clinical presentation.

Since the earliest proposed classification of SV in the middle of the last century, several key efforts have been made in the field of classification and definition, thereby greatly advancing research on vasculitis through clinical trials and epidemiological studies. During the last 35 years, the American College of Rheumatology (ACR) classification criteria and the Chapel Hill Consensus Conference (CHCC) definitions have dominated the field of epidemiologic studies of SV and facilitated these kinds of studies. Since their introduction, advancements in immunology and imaging have revolutionized our understanding of the pathogenesis and epidemiology of SV. Further, the availability of better treatment options, both in induction of remission and in maintenance, has transformed the traditional perception of SV from a fatal disease to one characterized by relapses and remissions, allowing more individuals to live with these conditions. These developments have prompted a re-evaluation of the necessity for new criteria, considering the recent advances in our understanding of SV.

Beginning in 2010, a large international collaborative study to develop new classification and diagnostic criteria (DCVAS) has been conducted [6]. This study has yielded classification criteria for the AAV, Takayasu arteritis (TAK) and giant cell arteritis (GCA).

In this review, we aim to trace the evolution of the classification of AAV from its inception, beginning with a brief historical overview, through the development of the most widely utilized criteria over the past 30 years, and finally to discuss our own evaluation, as well as the results of other researchers, when using the new ACR/EULAR (European Alliance of Associations for Rheumatology) criteria.
Epidemiology

Epidemiologic studies have demonstrated variable incidence and prevalence of AAV across different geographic areas of the world [7]. Reasons for these variations are probably related to differences in the methodology used for case retrieval, case definition and classification, as well as to differences in geographic and genetic background and in the time periods studied. Several earlier studies have indicated rising incidences of the AAV during the last 30 years [8][9][10]. Most recently, a 23-year incidence study on the epidemiology of AAV by our group found an overall incidence of 30.1/million, with comparable incidences of GPA (15.4/million) and MPA (12.8/million), whereas the incidence of EGPA (1.8/million) is considerably lower [11]. The rising incidence reported by previous studies could not be verified in our area of southern Sweden in our large population-based cohort, which covered cases from 23 years and used the same retrieval method and classification criteria over time. We believe that the apparent increase in the incidence of AAV reported by previous studies was mainly impacted by the introduction of ANCA testing in the mid-1980s [12], the ACR 1990 classification criteria in 1990 [13•], as well as the CHCC definitions in 1994 [14•].

Historical Overview of Classification Criteria

Early descriptions of symptoms compatible with vasculitis can be found in ancient medical literature, for example Hippocrates' description of a case with oral and genital ulcers and ocular inflammation [15], symptoms today included in the current criteria for Behçet's disease [16]. The description by Kussmaul and Maier [17] of a patient with fever, weakness, weight loss and pain, in whom autopsy later showed nodular thickening of medium-sized arteries, is often cited as the first modern description of vasculitis. In this description from 1866, the term periarteritis nodosa was first introduced, a term, along with its alternative polyarteritis nodosa (PAN), used synonymously for all vasculitides for many years. Different vasculitis syndromes were often described in relation to their similarity to, or divergence from, PAN.
Klinger described a rhinogenic granulomatosis [18], later known as Wegener's granulomatosis [19], and Churg and Strauss described a syndrome including vasculitis, asthma and eosinophilia, later bearing their names [20]. In the twentieth century, efforts to better define and classify vasculitis were first made by the pathologist Pearl Zeek, who combined a literature review with her own observations to propose five different forms of necrotizing vasculitis in 1953. Zeek introduced the concept of vessel size, later refined by Giliam and Smiley [21]; however, Wegener's granulomatosis and Takayasu arteritis were not included in Zeek's classification, presumably because they had not yet been described in the English-language literature at that time. In 1990 the ACR developed a set of classification criteria for seven forms of systemic vasculitis. The ACR 1990 criteria did not include classification criteria for MPA, nor did they use ANCA serology. To address these and further questions, a consensus meeting was held in Chapel Hill, USA, in 1992. The results, definitions of the different SV published in 1994 [14•] and revised in 2012 [1], were widely used in epidemiologic studies. As the parallel use of classification criteria and definitions produced overlap and sometimes confusion, a group of vasculitis experts introduced an algorithm incorporating both systems, the European Medicines Agency (EMA) algorithm, in 2007. In the last 2 years, classification criteria endorsed by ACR and EULAR have been published for the AAV as well as for TAK and GCA; these criteria are the result of the largest study ever conducted in vasculitis, the DCVAS [6].

Classification According to ANCA Serotype

ANCA were first observed by Davies et al. in 1982: patients with segmental glomerulonephritis exhibited a factor in their sera that stained the cytoplasm of neutrophil granulocytes [22]. A few years later, van der Woude et al. [12] described a cytoplasmic immunofluorescence pattern of antibodies directed against components of granulocytes in patients with active GPA (then designated ACPA, anticytoplasmic antibodies), suggesting an association between these antibodies and vasculitis. During the following years, PR3 [23] and MPO [24] were identified as the target antigens for the cytoplasmic and perinuclear patterns, respectively. ANCA are routinely used in the clinical context today. Owing to the variability of indirect immunofluorescence (IIF) and the good performance and rapid evolution of antigen-specific immunoassays, the latest consensus now recommends the use of ELISA as the first-line test for ANCA [25]. There is an ongoing discussion among researchers in the field regarding the use of ANCA serotypes as a classification system for these diseases [26][27][28]. The discovery of distinct genetic subsets of GPA and MPA that exhibit a stronger association with serotype than with phenotype [29], a finding that could also be made for EGPA when stratifying according to MPO status [30], supports a shift towards serotype-based classification. As we try to demonstrate below, it can be argued that this shift is clearly reflected in the considerable weight granted to ANCA in the new ACR/EULAR criteria.
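The serotype-based view sketched above can be caricatured as a simple labelling rule; the following is purely illustrative and not a validated clinical algorithm (double positivity and ANCA-negative disease are handled crudely):

    def classify_by_serotype(pr3_positive: bool, mpo_positive: bool) -> str:
        # Illustrative sketch only; real serotype-based proposals handle
        # double positivity and ANCA-negative disease in far more detail.
        if pr3_positive and mpo_positive:
            return "double-positive AAV (rare; interpret with caution)"
        if pr3_positive:
            return "PR3-ANCA-associated vasculitis"
        if mpo_positive:
            return "MPO-ANCA-associated vasculitis"
        return "ANCA-negative AAV"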
American College of Rheumatology Criteria 1990 (ACR 1990)

The ACR criteria from 1990 for SV were the first to combine different parameters into one set of classification criteria. The criteria were developed from a dataset of 1020 cases of vasculitis from North America (USA, Canada, and Mexico). Through identification of features typical of different types of vasculitis, criteria for seven forms of systemic vasculitis (GPA, EGPA, GCA, TAK, IgA vasculitis, PAN and hypersensitivity vasculitis) were proposed [13•]. The criteria were widely used and facilitated clinical and epidemiological research in the field of vasculitis; with time, however, several shortcomings became obvious: i) the criteria lacked ANCA, although assays became widely available in the following years; ii) MPA was not included as a separate disease entity; iii) the underlying dataset included only cases from North America, not accounting for geographical differences in phenotype distribution, and certain subtypes were over-represented relative to others in the dataset; and iv) cases could be classified to different subtypes at the same time. Though not intended as such, the criteria were also used for diagnosis of individual cases in routine clinical practice, and they performed poorly when evaluated as diagnostic criteria [31]. The criteria nevertheless had considerable impact on epidemiologic research in vasculitis.

Chapel Hill Consensus Conference (CHCC)

In 1992 a meeting of a group of experts in Chapel Hill yielded a consensus document [14•] defining different types of vasculitis and establishing a nomenclature. The experts emphasized that the goal of the CHCC was not to develop classification or diagnostic criteria, but rather to give definitions and establish a common nomenclature. Ten different types of vasculitides were defined according to histological and clinical criteria and grouped by predominantly affected vessel size. The system accounted for the presence of ANCAs. The meeting further established MPA as a disease entity separate from GPA and EGPA, as well as the association of ANCA with these small vessel vasculitides. As cases earlier attributed to PAN now became MPA, the former became a very rare disease. In an update in 2012 [1], the group of small vessel vasculitides was further divided into the immune complex-related and the pauci-immune ANCA-associated vasculitides. Regarding the AAV, the consensus suggested using a prefix indicating ANCA reactivity; the concept of a surrogate criterion (a cavitary lung lesion on imaging can serve as a surrogate for granulomatous disease when histologic examination is not available) is discussed in the publication (as it was in 1994), but without detail or specific guidance. In 2000 a study by Sorensen et al.
tried using the CHCC definitions with surrogate markers as diagnostic classification criteria, but this was not successful [32].

European Medicines Agency (EMA) Algorithm 2007

The two widely used systems available at the time, the ACR 1990 criteria and the CHCC 1994 definitions, performed differently when applied in parallel to the same cohort [33], leading to overlap, as patients could be classified into different disease phenotypes or remain unclassified by one of these systems. To avoid this overlap and to reach a consensus on how the ACR 1990 classification and the CHCC 1994 definitions should be used in epidemiologic studies, a group of experts met in London in 2004 and agreed on a stepwise algorithm [34•]. The algorithm, widely known as the Watts or EMA algorithm, integrated ACR 1990 and CHCC in a hierarchical way. The algorithm starts with ACR EGPA [35], which had the highest sensitivity and specificity of the ACR 1990 criteria; the subsequent steps then apply ACR 1990 GPA [36] and CHCC 1994 MPA. The algorithm also introduced the combined use of ANCA and surrogate markers of vasculitis and/or granuloma. The EMA algorithm has been used in several epidemiological studies over the last 15 years [8, 10, 11] and showed a minimum of overlap and few unclassified cases.

The Diagnostic and Classification Criteria in Vasculitis (DCVAS)

The evolution of imaging, the widespread use of ANCA testing and the improved knowledge and understanding of the pathophysiology of vasculitis initiated a discussion on revision of the classification of the primary vasculitides. The multinational observational DCVAS study was announced in 2010 [6] with the goal of developing and validating diagnostic criteria and of improving and validating classification criteria. The project accumulated an impressive cohort, invested great effort and applied advanced statistics to develop entirely new criteria intended to be used independently of the prior ones. However, succeeding in the endeavour to also develop diagnostic criteria seems highly unlikely at this point [37]. The DCVAS final cohort included 6991 patients from 136 sites in 32 countries (Europe 59%, North America 21%, other regions 20%). Included were patients ≥ 18 years with a diagnosis within 2 years of GPA, EGPA, MPA, GCA, anti-glomerular basement membrane disease (anti-GBM), PACNS, IgA vasculitis, aortitis, other large vessel vasculitis, PAN (within 5 years) or TAK (within 5 years), or with a condition mimicking vasculitis secondary to tumour, infection, or another inflammatory condition, as diagnosed by the submitting physician. First, a final set of high-standard cases for each subtype was identified and included in the study; next, a set of candidate items was identified and further refined by expert opinion and a data-driven approach (exclusion of items with low prevalence and/or without clinical relevance). A regression model then yielded the final criteria, which were subsequently validated in a separate validation dataset. The comparators used in the derivation and validation of the criteria for each subtype were the other AAV or, to a minor extent, other small or medium vessel vasculitides, but not large vessel vasculitides with their generally very different phenotype. All these efforts culminated in the introduction of new classification criteria for the AAV [38••, 39••, 40••] in 2022 as well as for GCA [41] and TAK [42].
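For illustration, the stepwise logic of the EMA algorithm can be rendered in a few lines of code. The following Python sketch is only a schematic of the hierarchical order described above, not the published algorithm: all predicates are placeholder booleans standing in for the detailed ACR/CHCC item definitions and surrogate-marker rules.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """Minimal feature set for the sketch; the real EMA input is far richer."""
    meets_acr_egpa: bool            # ACR 1990 EGPA criteria satisfied
    meets_acr_gpa: bool             # ACR 1990 GPA criteria satisfied
    gpa_surrogates_with_anca: bool  # GPA surrogate markers plus ANCA positivity
    meets_chcc_mpa: bool            # CHCC 1994 MPA definition satisfied
    mpa_surrogates_with_anca: bool  # MPA surrogate markers plus ANCA positivity
    meets_chcc_pan: bool            # histology compatible with classic PAN

def ema_classify(c: Case) -> str:
    # Hierarchical order: once a step matches, later steps are not tested,
    # which is what keeps overlap and double classification to a minimum.
    if c.meets_acr_egpa:
        return "EGPA"
    if c.meets_acr_gpa or c.gpa_surrogates_with_anca:
        return "GPA"
    if c.meets_chcc_mpa or c.mpa_surrogates_with_anca:
        return "MPA"
    if c.meets_chcc_pan:
        return "PAN"
    return "unclassifiable"

print(ema_classify(Case(False, True, False, False, False, False)))  # "GPA"
```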
The 2022 ACR/EULAR Criteria for AAV

The new 2022 ACR/EULAR (hereafter ACR/EULAR) criteria were derived from the largest cohort ever established in systemic vasculitis. The cohort included mainly patients from Western countries; nevertheless, the recruited cases were much more international than those used for earlier criteria such as the ACR 1990. Unlike with the ACR 1990 criteria, the diagnosis of the submitting physician was reviewed by an expert panel, thereby minimizing investigator bias. The criteria account for the evolution of diagnostics and clinical practice by including ANCA and modern radiology. Another innovation is the weighting of items to reflect the clinical importance of individual features.

The final criteria include a different number of weighted items for each subtype, with threshold scores that must be reached for classification. Some items have a negative impact on the score, for example nasal symptoms for MPA, or the presence of eosinophilia for GPA and MPA.

The following is a short summary of the main items of the respective criteria, described in detail in Table 1:

GPA
The GPA dataset consisted of 1537 cases (724 GPA and 813 comparators), 20% of which were used in the validation set. The final criteria consist of 10 weighted items with a threshold score of 5 needed for classification. The authors report a sensitivity of 92.5% and a specificity of 93.8% in the validation dataset [39••].

Evaluation of the ACR/EULAR Criteria

An important question with new classification criteria or disease definitions is whether they improve on the shortcomings of earlier criteria and whether they exhibit better sensitivity and specificity. The ACR 1990 criteria did not include MPA; MPA, as well as ANCA, was added with the CHCC. The lack of guidance concerning ANCA as a surrogate marker was improved upon by the EMA algorithm. ACR/EULAR introduces weighted items with positive and negative scores. In addition, ANCAs are introduced, with considerable weight. Furthermore, new items such as interstitial lung disease (ILD) are introduced. ILD has been observed especially in patients with MPO-associated disease [43], often before the onset of vasculitis [44], and has therefore become more recognized in AAV.
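To illustrate the structure of such weighted criteria, the sketch below scores a case against a hypothetical item list. The items, weights and threshold are invented placeholders that only mimic the published format (weighted positive and negative items, classification once a threshold score is reached); they are not the actual 2022 ACR/EULAR item set.

```python
# HYPOTHETICAL items and weights mimicking the weighted-score format of the
# 2022 ACR/EULAR criteria; not the published item list.
GPA_ITEMS = {
    "pr3_anca_positive": 5,
    "nasal_involvement": 3,
    "pulmonary_nodules_or_cavities": 2,
    "granuloma_on_biopsy": 2,
    "eosinophilia": -4,   # negative weight: argues against this subtype
}
GPA_THRESHOLD = 5

def gpa_score(features: set[str]) -> tuple[int, bool]:
    """Sum the weights of present items; classify when the threshold is met."""
    score = sum(w for item, w in GPA_ITEMS.items() if item in features)
    return score, score >= GPA_THRESHOLD

print(gpa_score({"pr3_anca_positive", "eosinophilia"}))       # (1, False)
print(gpa_score({"pr3_anca_positive", "nasal_involvement"}))  # (8, True)
```

The first example shows how a strongly weighted serotype item can still be "outscored" by a negative item, and the second how serology alone plus one clinical feature can suffice, which is the dynamic discussed in the evaluations below.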
As of January 2024, the new criteria have been evaluated in studies from South Korea, Turkey, the Netherlands, Japan, Portugal, India, Ukraine, and Sweden (Table 2). Some reports are available only as conference abstracts. Below is a summary of the main findings of some of these evaluations:

South Korea
Three separate studies for the AAV subtypes from Korea have investigated reclassification of AAV cohorts into disease phenotypes using the ACR/EULAR criteria. For EGPA [45] the concordance rate with EMA was 86.3%; 7 former EGPA cases met criteria for two subtypes with the new criteria, and 3 cases could not be classified. The authors consider the exclusion of the fixed pulmonary infiltrate item, present in ACR 1990, to be the reason why these EGPA cases could not be classified the same way with ACR/EULAR. Concerning GPA, only 74% of cases remained GPA with ACR/EULAR [46]; most were reclassified as MPA, predominantly due to MPO-ANCA positivity. The authors also encountered cases with granuloma on biopsy now being classified as MPA, and suggest score modification of the granuloma item and of other clear GPA surrogates in the new criteria. In a further study on MPA cases [47] classified using the EMA algorithm, a high rate of agreement with the ACR/EULAR criteria was observed (96.6%). A small number of patients were classified as both GPA and MPA due to positivity for both antibodies. Almost 50% of the MPA cohort exhibited ILD, a feature now carrying considerable weight towards MPA classification.

Netherlands
Hospital records of 337 patients with AAV and a confirmed clinical diagnosis from a tertiary centre were used in this evaluation [48]. After exclusion of a relatively high number of cases due to unclear diagnosis or insufficient records, 264 patients (GPA = 183, MPA = 54, EGPA = 27) were reclassified with ACR/EULAR. Information on prior classification with established criteria is not provided. The study reports sensitivities of 88% for GPA, 94% for MPA and 74% for EGPA; 17 patients could not be classified by ACR/EULAR. ANCA serology is provided for 261 cases, with 59% PR3-positive, 31% MPO-positive and 9% ANCA-negative. The researchers noted a large impact of serology on reclassification to GPA and MPA, reflected in the finding that 94% of GPA patients were PR3-positive and 100% of the MPA patients MPO-positive in their results. The study therefore considers the new criteria to be a step closer to a serology-based classification.

Turkey
A Turkish study [49] with 164 patients with AAV (clinical diagnosis: 77% GPA, 15% MPA, 8% EGPA) diagnosed between 2016 and 2022, recruited from two academic centres and reclassified by ACR/EULAR, describes kappa values for agreement with the EMA classification of 0.79 for GPA, 0.7 for MPA and 0.82 for EGPA. There were two double-classified cases (MPA + GPA), and four patients earlier classified as GPA, all with granuloma on biopsy, were now assigned to MPA due to MPO positivity. Only 7% of cases could not be classified (2 with granuloma on biopsy).
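The kappa values quoted in these evaluations measure chance-corrected agreement between two classification systems applied to the same cases. A minimal sketch of the computation, using invented toy labels rather than any study's data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the two systems assigned labels independently.
    pa, pb = Counter(labels_a), Counter(labels_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / n**2
    return (observed - expected) / (1 - expected)

# Toy per-case assignments; real studies compare e.g. EMA vs ACR/EULAR.
ema = ["GPA", "GPA", "MPA", "GPA", "EGPA", "MPA", "GPA", "MPA"]
acr = ["GPA", "MPA", "MPA", "GPA", "EGPA", "MPA", "GPA", "GPA"]
print(round(cohens_kappa(ema, acr), 2))  # ~0.58 for these toy labels
```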
Japan
Two nationwide inception cohorts were used in this evaluation study [50]. All patients were classified using the EMA algorithm and were then reassessed using the ACR/EULAR 2022 criteria. A total of 477 patients were included in the study. The majority of cases were MPO-ANCA positive (85%), 9% were PR3-ANCA positive and 6% were ANCA-negative. With both classifications, MPA was the dominant phenotype (EMA: 57%, ACR/EULAR: 76%). Details on the composition of the cohort are given in Table 2. A large proportion of patients that could not be classified by EMA could be classified by ACR/EULAR; all of these patients were MPO-positive and a majority exhibited ILD, thus most were assigned to MPA. The new criteria might therefore be more useful in Asian cohorts with dominant MPO-positivity and a high degree of ILD. On the other hand, the authors also report difficulties in classifying ILD in MPO-positive cases; ILD has been shown to occur in EGPA and GPA in a Japanese cohort study, but most such cases will be classified as MPA with ACR/EULAR.

Sweden
Our evaluation of the new criteria employed a population-based cohort of 374 (47% female) validated cases with AAV [54], previously classified by the EMA algorithm. We compared the classification outcome of ACR/EULAR with i) EMA and ii) a strictly ANCA serology-based classification without any impact from clinical or histologic features. In general, the new ACR/EULAR criteria showed good agreement with EMA: 96% for EGPA, 85% for GPA and 75% for MPA. We observed a low number of cases that could not be classified (3.5%) or were classified to two categories (1.1%). Fourteen percent of cases (69% MPA, 31% GPA) were assigned a different diagnosis with the new criteria due to ANCA specificity. We further observed 4 cases exhibiting granuloma on biopsy but assigned to MPA. Regarding the ANCA serology-based classification, very high agreement was observed, with 98.9% for GPA and 84% for MPA; upon excluding cases with EGPA (n = 23), these numbers increased to 99.5% and 88.2%, respectively (Fig. 1). There were 13 unclassifiable cases, 6 of which were ANCA-negative, corresponding to a third of all ANCA-negatives in the cohort. This might indicate difficulties in classifying ANCA-negative cases with ACR/EULAR. ANCAs are less frequent in EGPA; the predominant ANCA serotype in EGPA, MPO, which would presumably favour classification to EGPA in certain cases, is not even included in the EGPA category, while the other serotype is included with a weight of minus three points, thus making classification to EGPA less likely. Of the 23 EGPA (EMA) cases in the Swedish cohort, eight are MPO-positive, which does not impact classification at all, and there are no PR3-positive patients; but even assuming all the EGPA patients were PR3-positive, all would still qualify for classification as EGPA, as they all generate a sufficient score with the other items. Of course, some of these patients would then also be classified as GPA, so many would shift to the double-classified group. This means that the inclusion of ANCA in the EGPA category does not change the classification towards EGPA in our patients at all.

Challenges in Using the New ACR/EULAR 2022 Criteria

Based on available evaluations, there are some challenges when applying the new 2022 ACR/EULAR classification criteria for AAV; these can be summarized as:

1. Overlapping classification: cases are classified to more than one phenotype.

2. Unclassifiable cases are described in all studies.

3. Disagreement with the current understanding of AAV histopathology: the CHCC defined granuloma as a key feature of GPA, but cases with granuloma on biopsy are classified as MPA in several of the cited studies. Granuloma generates a low score towards GPA classification in the GPA criteria, and it is not included in the MPA criteria at all; thus granuloma can now be a feature of MPA when MPO positivity or other items pointing towards MPA are present. In this case a serotype feature (MPO positivity) clearly outweighs a phenotypic feature, as the weight of granuloma is low in the GPA category and non-existent in the MPA category.

4.
Interstitial lung disease (ILD) is poorly defined in the ACR/EULAR criteria; it is not clear when the ILD item is met. Nevertheless, given the results from the Japanese study, the addition of ILD is important, as more patients can be classified in populations where ILD as a clinical manifestation of AAV is common.

5. Challenges related to ANCA:

a. ANCA-negative cases: ANCA-negative cases could potentially become more difficult to classify, which our own findings support; however, more studies are needed in this subgroup for clarification.

b. ANCA weight and test method: the inclusion of immunofluorescence in the criteria can, in our eyes, be problematic, as positive IIF is observed in several other diseases; we consider ELISA-based tests to be more specific.

c. A considerable number of patients are classified to another phenotype. This applies primarily to MPO-positive patients, who will often be assigned MPA primarily due to MPO positivity. Features typical for non-MPA phenotypes might, due to the weight of ANCA, be "outscored", and such cases classified as MPA.

Conclusion

There have been efforts to classify the vasculitides for many decades. Considerable progress has been made over time, facilitating epidemiological and clinical research in the field. Classification criteria from different time periods reflect the knowledge of a given disease at that time, and the evolution of diagnostic possibilities needs to be reflected in newer criteria. The ACR 1990 criteria were a milestone but lacked ANCA, as they were developed before its widespread introduction. Later efforts such as CHCC and EMA tried to improve on these shortcomings but raised new questions and challenges. New criteria may thus remedy the deficits of older ones by incorporating new knowledge and diagnostic possibilities, while at the same time new questions arise. This pattern seems to continue with the 2022 ACR/EULAR criteria: on the one hand, the criteria incorporate ANCA and modern imaging, they were developed with advanced statistical methods and the largest cohort ever established in AAV, they are easy to use with an innovative scoring system, and they show good performance compared with the prior systems. On the other hand, unclassifiable cases remain, and questions such as the disagreement with earlier definitions (granuloma) arise. The criteria grant considerable weight to ANCA, indicating a clear shift towards serotype classification. As we could show, the differences compared with a pure serotype classification (in GPA and MPA) are quite low. The next step could be a shift to serotype classification, as others have argued [55]. Recent discoveries, including big-data cluster analyses, can however be interpreted as an argument not to abandon phenotypic characteristics entirely.
2024-06-25T06:17:01.719Z
2024-06-24T00:00:00.000
{ "year": 2024, "sha1": "e71b234f1f1e6de925459ebf261cc44375e5f711", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11926-024-01154-9.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "f82ea2196a616a336b1d0b7d6000397501f7f1db", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
4419736
pes2o/s2orc
v3-fos-license
Calorie restriction regulates circadian clock gene expression through BMAL1 dependent and independent mechanisms

Feeding behavior, metabolism and circadian clocks are interlinked. Calorie restriction (CR) is a feeding paradigm known to extend longevity. We found that CR significantly affected the rhythms in the expression of circadian clock genes in mice at the mRNA and protein levels, suggesting that CR reprograms the clocks both transcriptionally and post-transcriptionally. The effect of CR on gene expression was distinct from the effects of time-restricted feeding or fasting. Furthermore, CR affected the circadian output through up- or down-regulation of the expression of several clock-controlled transcription factors and longevity candidate genes. CR-dependent effects on the expression of some clock genes were impaired in the liver of mice deficient for BMAL1, suggesting the importance of this transcription factor for the transcriptional reprogramming of the clock; however, BMAL1-independent mechanisms also exist. We propose that CR recruits biological clocks as a natural mechanism of metabolic optimization under conditions of limited energy resources.

The clock and clock-controlled transcription factors generate the transcriptional output of the circadian clock in metabolism and cellular biochemical processes 20. In this study we assayed the effects of 30% CR on the expression of circadian clock and clock-controlled genes at the mRNA and protein levels in the liver. We found that CR significantly affected circadian clockworks in a manner distinct from time-restricted feeding and fasting. We also analyzed the effects of 30% CR on clock gene expression in the liver of mice deficient for the clock transcription factor BMAL1. Finally, circadian analysis of longevity candidate genes reported to be regulated by CR showed that the effects were gene- and time-of-day-dependent.

Results

Effect of CR on core clock gene expression. 30% CR is one of the most commonly used CR regimens for C57B6 mice 21,22; therefore, we selected this feeding paradigm for the gene expression analysis. To assay the effects of CR on the molecular circadian clockworks, we compared the expression of the core circadian genes across the day in the liver of mice subjected to different feeding paradigms. Ad libitum (AL) fed mice had unlimited access to food throughout the day. CR mice received 30% less than their daily AL food intake for two months. In this group food was provided two hours after the light was turned off (ZT14); indeed, mice are nocturnal animals, so nighttime feeding is most physiologically relevant. In the third, time-restricted (TR) feeding group, animals were provided with 100% of their average daily AL food intake at ZT14 (the same time as the CR group) for two weeks; this time period is sufficient to reset circadian clocks in peripheral tissues 23,24. We noticed that animals in both the CR and TR groups consumed all the food within the first 3-5 hours, in agreement with our previous report on TR feeding 25; therefore, the TR group represents an appropriate control for the CR group: the animals consume the food at the same time as the CR group, but there is no reduction in the amount of consumed calories. The fourth group of animals was also on the TR regimen for two weeks, but these animals did not receive food on the day of tissue collection.
Thus, this fasting (F) group experienced food deprivation for 24 to 48 hours and represents another control for the CR group: a control for a sharp reduction in calorie intake without the metabolic adaptation that occurs under CR. As expected, the expression of core clock genes exhibited an oscillatory pattern in the liver of AL mice (Fig. 1), in agreement with previously reported data. We observed that 30% CR dramatically affected the expression of circadian clock genes. According to the two-way ANOVA analysis, mRNA expression of Bmal1, Per1, Per2 and Cry2 (Fig. 1) and of Rev-Erbβ, Per3 and Rorγ (Supplementary Fig. S1) was significantly affected by CR. Cosinor wave analysis of circadian rhythms in the expression of the circadian clock genes revealed that most feeding regimens did not significantly affect rhythmicity or acrophase for most of the genes; however, CR and fasting disrupted circadian rhythms in the expression of Per1, and CR induced circadian rhythms in the expression of the Clock gene.

[Fig. 1: mRNA expression of core clock genes, including (e) Bmal1 and (f) Clock, assayed in the liver of mice (n = 3 per time point) subjected to the following feeding regimens: ad libitum (AL), blue circles; 30% calorie restriction (CR), red squares; time-restricted feeding (TR), orange triangles; fasting (F), green crosses. For convenience all data are double plotted. Data represent mean ± SD; statistically significant (p < 0.05) effects of feeding (two-way ANOVA) at a given time point are indicated by: (a) AL vs CR, (b) AL vs TR, (c) AL vs fasting, (d) CR vs TR, (e) CR vs fasting, (f) TR vs fasting. Light and dark bars at the bottom represent the light and dark phases of the day; ZT0 is the time when lights turn on and ZT12 when lights turn off. Food for the CR and TR groups was provided at ZT14.]

CR did not affect Cry1, Rev-Erbα and Rorα mRNA expression (Fig. 1 and Supplementary Fig. S1), and it even statistically significantly reduced the expression of the Clock gene. TR had an effect similar to that of CR on the expression of the Rev-Erbβ and Clock genes. Thus, for the Per1, Per2, Per3, Bmal1 and Cry2 genes the effect on expression was CR-specific and different from the effects of TR and fasting.

Effect of CR on clock-controlled gene expression. Several clock-controlled transcription factors, such as Hlf, Tef, Dbp, E4bp4, Dec1 and Dec2, have been shown to be transcriptional targets of the CLOCK/BMAL1 complex and have been reported to oscillate robustly with different phases in the liver 26,27. Previous reports have shown the feeding/fasting cycle to have variable effects on the mRNA expression of some of these transcription factors 28,29. We assayed the mRNA expression of these clock-controlled genes in the liver of 30% CR, TR, fasting and AL mice by qPCR. CR significantly induced the expression of Hlf and Tef (Fig. 2a, Supplementary Fig. S1f) at several time points; TR also affected the expression of these genes. CR up-regulated the expression of the D-box transcription factor Dbp at ZT6 and ZT10, whereas fasting reduced Dbp expression (Fig. 2b), in agreement with an earlier report on fasted animals 30. E4bp4 mRNA levels were not affected by CR or TR, but were up-regulated by fasting at late-night time points compared with AL (Fig. 2c).
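The cosinor analysis used above fits a cosine of fixed 24-hour period to the time series. A minimal single-component sketch, using ordinary least squares on synthetic data (this is a generic illustration of the method, not the study's actual analysis pipeline):

```python
import numpy as np

def cosinor_fit(zt, y, period=24.0):
    """Single-component cosinor: y ~ M + A*cos(2*pi*t/period - phi).

    Linearized as M + beta*cos(w*t) + gamma*sin(w*t) and solved by
    ordinary least squares; returns mesor, amplitude and acrophase (hours).
    """
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(zt), np.cos(w * zt), np.sin(w * zt)])
    (mesor, beta, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(gamma, beta) * period / (2 * np.pi)
    return mesor, amplitude, acrophase % period

# Six sampling times, four hours apart, as in the tissue collections.
rng = np.random.default_rng(0)
zt = np.array([2, 6, 10, 14, 18, 22], dtype=float)
y = 1.0 + 0.8 * np.cos(2 * np.pi * (zt - 14) / 24) + rng.normal(0, 0.05, 6)
print(cosinor_fit(zt, y))  # mesor ~1.0, amplitude ~0.8, acrophase ~ZT14
```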
There was a statistically significant reduction in Dec1 expression at ZT22 under CR; Dec1 was increased in the TR group and, as previously demonstrated, down-regulated at several time points during the day in the fasting group (Fig. 2d). Dec2 mRNA expression was significantly down-regulated in CR animals and was up-regulated in TR and fasting at several time points (Fig. 2f). Variable results have been reported for Pparα mRNA expression upon CR 31,32. We found that the effect of 30% CR on Pparα mRNA expression was not statistically significant (Fig. 2e). These results indicate that CR affects the expression of clock-controlled transcription factors; the effect of CR on Dbp, Dec1 and Dec2 is statistically significantly different from that of TR or fasting.

CR leads to reduced levels of CRY1 protein. Next we assayed the protein levels of the core clock genes (BMAL1, CLOCK, PER1, PER2 and CRY1) in the liver of mice from the different feeding groups (30% CR, TR, fasting and AL). Rhythmic oscillations at the protein level in our AL group were similar to those demonstrated in previous reports 33,34. BMAL1, PER1 and PER2 showed circadian rhythms, while daily changes in CLOCK and CRY1 were not circadian according to the cosinor wave analysis 33,34. Based on the two-way ANOVA, CR significantly reduced the protein level of the core clock protein CRY1, while CLOCK, BMAL1, PER1 and PER2 were down-regulated but the effect was not statistically significant, as shown in Fig. 3. CLOCK protein levels were down-regulated at one time point upon fasting (Fig. 3 and Supplementary Fig. S2). PER1 and PER2 showed the expected temporal changes in AL and TR, with a peak around ZT14, whereas under fasting maximum abundance was around ZT22-2 for PER2 and around ZT2 for PER1. The CRY1 protein level was dramatically reduced in both the CR and fasting groups compared with AL and TR (Fig. 3 and Supplementary Fig. S2) at all tested time points. Thus, both fasting and CR led to significant down-regulation of CRY1 at the protein level at every time point, while the effect of fasting on the expression of CLOCK was time-of-day dependent. The effect of TR on the expression of all tested clock genes was not statistically different from AL.

BMAL1 is involved in CR effects on clock gene expression. Because CR had a significant effect on clock gene mRNA expression and protein levels, our next question was whether the effect of CR acts through the regulation of BMAL1 transcriptional activity. We analyzed the mRNA expression of clock genes in whole-body Bmal1−/− mice subjected to 30% CR for 2 months. In line with previous reports, our results demonstrated that the expression of Per1, Per2 and Cry1 was arrhythmic in AL-fed Bmal1−/− mice (Fig. 4 and cosinor analysis). According to the three-way ANOVA, the effect of CR was statistically significant for all three tested genes in wild type mice, but in Bmal1−/− mice the effect of CR was significant only for Cry1. Thus, BMAL1 is required for the effect of CR on Per1 and Per2 but not on Cry1 mRNA expression.

Effects of feeding regimens on the expression of longevity candidate genes. Effects of CR on the expression of Per1 and Per2 were previously reported: Per1 and Per2 were identified, using multiple microarray data analyses, among the group of genes with differential expression in multiple tissues, such as liver, brain and skeletal muscle, of mice subjected to lifespan-extending diets such as CR 35.
These genes were proposed as longevity-associated candidate genes, and for some of them differential expression was later confirmed in tissues of long-lived dwarf mice 35. However, one important caveat in these studies is that the time of tissue collection was not defined, making the comparison between samples complicated. Indeed, microarray analysis of expression upon CR was performed in multiple groups independently, and for every selected gene, including Per1 and Per2, the effect of CR on expression was not confirmed in every study 35,36. We hypothesized that the observed discrepancy in results between different studies is due to different times of sample collection, and that the expression of longevity candidate genes might change significantly across the day. Indeed, the effects of CR on Per1 and Per2 expression in our study were significant at some times of the day and not at others (Fig. 1). We selected ten longevity candidate genes: the expression of Fmo3, Cyp4a14, Parp16 and Igfals was reported to be up-regulated upon CR, while the expression of Cyp4a12b, Mup4, Hes6, Hsd3β5, Serpina12 and Alas2 was reported to be down-regulated upon CR. We assayed the mRNA expression of these genes at different times of the day in the liver of mice maintained on the different diets. Interestingly, some of the genes tested, such as Hsd3β5 and Serpina12, exhibited rhythmic expression across the day. We did not find a statistically significant effect of CR on the expression of Igfals and Serpina12. The expression of 8 candidate genes was statistically significantly up- or down-regulated upon 30% CR (Fig. 5 and Supplementary Fig. S3) at least at some time points, in agreement with published microarray data; however, only for a subset of the longevity candidate genes was the effect CR-specific. Expression of flavin monooxygenase (Fmo3) (Fig. 5a) was significantly up-regulated, with high-amplitude rhythms, upon CR, but not upon TR or fasting. Expression of the Cyp450 gene Cyp4a12b (Fig. 5b) was down-regulated upon CR, while there was no effect of TR or fasting. Expression of Parp16 (Supplementary Fig. S3a) was statistically significantly up-regulated, and expression of Mup4 and Alas2 (Fig. 5f and Supplementary Fig. S3c) was statistically significantly down-regulated, upon CR at several time points, while at other time points there was no difference between CR and the AL, TR or fasting samples. Effects on the expression of the other tested target genes were not CR-specific: for example, the expression of the Cyp450 gene Cyp4a14 (Fig. 5c) was dramatically up-regulated upon CR, and the expression of Hes6 (Fig. S3b) and Hsd3β5 was also affected by the other feeding regimens.

Discussion

Calorie restriction leads to multiple physiological and metabolic changes, which may contribute to longevity 12,13. Two major features of the CR paradigm in mammals are (1) reduced calorie intake and (2) periodicity in the availability of food. This periodic feeding resembles another feeding paradigm known as time-restricted feeding. It has been reported that TR can affect the circadian clocks 3. The expression of circadian clock genes in the liver can be affected by feeding time: for example, restricting the time of food availability to the light phase of the day (non-physiological for rodents) leads to a shift in the expression phase of the clock genes in the liver compared with AL-fed mice, which predominantly feed during the dark phase of the day 24,37,38.
The composition of food also has a significant effect on gene expression: a high-fat diet results in reduced amplitude of the expression of most clock genes in the liver 39, and time-restricted feeding applied to the high-fat diet restores the rhythms of expression 39. Interestingly, the expression of clock genes is also affected by pathological conditions: for example, the development of insulin resistance and diabetes in streptozotocin-treated mice is accompanied by a significant induction of the expression of several clock genes (Per1, Per2, Bmal1 and Cry1) 40. The detailed molecular mechanisms connecting feeding and nutrients with clock gene expression remain to be studied; nutrient- or energy status-responsive chromatin-modifying enzymes interacting with the clock machinery have been proposed as important mediators 41. In the present study we compared short-term 30% CR with TR and fasting. Food was provided at the same time for all groups during the dark phase of the day (the time of normal feeding for the AL group); as expected, the phase of clock gene expression was not significantly affected by CR or TR. CR significantly induced the expression of several clock and clock-controlled genes. Circadian clocks in tissues are formed by individual cellular oscillators; thus, one possible explanation of the observed effect of CR is robust synchronization of these individual cellular oscillators. However, upon synchronization one would expect an increase in the amplitude of the rhythms rather than an effect on average expression. We found that CR affected the daily average expression of the Per1, Per2, Cry2, Dec2 and Hlf genes, suggesting that increased synchronization cannot be the only mechanism. In addition, the expression of some clock genes, such as Rev-Erbα, was not affected by CR. Finally, CR-induced changes in gene expression were different from those of TR, suggesting that periodic feeding is not the major contributor to the effects of CR on clock gene expression. In the study by Mendoza et al., the effect of CR on the expression of circadian clock genes was assayed in the SCN 42. While we cannot compare the results of that study with ours directly, because different tissues were assayed and the food was provided at different times of the day (at ZT6 in Mendoza et al. and at ZT14 in our study), in both studies CR had a significant effect on the expression of several, but not all, tested clock genes. Interestingly, analysis of published microarray data on the influence of CR on mRNA expression in different mouse tissues, pooled from multiple independent studies performed by different groups 43,44, identified two clock genes, Per1 and Per2, among the several genes whose expression was affected by CR in many tissues. However, the up-regulation of Per1 and Per2 expression was not detected in every study; for example, up-regulation of Per1 and Per2 mRNA expression in the liver was detected in two out of seven studies 43. Our data confirmed that the expression of the Per genes is indeed affected by CR and provided a potential explanation for the discrepancy between results published by different groups, as different groups may utilize different time points for tissue collection. A sharp reduction in calorie intake also affects the expression of several clock genes: Per1 expression is up-regulated, and Per2 expression is down-regulated, upon fasting 28,30.
In our study fasting also led to an increase in Per1 expression; this change was similar to that induced by CR, though the magnitude of the effect was modest compared with CR. In sharp contrast to the effects of CR, which led to increased expression of the Per2, Per3, Cry2 and Rorγ mRNAs, fasting resulted in decreased expression of these genes. Finally, fasting did not affect the expression of Bmal1 (induced by CR) or Dec2 (suppressed by CR). Our data argue that CR results in changes in the molecular circadian clockwork in the liver; these changes are most likely a result of metabolic adaptation to CR, because they are different from the effects of periodic feeding and of a sharp reduction in calorie intake. The expression of circadian clock genes is controlled by several transcription factors. The BMAL1/CLOCK (BMAL1/NPAS2) complex is considered a major regulator of Per1, Per2, Cry1, Cry2, Rev, Ror, Dec, Ppar and Dbp gene expression. The effect of CR on the expression of some of these genes was significantly impaired in the liver of Bmal1−/− mice, which supports the involvement of BMAL1-containing transcriptional complexes in the observed changes in expression. On the other hand, CR affected not all of the BMAL1 targets; moreover, the induction of Cry1 expression by CR was intact in Bmal1−/− mice, suggesting the existence of both BMAL1-dependent and BMAL1-independent mechanisms. The existence of BMAL1-independent mechanisms is not surprising; indeed, it has been reported that circadian food anticipation rhythms are BMAL1-independent 45,46. The circadian clockwork is organized as a network of many positive and negative feedback loops 2: for example, E4BP4 contributes to the circadian control of the expression of the Per2, Cry1, Clock and Rorγ genes, and the transcription factors of the REV-ERB and ROR families control the expression of Bmal1 and E4bp4 27. Furthermore, clock gene expression is regulated at the post-transcriptional and post-translational levels 47; therefore, the effect of CR on expression may be complex and cannot be explained exclusively through the activation of only one transcription factor. The effects of different feeding regimens on the expression of clock genes at the protein level are much less studied. In contrast to the increased expression of several clock genes at the mRNA level, we detected a statistically significant down-regulation of CRY1 and a tendency towards reduction for the other proteins (the difference did not reach statistical significance) upon 30% CR. This discrepancy implies regulation at the post-transcriptional level; in agreement with that, we observed reduced levels of the BMAL1 and CLOCK proteins and increased mRNA levels of their transcriptional targets. CRY1 is a suppressor of CLOCK/BMAL1 transcriptional activity. We found that CR led to a dramatic reduction of the CRY1 protein level, which is consistent with an increase in the mRNA expression of BMAL1/CLOCK targets. Fasting had an effect similar to that of CR on the CRY1 protein level. Interestingly, in cells depleted of glucose, AMPK phosphorylates CRY1, which leads to CRY1 degradation 48. Under both CR and fasting conditions, the blood glucose level is significantly reduced 49,50. Hence, it is possible that AMPK plays a role in the observed reduction of the CRY1 protein level. At the same time, the reduction in blood glucose upon CR or fasting is relatively moderate (in comparison with severe glucose depletion in cell culture), and it is currently a matter of debate whether it is sufficient to activate AMPK in vivo 51.
Therefore, other mechanisms, such as reduced CRY1 translation or altered degradation, may be involved 52,53. It is also necessary to mention that while CR and fasting have similar effects on the CRY1 protein level, we currently cannot say whether the down-regulation of CRY1 protein upon CR and fasting occurs through the same molecular mechanism or through different mechanisms. We also did not detect a direct correlation between Per1 and Per2 expression at the mRNA and protein levels. It is known that the stability of the PERs is regulated by phosphorylation and by the formation of complexes with the CRYs; thus, one may expect that a reduced level of CRY1 is associated with increased mRNA and decreased protein levels of the Pers. Importantly, regardless of the exact mechanisms affecting circadian protein levels, fasting and CR have very different effects on circadian mRNA expression: in contrast to CR, which affects the mRNA expression of multiple circadian clock and clock-controlled genes, fasting did not significantly affect the mRNA expression of the same genes. Therefore, while a reduced level of CRY1 protein may contribute to the increased expression of clock genes, the absence of this effect upon fasting argues for the existence of CRY-independent mechanisms as well. The mechanisms of CR are not well understood; several dozen genes have been put forward as targets and mediators of CR through transcriptome analysis and called candidate longevity genes. However, in most of these studies the expression analysis was performed at only one time point. We assayed circadian profiles of the expression of ten candidate longevity genes, selected as genes known to influence different processes such as xenobiotic metabolism, protein binding and transport, steroid biosynthesis, heme binding and cell proliferation 35. Although their importance in aging and lifespan extension has not been directly established, some of these genes could be promising candidates required for longevity. We found that two out of the ten genes passed the criterion of being regulated exclusively by CR at all time points tested: expression of Cyp4a12b was suppressed and expression of Fmo3 was induced by CR, but not by TR or fasting. Four genes (Parp16, Alas2, Igfals and Mup4) were also regulated by CR, but the effect on mRNA expression was time-of-day dependent. This time-of-day dependence of CR effects may explain the contradictory results for some CR-regulated genes: an example is Parp16 36, which according to our data is significantly up-regulated only at ZT2, 10 and 14 but not at ZT6, 18 and 22. Importantly, the expression of 7 out of the 10 tested genes showed changes across the day, and 2 of them showed circadian rhythms in expression. Analysis of gene expression in a circadian manner is still not common practice in many fields, but our data argue that circadian rhythms must be taken into account in future studies. Both CR and fasting up- or down-regulated four of the candidate longevity genes; therefore, we conclude that the regulation of these genes is not CR-specific. These results argue for treating data accumulated across different studies in analyses of the CR transcriptome with great caution. Interestingly, 6 out of the 11 tested circadian clock genes and 6 out of the 10 tested longevity-associated candidate genes were regulated by CR, suggesting that clock genes may themselves be considered longevity-associated candidate genes.
Indeed, an analysis of longevity among different species recognized circadian rhythms as one of the candidate targets contributing to the evolution of longevity 54, further strengthening our hypothesis. CR and TR have different effects on circadian clocks: in contrast to TR, which does not affect the clock in the SCN, CR can entrain the SCN clock 55. Metabolic benefits of TR have been demonstrated for rodents fed a high-fat diet 39; however, the metabolic benefits of TR for animals on regular chow have not been assayed. While our data cannot exclude a potentially beneficial impact of TR, there is a clear difference between CR and TR in terms of the expression of circadian clock, clock-controlled and longevity candidate genes. Interestingly, in contrast to CR, TR does not extend lifespan in rodents; therefore, the beneficial effects of CR on longevity correlate with the effects of CR on the circadian clocks. CR results in significant metabolic changes in mammals and other organisms 13, the circadian clocks are major regulators of metabolism 56, and according to our data, CR significantly affected circadian clockworks, including the expression of the core circadian clock genes and of circadian clock-controlled transcription factors, which provide the circadian output in metabolism. One possible interpretation of the results is that CR disrupts the circadian molecular clockwork, because the expression of many clock genes was significantly changed. This is to some extent paradoxical, because clock disruption is associated with many pathological conditions and has been proposed as a contributing factor to disease; yet CR has beneficial effects on longevity in spite of this apparent clock disruption. Alternatively, we propose that the CR-dependent effect on the circadian clocks is a necessary component of the metabolic adaptation to CR. Indeed, maintenance of physiological homeostasis under conditions of limited energy supply requires substantial optimization of biochemical processes. According to the existing paradigm, the circadian clocks synchronize metabolic processes through the control of the expression of multiple rate-limiting enzymes via the regulation of circadian clock output transcription factors 57. CR recruits circadian clock-dependent mechanisms of optimization in order to increase the fitness of the organism. Future studies focused on the effects of CR in circadian clock mutants will help to clarify the connections between the clock and CR.

Methods

Mice were of C57B6J background. Bmal1−/− mice were obtained from the laboratory of Dr. Bradfield 58. Animals were maintained on a 12:12 light:dark cycle with lights on at 7:00 am and were fed the 18% protein rodent diet (Harlan). The ad libitum (AL) group had unrestricted access to food. Calorie restriction (CR) was started at 3 months of age: for the first week animals were on 10% restriction, for the second week on 20% restriction, and on 30% restriction for the rest of the experiment. The CR group received their food once per day at ZT14 (two hours after lights off). After two months of CR, tissues were collected at six different time points across the day. The time-restricted (TR) feeding group received 100% of their average daily intake as one meal at ZT14. TR started at 4.5 months of age; mice were on TR for 2 weeks before tissue collection. The fasting group (F) was on the same TR feeding regimen for 2 weeks, but did not receive food on the day of tissue collection.
All groups had unrestricted access to water. All tissue collection experiments were performed on 5-month-old wild type (WT) and Bmal1−/− male mice. For all experiments three animals of each genotype, feeding regimen and time point were used.

RNA isolation and analysis of mRNA expression. For gene expression studies, liver tissues from 3 male mice on each diet (AL, CR, TR and fasting) and of each genotype (WT and Bmal1−/−) were collected every four hours throughout the day and stored at −80 °C. Total RNA was isolated using TriZol reagent (Invitrogen, Carlsbad, CA) as per the manufacturer's protocol. Briefly, a frozen liver piece was minced in 1 ml TriZol reagent with a pestle on ice. Following the chloroform extraction step, total RNA was precipitated with isopropanol by centrifugation, and the pellet obtained was washed with 70% ethanol. The RNA pellet was dissolved in 30 μl of RNase-free water and quantified on a Nanodrop. RNA integrity was checked on a 1% agarose gel run at 90 V for 30 minutes. A 20 μl RT mix was prepared using 1 μg of RNA, 50 ng of 50 μM random hexamer (N8080127, Invitrogen), 10 mM dNTP (DD0058, Biobasic), 0.1 M DTT and RNaseOUT™ Recombinant RNase Inhibitor (10777-019, 40 units/μl). It was then reverse transcribed in a qPCR machine using 200 U/μl SuperScript® III Reverse Transcriptase (18080-044, Invitrogen) as per the manufacturer's instructions. Incubation conditions were: 65 °C for 10 minutes followed by incubation on ice for 1 minute; 25 °C for 5 minutes; 50 °C for 60 minutes; and inactivation of the reaction by heating at 70 °C for 15 minutes. RNA quantification was performed by qPCR with Universal SYBR Green mix (1725125, BioRad). The reaction was carried out in triplicate for the gene of interest and in duplicate for the normalizing control using a CFX96 qPCR Detection System (BioRad) with 50 ng of cDNA. Thermal cycling conditions followed the instructions of the SYBR Green mix protocol, and relative mRNA abundance was calculated using the comparative delta-Ct method with ribosomal 18S rRNA and Gapdh as reference genes, as described in 25. Water was used as the negative control for the qPCR analysis. Product specificity was confirmed by melting curve analysis, while primer pair efficiency was calculated by generating a standard curve using serial dilutions of standard. Primers used for the analysis of expression are listed in the Supplementary materials.

Immunoblot analysis. For the analysis of protein expression, tissues from three male mice per time point were used for each feeding regimen and both genotypes (WT and Bmal1−/−). Figure 3 represents Western blotting in which three liver samples from three different mice were pooled together at each time point for each diet. For the quantitative data presented in Supplementary Fig. S3, liver samples from individual mice (N = 3) were run to estimate variability between biological replicates and to calculate means and errors. For lysate preparation, frozen liver pieces were lysed in Cell Signaling lysis buffer with Protease/Phosphatase Inhibitor Cocktail (Cell Signaling Technology, Beverly, MA, USA) using a sonicator. Protein concentration was determined with a Bradford protein assay kit according to the manufacturer's protocol using a spectrophotometer, and lysates were stored at −80 °C. 45 μg of protein was loaded on 3-8% tris-acetate and 4-12% bis-tris gels (Invitrogen). Protein was transferred onto PVDF membrane at 110 mA. Equal loading of proteins was checked by Ponceau stain.
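A comparative delta-Ct calculation of the kind used above can be illustrated as follows. The Ct values are invented, and averaging the two reference genes' Ct values under an assumed ~100% primer efficiency is a simplification made for the sketch, not necessarily the exact normalization used in the study.

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """Comparative delta-Ct: fold change = 2**-(dCt_sample - dCt_calibrator).

    The target Ct is normalized against the mean Ct of the reference genes
    (here two: 18S rRNA and Gapdh), assuming ~100% amplification efficiency.
    """
    d_ct_sample = ct_target - np.mean(ct_refs)
    d_ct_calibrator = ct_target_cal - np.mean(ct_refs_cal)
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Illustrative Ct values (not from the study): a target gene in a CR sample
# vs an AL calibrator sample, each normalized to 18S and Gapdh.
fold = relative_expression(ct_target=22.1, ct_refs=[9.8, 17.5],
                           ct_target_cal=24.0, ct_refs_cal=[9.9, 17.6])
print(f"fold change vs AL: {fold:.2f}")  # ~3.5-fold up-regulation
```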
Primary antibodies anti-CRY1 (SAB Biosciences), anti-BMAL1 (Santa Cruz Biotechnology), anti-CLOCK (kindly provided by Dr. Marina Antoch, Roswell Park Cancer Institute), anti-PER2 (Alpha Diagnostics), anti-PER1 (Thermo Scientific) and anti-GAPDH (Cell Signaling) were used for the immunoblot analyses. Protein analysis and quantification were done using Scientific Imaging film and an Odyssey FC imaging system (LI-COR).

Statistical analysis. For each feeding type and each genotype, at least three animals per time point were used for all experiments. Data are shown as mean ± S.D. To assay the effects of feeding and time of day on mRNA and protein levels we performed two-way ANOVA. If the effect of feeding and/or time of day was found to be statistically significant, Bonferroni correction was used to calculate p values for pairwise comparisons between the feeding regimens at every time of day. To assay the effects of genotype, feeding and time of day on mRNA levels, the analysis was first performed using three-way ANOVA; if the effect of feeding, genotype and/or time of day was found to be statistically significant, Bonferroni correction was used to calculate p values for pairwise comparisons between the feeding regimens at every time of day. IBM SPSS Statistics 20 and GraphPad Prism Version 5.04 software packages were used for statistical analysis. P < 0.05 was considered a statistically significant difference.
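The study ran these tests in SPSS and Prism; an equivalent open-source sketch of a two-way ANOVA with Bonferroni-corrected pairwise comparisons, on synthetic data shaped like the experiment (4 diets x 6 time points x 3 mice), might look as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic expression values with a CR effect built in for illustration.
diets, times = ["AL", "CR", "TR", "F"], [2, 6, 10, 14, 18, 22]
rows = [(d, t, rng.normal(1.0 + (0.5 if d == "CR" else 0.0), 0.2))
        for d in diets for t in times for _ in range(3)]
df = pd.DataFrame(rows, columns=["diet", "zt", "expr"])

# Two-way ANOVA with interaction: feeding, time of day, and their interplay.
model = ols("expr ~ C(diet) * C(zt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-corrected pairwise comparisons vs AL at each time point.
n_tests = len(times) * (len(diets) - 1)
for t in times:
    al = df.query("diet == 'AL' and zt == @t")["expr"]
    for d in ["CR", "TR", "F"]:
        other = df.query("diet == @d and zt == @t")["expr"]
        p = stats.ttest_ind(al, other).pvalue
        print(t, d, "p_adj =", min(1.0, p * n_tests))
```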
2018-04-03T03:32:59.745Z
2016-05-12T00:00:00.000
{ "year": 2016, "sha1": "bfd7aa3b22526e8e36f3174a335cf92576e8887f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep25970.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bfd7aa3b22526e8e36f3174a335cf92576e8887f", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
37453846
pes2o/s2orc
v3-fos-license
Control of a cantilever array by Periodic Networks of Resistances

In this paper, we present a two-scale model including an optimal active control for a one-dimensional cantilever array with regularly spaced actuators and sensors. With the purpose of implementing the control in real time, we propose an approximation that may be realized by an analog distributed electronic circuit. More precisely, our analog processor is made of Periodic Networks of Resistances (PNR). The control approximation method is based on two general concepts, namely functions of operators and the Dunford-Schwartz representation formula. We conducted validations of the control approximation method as well as of its effect in the complete control loop.

Introduction

In the past decade, a number of papers have focused on semi-decentralized distributed optimal control for systems with distributed actuators and sensors. Most of them deal with infinite-length systems; see [1] and [10] for systems governed by partial differential equations, and [3] for discrete systems. In the papers [4] and [5] the authors introduced an approximation of an optimal control for a finite-length beam endowed with a periodic distribution of piezoelectric sensors and actuators. Even though it gave satisfactory results, it suffered from some limitations. In [9] it was extended so as to cover a larger range of systems and to increase its precision and robustness. Indeed, the new method does not require that each operator of the state equation and of the cost functional be a function of one and the same operator; they must only be functions of a same operator up to some change-of-variable operators. Regarding precision, the Taylor series approximating a function of an operator has been replaced by the Dunford-Schwartz representation formula followed by a quadrature rule for the contour integral. Here we apply our new method to a recently developed and validated two-scale model of cantilever arrays, submitted in the paper [8]. It is rigorously justified thanks to an adaptation of the two-scale approximation method introduced in [6] and detailed in [7]. Its main advantage is that it requires little computing effort while remaining reasonably precise. This paper presents results from an implementation of the new semi-decentralized optimal control strategy on the two-scale model of cantilever arrays. We provide results regarding precision and cost. Our calculations have been carried out using the simplest optimal control strategy, namely a Linear Quadratic Regulator. As in [5], we also provide a realization of the semi-decentralized control scheme through a Periodic Network of Resistances (PNR), implementing a finite difference scheme for the partial differential operator in the Dunford-Schwartz formula. Finally, we note that the entire approach can be extended to other linear optimal control problems, i.e. LQG or H∞ controls, as well as to more physical actuating and sensing principles.

A Two-Scale Model of Cantilever Arrays

We consider a one-dimensional cantilever array comprised of an elastic base and a number of clamped elastic cantilevers with free ends, see Figure 1. Assuming that the number of cantilevers is sufficiently large, a homogenized model was derived using a two-scale approximation method. This is reported in the detailed paper [7], devoted to the static regime; the corresponding model extended to the dynamic regime is introduced in the letter [6].
The modelling papers were written in view of Atomic Force Microscopy applications.

[Fig. 1: Array of Cantilevers]

After a number of simplifications, the approximate homogenized model is expressed in the two-scale referential, which is the rectangle Ω = (0, L_B) × (0, L*_C). The parameters L_B and L*_C represent respectively the base length in the macroscale x-direction and the scaled cantilever length in the microscale y-direction. The base is modelled by the line Γ = {(x, y) | x ∈ (0, L_B) and y = 0}, and the rectangle Ω is filled by an infinite number of cantilevers. We describe the system motion by its bending displacement only. The base is governed by an Euler-Bernoulli beam equation with two kinds of distributed forces: one exerted by the attached cantilevers and the other, denoted by u(t, x, 0), originating from an actuator distribution. The bending displacement, the mass per unit length, the bending coefficient and the scaled cantilever width being denoted by w(t, x, 0), ρ_B, R_B and ℓ*_C, the base governing equation states

ρ_B ∂²_t w + R_B ∂⁴_x w = f_C + u(t, x, 0) on Γ,

where f_C denotes the lineic force exerted by the cantilevers. The base is assumed to be clamped, so the boundary conditions w = ∂_x w = 0 hold at its ends. Each cantilever is oriented in the y-direction, and its motion is governed by the Euler-Bernoulli equation distributed along the y-direction. It is subjected to a control force u(t, x, y), taken as distributed along each whole cantilever; it can be replaced by any other realistic force distribution. Denoting by w(t, x, y), ρ_C and R_C the cantilever bending displacement, mass per unit length and bending coefficient, the governing equation

ρ_C ∂²_t w + R_C ∂⁴_y w = u(t, x, y) in Ω

is endowed with the boundary conditions ∂_y w = 0 at y = 0, representing an end clamped in the base, and ∂²_y w = ∂³_y w = 0 at y = L*_C, representing a free end. The weak formulation associated with (1)-(4) holds for any regular test function v satisfying in particular the conditions v = ∂_x v = 0 at both ends of the base and ∂_y v = 0 at the junction y = 0.

Model Reformulation

To simplify the model while keeping its distributed feature, we discretize in the y-direction by projecting on the basis K_n(y) = ∫₀^y s T′_n(s) ds, where T_n denotes the n-th Chebyshev polynomial. We define the approximations of the displacement and of the control by

w(t, x, y) ≈ Σ_n w_n(t, x) K_n(y) and u(t, x, y) ≈ Σ_n u_n(t, x) K_n(y),

where w_n(t, x) and u_n(t, x) are the polynomial coefficients in the approximations of w and u respectively. Projecting the dynamics onto this basis, together with the base boundary conditions and compact notations, yields the state equation (6). The LQR problem is set for control variables (u_n)_{n=1,2,...,N} ∈ L²(Γ)^N and for a quadratic cost functional; the choice of the functional is related to vibration stabilization of the microcantilever array.

Classical Formulation of the LQR Problem

Now we write the above LQR problem in a classical abstract setting, see [2], even if we do not detail the functional framework. We denote by A the state operator, by B the control operator, by C the observation operator, and by S = I_{N×N} the weight operator. Consequently, the LQR problem, consisting in minimizing the functional under the constraint (6), may be written in its usual form

∂_t z = Az + Bu, z(0) = z_0,

with the minimized cost functional (7). We know that (A, B) is stabilizable and that (A, C) is detectable, in the sense that the system is controllable and observable. It follows that for each z_0, the LQR problem (8) admits a unique solution u = −Kz, where K = S⁻¹B*P, and P is the unique self-adjoint nonnegative solution to the operational Riccati equation

A*P + PA − PBS⁻¹B*P + C*C = 0.

Semi-Decentralized Approximation

This section is devoted to formulating the approximation method; the mathematical derivation has been introduced in the paper [9]. We denote by Λ the mapping f ↦ w, where w is the solution of the fourth-order boundary value problem ∂⁴_x w = f on Γ with the clamped boundary conditions, so that Λ is a compact self-adjoint operator on L²(Γ). The spectrum σ(Λ) is discrete and made up of real eigenvalues λ_k.
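To make the base model concrete, the sketch below assembles the standard five-point finite-difference matrix of a clamped fourth-order beam operator and checks its first eigenvalues against the analytic clamped-clamped values. It is a minimal numerical illustration of the base equation alone, with unit coefficients and no cantilever coupling, and is not the coupled two-scale model.

```python
import numpy as np

def clamped_beam_operator(n, length):
    """Finite-difference matrix for d^4/dx^4 on (0, L) with clamped ends.

    Interior nodes x_1..x_n with spacing h; the clamped conditions
    w = w' = 0 at both ends are imposed through ghost points
    (w_0 = 0, w_{-1} = w_1, and symmetrically at the right end).
    """
    h = length / (n + 1)
    A = np.zeros((n, n))
    stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    for i in range(n):
        for k, s in enumerate(stencil):
            j = i + k - 2
            if 0 <= j < n:
                A[i, j] += s
    # Ghost-point reflection enforcing w'(0) = 0 and w'(L) = 0.
    A[0, 0] += 1.0
    A[-1, -1] += 1.0
    return A / h**4

A = clamped_beam_operator(n=400, length=1.0)
evals = np.sort(np.linalg.eigvalsh(A))[:2]
# Analytic eigenvalues of a clamped-clamped beam are beta_k^4 with
# beta_1*L = 4.7300 and beta_2*L = 7.8532.
print(np.sqrt(np.sqrt(evals)))  # ~ [4.73, 7.85]
```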
They are solutions to the eigenvalue problem Λφ_k = λ_k φ_k with ‖φ_k‖_{L²(Γ)} = 1. In the sequel, I_σ = (σ_min, σ_max) refers to an open interval that includes the complete spectrum. Factorization of K by a Matrix of Functions of Λ In this part, we introduce the factorization of the controller K in the form of a matrix of functions of Λ. To do so, we introduce change-of-variable operators, from which we build the matrices of functions a(λ), b(λ), c(λ) and s(λ) associated with A, B, C and S. Then, from the Riccati equation (9), the optimal controller K admits the factorization K = q(Λ), where q(λ) = s⁻¹(λ) bᵀ(λ) p(λ), and where for all λ ∈ σ(Λ), p(λ) is the unique self-adjoint nonnegative matrix solving the algebraic Riccati equation aᵀ(λ)p(λ) + p(λ)a(λ) - p(λ)b(λ)s⁻¹(λ)bᵀ(λ)p(λ) + cᵀ(λ)c(λ) = 0. Approximation of the Functions of Λ We build the approximation in two steps. First, we use a rational approximation k_R(Λ) of k(Λ); then it is approximated by another function k_{R,M} which is simple to discretize and yields an accurate approximation. To do so, we use the Dunford-Schwartz formula, see [12], representing a function of an operator, because it involves only the resolvent (ζI - Λ)⁻¹, which may be simply and accurately approximated. Since the function k(λ) is not known in closed form and the spectrum σ(Λ) cannot be easily determined, we approximate k(λ) by a highly accurate rational approximation k_R(λ); then the Dunford-Schwartz formula is applied to k_R(Λ) with a path tracing out an ellipse that includes I_σ but no poles. Since the interval I_σ is bounded, each function k_ij(λ) has a rational approximation over I_σ, written in a global formulation (which may be understood component-wise) as k_R(λ) = (∑_m d_m λ^m) / (∑_{m'} d'_{m'} λ^{m'}), where d_m and d'_{m'} are matrices of coefficients and R = (R_N, R_D) is the couple comprised of the matrix R_N of numerator polynomial degrees and the matrix R_D of denominator polynomial degrees. The path C in the Dunford-Schwartz formula is chosen to be an ellipse parameterized by ζ(θ) = ζ_1(θ) + iζ_2(θ), with θ ∈ [0, 2π]. The parametrization is used as a change of variable, so the integral can be approximated by a quadrature formula involving M nodes (θ_l)_{l=1,..,M} ∈ [0, 2π] and M weights (ω_l)_{l=1,..,M}: I_M(g) = ∑_{l=1}^M g(θ_l) ω_l. In the following, k_R(ζ) denotes the matrix associated with the rational approximation of numerator polynomial degrees R_N and denominator polynomial degrees R_D. For each z ∈ L²(Γ)^{2N} and ζ ∈ C, we introduce the 2N-dimensional vector field v_ζ = (ζI - Λ)⁻¹ z, decomposed into its real part v_ζ1 and its imaginary part v_ζ2. Thus, combining the rational approximation k_R and the quadrature formula yields the approximate realization k_{R,M}(Λ)z = ∑_{l=1}^M ω_l k_R(ζ(θ_l)) v_{ζ(θ_l)} (13), the Jacobian ζ'(θ_l) of the parametrization being absorbed in the weights. This formula is central in the method, so it is the center of our attention in the simulations. A fundamental remark is that a "real-time" realization of k_{R,M}(Λ)z requires solving M systems like (13), corresponding to the M quadrature nodes ζ(θ_l). The matrices k_R(ζ(θ_l)) can be computed "off-line" once and for all and stored in memory, so their determination does not penalize a rapid real-time computation. In total, the ultimate parameter responsible for accuracy in a real-time computation, apart from the spatial discretization discussed in the next Section, is M, the number of quadrature points. Circuit Implementation To realize an optimal control by a set of distributed circuits, we introduce a spatial discretization and a synthesis of Equation (13). The interval Γ is meshed with regularly spaced nodes separated by a distance h, and we introduce Λ⁻¹_h, the finite difference discretization of Λ⁻¹, associated with the clamping boundary condition.
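As a concrete illustration of the realization (13), the following minimal Python sketch computes k_{R,M}(Λ)z by a quadrature of the Dunford-Schwartz contour integral over an ellipse, solving one resolvent system per node. It is a simplified sketch under stated assumptions, not the paper's implementation: Λ is given as a symmetric positive-definite matrix Lam standing in for the discretized operator, f is a scalar stand-in for one entry k_ij of the matrix of functions, and a trapezoidal rule on [0, 2π] provides the nodes and weights; all names are illustrative.

import numpy as np

def dunford_schwartz_apply(f, Lam, z, M=20, margin=0.1):
    """Approximate f(Lam) @ z via quadrature of the contour integral
    (1/2πi) ∮_C f(ζ) (ζI - Lam)^{-1} z dζ on an ellipse enclosing σ(Lam)."""
    lam = np.linalg.eigvalsh(Lam)                      # spectrum of the symmetric operator
    c = 0.5 * (lam.min() + lam.max())                  # ellipse center on the real axis
    a = 0.5 * (lam.max() - lam.min()) * (1 + margin)   # real semi-axis, covering I_sigma
    b = 0.25 * a                                       # small imaginary semi-axis (no poles assumed nearby)
    theta = 2 * np.pi * np.arange(M) / M               # M trapezoidal quadrature nodes
    zeta = c + a * np.cos(theta) + 1j * b * np.sin(theta)   # ζ(θ), counterclockwise
    dzeta = -a * np.sin(theta) + 1j * b * np.cos(theta)     # ζ'(θ)
    w = 2 * np.pi / M                                  # trapezoidal weights
    I = np.eye(Lam.shape[0])
    acc = np.zeros_like(z, dtype=complex)
    for zl, dzl in zip(zeta, dzeta):
        v = np.linalg.solve(zl * I - Lam, z)           # one resolvent solve per node, cf. (13)
        acc += f(zl) * v * dzl * w
    return (acc / (2j * np.pi)).real                   # result is real for real f on σ(Lam)

# Usage sketch: compare against the exact spectral evaluation of f(Lam) z.
n = 50
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Lam = Q @ np.diag(np.linspace(1.0, 10.0, n)) @ Q.T
z = rng.standard_normal(n)
approx = dunford_schwartz_apply(np.sqrt, Lam, z, M=40)
vals, vecs = np.linalg.eigh(Lam)
exact = vecs @ (np.sqrt(vals) * (vecs.T @ z))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))

As in the text, only M resolvent solves depend on the incoming data z; everything evaluated at the fixed nodes ζ(θ_l) can be precomputed off-line.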
In practice, the discretization length h is chosen small compared to the distance between cantilevers. Then, z_h denoting the vector of nodal values of z, for each ζ we introduce v_{ζ,h}, the solution of the discrete set of equations (15). Finally, an approximate optimal control, intended to be implemented in a set of spatially distributed actuators, can be estimated from the nodal values at the mesh nodes, as in (16). (Fig. 3: Five adjacent interior cells.) We propose a synthesis of (15)-(16) by a distributed electronic circuit. The system is rewritten in the manageable form (17)-(18) and, for the sake of simplicity, we use the notations α = Re(k_R(ζ)) and the related real/imaginary decompositions. Analog computation of Λ_h v The analog computations of Λ_h v_1 and Λ_h v_2 are made by Periodic Network of Resistances (PNR) circuits [11]. These electronic circuits have been developed to solve a large class of PDEs by analog computation; more exactly, PNR circuits compute the finite difference solution of a PDE. PNR circuits are assemblies of cells (Figure 2); the interior cells are indexed by k = 1, ..., N - 1, while the boundary cells correspond to k = -1, 0, N and N + 1. We will show that the circuits solve the equations Λ⁻¹_h u_1 = i_1. If the current sources i_1 are replaced by voltage-controlled current sources defined by i_1 = g v_1 (where g is a real number), the voltage outputs u_1 of the circuits solve g(Λ_h v_1), and hence give Λ_h v_1. The computation of Λ_h v_2 is done in the same way. The interior cell k, which computes (Λ_h v_1)_k, is represented in Figure 3 with its two adjacent cells on each side. We call ρ_1 the resistances between the potential u^(k)_1 and its neighbors. If one chooses the negative resistance ρ_1 = -h⁴ρ_0 and the positive resistance ρ_2 = h⁴ρ_0/4, then the potential at node u^(k)_1 is expressed as a function of its neighbor voltages, which is the stencil of the finite difference operation associated with Λ⁻¹. Consequently, the whole electronic circuit composed of N - 1 cells computes the finite difference approximation. The numerical value of ρ_0 only changes the magnitude of the voltages u^(k)_1. The values of the resistances inside a cell depend only on the circuit topology and are easily expressed as functions of ρ_1 or ρ_2; here the resistances of the cells can be taken as r_1 = r_3 = r_4 = r_6 = ρ_1/4 and r_2 = r_5 = ρ_2/2. The VCCS (Voltage Controlled Current Source) configuration of the boundary cells corresponds to the clamping boundary condition. Remark that the terminals denoted by a cross are not connected, so the resistances linked to them on one side can be removed without changing the behavior of the circuits. They are kept to show clearly the real difference between interior cells and boundary cells. Analog computation of equation (17) The analog computation of Equation (17) can be made by an array of classical non-inverting summing amplifiers, see Figure 5. Notice that there is no current exchange between these circuits and the PNR inputs and outputs, see the buffers in Figure 3. Analysis of the circuit of Figure 5 leads to (19); with a proper choice of resistances, the circuit of Figure 5 solves (17). Analog computation of equation (18) In a very similar way, the analog computation of Equation (18) can be made by an array of classical difference summing amplifiers (Fig. 6: Analog computation of the k-th equation (18)). Numerical Simulation In this Section, we validate the approximation method, established in Section 5, by a numerical simulation.
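To make the spatial discretization step concrete, here is a minimal Python sketch that assembles a finite-difference matrix Λ_h for a clamped fourth-order (beam-type) operator on a regular mesh of Γ, a plausible stand-in since the paper does not restate Λ explicitly here, and then solves the real/imaginary split of one resolvent system (ζI - Λ_h)v = z, which is the computation the PNR cells and summing amplifiers emulate in analog form. Both the exact form of Λ and all names are assumptions for illustration.

import numpy as np

def assemble_clamped_fourth_order(n, h):
    """Finite-difference matrix for d^4/dx^4 on n interior nodes, clamped
    (w = w' = 0) at both ends, using the 5-point stencil (1,-4,6,-4,1)/h^4
    with reflected ghost nodes at the boundaries."""
    A = np.zeros((n, n))
    stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0]) / h**4
    for i in range(n):
        for k, s in zip(range(i - 2, i + 3), stencil):
            if 0 <= k < n:
                A[i, k] += s
    # Clamped BC: w' = 0 gives ghost values w(-1) = w(1) and w(n) = w(n-2),
    # which fold back onto the first and last interior nodes.
    A[0, 0] += 1.0 / h**4
    A[-1, -1] += 1.0 / h**4
    return A

def solve_resolvent_split(Lam_h, zeta, z):
    """Solve (ζI - Λ_h)v = z for ζ = ζ1 + iζ2 and real z by the splitting
        ζ1 v1 - ζ2 v2 - Λ_h v1 = z,
        ζ2 v1 + ζ1 v2 - Λ_h v2 = 0,
    mirroring the real/imaginary decomposition used for the analog circuits."""
    n = Lam_h.shape[0]
    I = np.eye(n)
    z1, z2 = zeta.real, zeta.imag
    big = np.block([[z1 * I - Lam_h, -z2 * I],
                    [z2 * I, z1 * I - Lam_h]])
    sol = np.linalg.solve(big, np.concatenate([z, np.zeros(n)]))
    return sol[:n] + 1j * sol[n:]

# Usage sketch: check the residual of one resolvent solve.
n, L = 99, 1.0
h = L / (n + 1)
Lam_h = assemble_clamped_fourth_order(n, h)
z = np.sin(np.pi * np.linspace(h, L - h, n))
v = solve_resolvent_split(Lam_h, 2.0 + 1.0j, z)
print(np.linalg.norm((2.0 + 1.0j) * v - Lam_h @ v - z))  # residual ≈ 0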
We consider a silicon array comprised of a clamped elastic base and 10 elastic cantilevers, with base dimensions L_B × l_B × h_B = 500 µm × 16.7 µm × 10 µm and cantilever dimensions L_C × l_C × h_C = 41.7 µm × 12.5 µm × 1.25 µm. The model parameters of base and cantilever are: the bending coefficients R_B = 1.09×10⁻⁵ N/m and R_C = 2.13×10⁻⁴ N/m, and the masses per unit length ρ_B = 0.0233 kg/m and ρ_C = 0.00291 kg/m. In the rational approximation, the numerator polynomial degrees R_N and the denominator polynomial degrees R_D can be chosen sufficiently large (namely R_N = R_D = 20) so that the relative error between the exact solution k and its rational approximation k_R, e = ‖k_R - k‖_{L²(I_σ)} / ‖k‖_{L²(I_σ)}, is on the order of 10⁻⁸. This choice has no effect on real-time computation time. The relative errors between the exact control function and the final approximation are shown in Figure 7, for a number of quadrature nodes M varying from 5 to 20. This number may be easily tuned without changing the spatial complexity associated with the finite difference discretization of Λ⁻¹. We also present the ratio of the computation time for solving the whole system, for a varying number of nodes M, to the reference computation time for M = 20, see Figure 8. Conclusion In this paper, we have presented a semi-decentralized approximation of an optimal control operator applied to a two-scale model of microcantilever arrays. This model is discretized in the y-direction by projecting on a transformed basis of Chebyshev polynomials. It has been shown that the semi-decentralized optimal controller can be implemented by a set of distributed electronic circuits. Numerical simulations have been carried out to validate the method and study its performance. This method can be extended to other optimal control theories, such as LQG or H∞.
2017-02-18T15:26:08.020Z
2010-04-26T00:00:00.000
{ "year": 2010, "sha1": "c1b2ad0ed427c8d9fae6c870e359bd0f2be772ae", "oa_license": "CCBY", "oa_url": "https://hal.archives-ouvertes.fr/hal-00677432/file/Hui2010.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "1425e48682db8ac6873f0c896fef654d1e357080", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
260399181
pes2o/s2orc
v3-fos-license
FIRST OCCURRENCE OF PLIORHINUS CF. MEGARHINUS (PERRISSODACTYLA, RHINOCEROTIDAE) IN GREECE Pliocene rhinoceros-bearing fossiliferous localities are very limited in Greece. The rhinocerotid from the locality of Allatini, near Thessaloniki, presented here has long been cited in the literature but has never been studied in detail up to now. This taxon is represented by a single specimen, a radius of a subadult individual, which is herein studied in detail to clarify its systematic position. Both morphological and metrical data suggest its assignment to the genus Pliorhinus, and more specifically to the species P. megarhinus. Pliorhinus megarhinus thrived in Eurasia from the latest Miocene to the Late Pliocene; however, its records are so far restricted to a few localities, mainly in Italy and France. This is the first known occurrence of Pliorhinus in Greece, resulting in a slight enrichment of the local and European Pliocene Rhinocerotidae record. Introduction During the Pliocene, the European Miocene rhinocerotids (e.g., Dihoplus, Miodiceros, Brachypotherium, Aceratherium) were replaced by the genera Stephanorhinus and Pliorhinus, known in this period from four species: P. megarhinus, P. miguelcrusafonti, S. jeanvireti and S. etruscus. Pliorhinus megarhinus and S. jeanvireti are large-sized species, especially P. megarhinus, which preserves a massive skull and wide, thick nasal bones without a nasal septum (Pandolfi and Rook, 2017). Its first appearance, in Kávás (Hungary), is recorded in Mammal Neogene Zone MN12-MN13 (Turolian), and the species survived until the late Early Pliocene in Europe (MN14-MN15; Ruscinian) and probably until the latest Pliocene (MN15-MN16; early Villafranchian) in Russia (Guérin, 1980; Fukuchi et al., 2009; Pandolfi, 2013; Pandolfi et al., 2015, 2016). The species P. miguelcrusafonti is chronologically restricted, limited to a few Spanish, French, and Georgian localities (Guérin, 1980; Pandolfi et al., 2021; Pandolfi et al., 2022). This medium- to small-sized Pliocene taxon, which is larger than S. etruscus but smaller than the rest of the Pliocene species, was found along with P. megarhinus. There had not been any new records of the species since the 1900s, until the recent findings in the locality of Kvabebi in Georgia and in Spain (Guérin and Santafe-Llopis, 1978; Pandolfi et al., 2021; Pandolfi et al., 2022). The Kvabebi (Georgia) record of Pliorhinus in MN16a (early Villafranchian) also represents the last known European occurrence of the genus (Pandolfi et al., 2021), which did not survive into the Pleistocene. During the same time, the genus Stephanorhinus, initially represented by S. jeanvireti and S. etruscus, made its appearance and continued into the Pleistocene with several species, i.e., S. kirchbergensis, S. hemitoechus, S. hundsheimensis. To these species that roamed Europe the genus Coelodonta was later added; it arrived in Western Europe from Asia during the Middle Pleistocene (Kahlke and Lacombat, 2008; Uzunidis et al., 2022). In Greece, Pliocene rhinocerotid records are very limited (Guérin and Tsoukala, 2013; Tsoukala, 2018). Stephanorhinus sp. has been reported from the Upper Pliocene lower fossil layers of Sesklo, and Rhinocerotidae indet. from the Lower Pliocene site of Apolakkia on Rhodes Island (Symeonidis, 2006; Athanassiou 2018; Giaourtsakis, 2022). Here we report the first Greek occurrence of a single specimen of Pliorhinus, from the Lower Pliocene site of Allatini, nowadays within the urban fabric of Thessaloniki, and discuss the geographic distribution of the genus.
Geological Setting and Age of the Site The site of Allatini is located in East Thessaloniki and was named after a private company that exploited clay pits (Syrides, 1990; Vlachos et al., 2015 and references therein). The deposits nowadays are either fully exploited or covered and turned into urban landscapes. The stratigraphy of western Chalkidiki, including the area of Thessaloniki, was described by Syrides (1990), who divided the Neogene deposits into six formations (Fm) (Fig. 1). The local stratigraphy of the Allatini site was summarized by Sickenberg (1972), based on unpublished data provided by Prof. Gardikas and on Stevanovic (1964). According to these lines of evidence, the stratigraphy of Allatini starts with a unit of brown-grey sandy clays with brown sandstone intercalations. It continues with clays, marls, and sands characterized by the presence of fossil invertebrates of Paratethyan origin. The succession continues with a unit of sandy clays, from which the poor vertebrate fossil remains of Allatini most likely come, and ends with a unit of brownish cross-stratified sands with conglomerate intercalations. The local data are well correlated with the upper layers of the Trilophos Fm and the lower layers of the Gonia Fm (Syrides 1990), representing the gradual transition between these two formations. Syrides (1990) dated this transition to the end of the Upper Miocene-lowermost Pliocene. Studies on the vertebrate fauna recovered from Allatini are very few. Marinos (1965) reported findings of Elephas sp., Rhinoceros sp. and Vulpes sp.; all these taxonomic assignments are considered outdated today. Further studies focused only on the Canidae, more specifically on the mandible of Eucyon odessanus, which is, along with the Nyctereutes from Megalo Emvolon, the earliest evidence of canids in Greece (Sickenberg, 1972; Koufos, 1997). As for the single Rhinocerotidae specimen known from this site, no meticulous research has been done and the finding was simply assigned to Rhinocerotidae indet. (Sickenberg, 1972; Symeonidis et al., 2006; Giaourtsakis, 2022). Systematic Paleontology The generic attribution adopted here follows the diagnostic characteristics discussed by Pandolfi et al. (2016). The species has also been referred to the genus Dihoplus, based on Heissig's (1999) hypothesis of an evolutionary lineage from the Late Miocene Dihoplus schleiermacheri to the Late Pliocene Dihoplus megarhinus. Besides, Fortelius et al. (1993) and Cerdeño (1995) included the latter species in Stephanorhinus, though there are no important morphological characters in common (i.e., Pandolfi et al., 2016). The present study follows the most recent review by Pandolfi et al. (2021), according to which the species is included in a new genus, namely Pliorhinus, along with the species P. miguelcrusafonti from Spain. Description The specimen from Allatini is a well-preserved right radius (Fig. 2) belonging to a subadult individual; the suture between the distal epiphysis and the diaphysis is not completely fused. In anterior view (Fig. 2A), the coronoid process is prominent, forming an obtuse angle with the proximal border. In the same view, the radial and lateral tuberosities are evident; the posterior process is damaged; the proximo-medial border is convex and the proximo-lateral border is straight and slightly shorter than the medial one. In anterior view, the medial border is longer and more downwards directed than the lateral one. In posterior view (Fig. 2B), a triangular lateral articular surface for the ulna is present, while the medial one is not preserved.
In proximal view (Fig. 2C), the medial articular surface is sub-squared, with a convex anterior border and a roughly convex medial one. The anterior border of the proximal articulation is slightly concave at the level of the coronoid process. The posterolateral border of the proximal epiphysis is roughly straight, forming a ~45° angle with the posteromedial one. On the distal epiphysis and in anterior view, the styloid process is prominent. The distal border of the articular surface for the semilunar is convex, with a convex distal outline. In distal view (Fig. 2D), the posterior portion of the articular surface for the scaphoid extends backwards. The anterior border of the epiphysis is concave at the level of the extensor carpi radialis. The articular surface for the semilunar is mediolaterally concave; that for the scaphoid has a rather concave anterior portion and a convex posterior one. The medial border of the articular surface for the scaphoid is straight, and the lateral border of the articular surface for the semilunar is slightly concave. Measurements are given in Table 1. Comparison The radius from Allatini differs from that of S. jeanvireti, which has, in anterior view, a straight medial border and, in posterior view, a less protruding posterior process (Tsoukala, 2018). Additionally, the radius of S. jeanvireti from Angelochori has, in proximal view, a more marked concavity on the anterior border in comparison with the studied specimen. In distal view, the radius from Allatini differs from the radius of S. jeanvireti from Vialette by the more convex posterior border of the articular surface for the scaphoid (Guérin, 1972; Tsoukala, 2018). In anterior view, the radius of P. miguelcrusafonti differs from that of Allatini in the less developed lateral tuberosity and in its concave lateral and straight medial proximal borders (Guérin and Santafe-Llopis, 1978; Pandolfi et al., 2021). The studied specimen differs from S. etruscus, which has a less developed insertion for the biceps brachii in anterior view, a weakly concave posterior border, and a straight posteromedial border in proximal view. The specimen from Allatini shares several characters with P. megarhinus, such as a posteriorly enlarged articular surface for the scaphoid in distal view, and a convex medioproximal border and a straight lateroproximal border in anterior view (Pandolfi et al., 2016, 2021). The proximal transversal diameter (PTD) and the proximal anteroposterior diameter (PAPD) also fall within the range of P. megarhinus (Fig. 3). Discussion & Conclusion Based on the combination of morphological and metrical data, the Allatini rhino radius may quite safely be attributed to the species P. megarhinus, though the absence of additional material and the poor stratigraphic and chronological control of the site require a more open nomenclature, i.e., Pliorhinus cf. megarhinus. To our knowledge, this is the first record of this species and genus in Greece. Given the available stratigraphic and biochronologic evidence (Koufos 2006), the Allatini rhino radius should be chronologically placed in the Ruscinian, and more likely in its early part.
European sites with P. megarhinus of more or less the same age as Allatini are Venta del Moro, Spain, dated to the Miocene-Pliocene transition (Cerdeño, 1992; Pandolfi et al., 2022); Montpellier, France, dated to the Lower Pliocene (MN14) (Guérin, 1980); Alcoy Mina, Spain, dated to the Lower Pliocene (Montoya et al., 2006; Pandolfi et al., 2022); Vera Basin, Spain, dated to the Lower Pliocene (Pandolfi et al., 2022); and Val di Pugna, Italy, dated to the late Lower Pliocene as well (base of MN15; Pandolfi, 2013). Based on the younger occurrence of P. megarhinus in Russia, Fukuchi et al. (2009) suggested that the taxon dispersed directly from Europe to Asia. Conversely, Pandolfi et al. (2015) advocate that P. megarhinus could have spread from Asia to Eastern Europe and that the youngest occurrence, in Udunga, is the consequence of the persistence of this species in the area. However, this record has not been recently revised in the light of the record of P. miguelcrusafonti in Georgia. P. megarhinus first occurred in Hungary during the Late Miocene, and in Italy at the end of the Miocene (MN13); during the Mio-Pliocene transition and the Early Pliocene (MN14/base of MN15) it spread into Western Europe. Additionally, it is present in Turkey during the second half of the Pliocene (Guérin, 1980; Guérin and Sen, 1998; Pandolfi, 2013; Pandolfi et al., 2015). Hence, the Allatini record reinforces an E-SE European distribution of the species at the beginning of the Pliocene.
2023-08-03T15:20:29.505Z
2023-07-25T00:00:00.000
{ "year": 2023, "sha1": "3c9dc05f0697f580f583fbf26d97f7676ea7c47d", "oa_license": "CCBYNC", "oa_url": "https://ejournals.epublishing.ekt.gr/index.php/geosociety/article/download/33711/26583", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "804281c45fcb74cc73640fe2eef84c5894c37616", "s2fieldsofstudy": [ "Geography", "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
212538319
pes2o/s2orc
v3-fos-license
Implementation of Software Controlled Radio for Long Range Communication with High Data Rate and Optimal Capacity Abstract In the recent past, the software controlled radio (SCR) has played a major role in wireless communication for long range communication. Conventional SCR-based wireless links have communicated information only over short distances, owing to the low data rate and poor capacity of the frequency modulation scheme. OFDM technology has been adapted to the SCR to eliminate errors arising when information is communicated over long ranges. The synchronization error is greatly reduced by applying a matched filter between the transmitter and receiver antennas. A distributed antenna system is implemented with the OFDM SCR for information transfer through wireless channels. AWGN, Rayleigh and Rician fading channels are used for long range communication of the SCR information. The number of base stations (BS) is reduced by finding an energy-efficient transmission scheme using the SCR for long range communication. A heuristic approach is used to improve the capacity of the SCR system in terms of Mbps. Inter Symbol Interference (ISI) and Inter Carrier Interference (ICI) are reduced by the cyclic prefix of OFDM combined with an Advanced Per Subcarrier Channel Equalizer (APSCE). The equalizer technique provides a reduced Bit Error Rate (BER) with improved Signal to Noise Ratio (SNR). I. INTRODUCTION Orthogonal Frequency Division Multiplexing (OFDM) is one of the most effective techniques in wireless communication; it can transmit different forms of information. Digital technologies drive the evolution of wireless communication. The communication process can be changed by making changes to the software. This new radio model of the telecommunication system is called Software Controlled Radio (SCR). The behavior and operating characteristics of the system, such as bandwidth, modulation and code rate, depend on the SCR. A software defined radio is no longer static; it becomes a dynamic element with the help of its reconfigurable circuits. Software defined radio has specific applications in the development of digital radio [1]. Software defined radio plays a major role in cellular and radio standards communications. It can be implemented in advanced nanometer complementary metal oxide semiconductor (CMOS) technology, where transistors are used to implement discrete-time radio receivers. Integration is essential for most applications. The ADC can receive a frequency channel directly from the antenna in battery-powered devices. An SDR can receive and transmit several standards at a time, which provides a flexible and nearly limitless platform. Downloading a new configuration is used to upgrade the SDR itself. Digitally inspired receivers benefit from the increasing switching speed of nano-scale CMOS [2]. Sample rate conversion (SRC) is essential for orthogonal frequency division multiplexing based software defined radio (SDR), and a B-spline interpolation algorithm can be used for it. The signal-to-peak-distortion ratio is used to analyze the SRC performance. SRC converts the sample rates between the SDR transmitter and SDR receiver and reduces complexity; B-spline interpolation improves the SRC performance. An SRC architecture with the interpolation algorithm can be used for advanced orthogonal frequency division multiplexing with SDR [3]. In industrial applications, wireless sensor networks are composed of wireless sensor nodes powered by batteries with limited capacity.
For long range communication, multi-hop techniques are used. LoRa technology is used between the nodes to extend the communication range. A floating device achieves self-sustained operation through energy harvesting, and sensor nodes obtain a long-term power supply using thermoelectric power. LoRa technology can transmit data over a much longer range with such a floating device [4]. The IEEE 802.11ah WLAN protocol is used to provide a longer transmission range between WLAN access points (APs) and stations (STAs). The physical layer (PHY) and medium access control (MAC) are of specific importance for WLAN interference, and the link reliability of WLANs can thereby be improved. Connecting meter devices in outdoor utility infrastructure for remote sensing services is an alternative application. Surrounding WLANs and ambient noise affect WLANs in outdoor environments, and an invalid path loss model is a major problem in such environments [5]. The design and implementation of a long-range and broadband aerial communication system is done by exploiting directional antennas (ACDA); it integrates Wi-Fi for quick establishment of a wireless channel. ACDA exploits unmanned aerial vehicles (UAVs) to extend the communication range of the antennas. A GPS-based control algorithm is used to counteract wind disturbance and orient the antennas according to the movement of the unmanned aerial vehicle. A received signal strength indicator based decentralized initialization algorithm is implemented to establish the connection between the unmanned aerial vehicles. The ACDA framework is used to address routing, protocol and other networking related issues for aerial networks [6]. In energy-aware cooperative content distribution over wireless networks with mobile-to-mobile cooperation, a number of mobile terminals (MTs) are assumed to be connected close to each other. These are waiting to download the same content from a server using a long range wireless technology. The content is downloaded by a particular mobile terminal and transmitted to the other terminals using a short range wireless technology with energy efficiency considerations such as content segmentation, resource allocation, and fairness. Designing novel communication architectures, protocols, solutions, and services can reduce the energy consumption [7]. Hybrid beamforming has specific applications in reducing the number of costly radio frequency chains in massive multiple-input multiple-output (MIMO) systems. Works on hybrid beamforming are limited to a single user equipment (UE) or a single group of UEs and cannot be applied to multiple groups of UEs in different frequency resources of an orthogonal frequency-division multiplexing (OFDM) system. A novel practical subspace construction algorithm (SCA) based on partial channel state information can be applied to massive MIMO-OFDM systems in both time division duplex and frequency division duplex modes [8]. Optimal rate and power allocation maximizes a utility function over fading OFDMA multiple access and broadcast channels. Broadband wireless transmissions are affected by inter-symbol interference, which significantly limits the achievable data rates. Superposition coding is allowed per subcarrier to reduce the allocation complexity. Orthogonal narrowband flat fading subcarriers of orthogonal frequency division multiple access are each modulated by a low data rate stream, and the complexity is reduced using a Lagrange dual based approach and stochastic optimization tools [9].
The heterogeneous network is an efficient technology with respect to spectrum efficiency and system capacity, and it deals with power allocation and interference management in multi-cell network structures. A power allocation algorithm for a heterogeneous network with one macro-cell network and multiple small-cell networks reduces the power allocation complexity of the system and also increases the Quality of Service (QoS) and the energy and spectrum efficiency. System parameters can be changed using cognitive radio, and reuse of idle frequency band resources is done using cognitive radio [10]. II. RELATED WORKS V. I. Rodríguez and J. Sánchez et al. [11] have presented GNU Radio functionality with open projects. Software radio is more effective in communication design; a new wireless communication implementation has low cost and takes less time when the physical layer is built as simple software stages. GNU Radio implements digital signal processing in C++ and Python. Adding the IT++ library into GNU Radio open projects was assessed and packaged as an out-of-tree module. GNU Radio provides what the physical layer needs. Amiya Ranjan Panda and Debahuti Mishra et al. [12] have presented the implementation of a software defined radio in an FPGA-based flight termination system. Real-time flight termination operation demands a highly reliable and ruggedized platform; therefore the FTS is implemented on an FPGA. The design procedure replaces a multi-platform system with a single platform, yielding an effectively reconfigurable, interoperable, portable and handy FTS, and maintains an error-free, bug-free and reliable implementation. David Carey and Robert Lowdermilk et al. [13] have presented cognitive-radio-based software defined synthetic instruments (SDSI). Embedded processors and FPGAs use digital signal processing techniques for the software implementation of filtering, frequency translation and modulation. Software defined radios are wireless systems, and cognitive radios are a special, cost-effective class of SDRs. The architecture of an SDSI is similar to that of SDR/CRs; SDSIs are essentially high-performance SDR/CRs with added intelligence and measurement science, and they have unique applications. Rodolfo Alvizu and Sebastian Troia et al. [14] have presented a prediction technique for software defined mobile metro-core networks using machine learning. Mobile phone usage increases when commuting via public transportation, during lunch breaks and at night; this creates predictable spatiotemporal fluctuations in traffic patterns. Machine learning is used for the prediction of tidal traffic variation in the mobile metro-core network, which enables optimal routing reconfiguration and the solution of the optimization problem. Mustafa Y. Arslan and Karthikeyan Sundaresan et al. [15] have presented the potential and challenges of software defined networking for cellular radio access networks. Software defined networking has brought effective performance and management benefits to wired networks. The decoupling of control and data planes of SDN already exists in RANs in the form of self-organizing networking solutions. The fronthaul network in a C-RAN can be programmed by applying SDN in the RAN, with both potential and challenges. Fu Yonghong and Bi Jun et al. [16] have presented a flexible dormant multi-controller model (DMC) based on a centralized multi-controller architecture. In the DMC model, part of the controllers are kept dormant to save system cost.
Analysis of real traffic from the China Education Network is used, and the results show the effect of the parameters on the system characteristics, which establishes the total expected cost function. Optimal values minimize the system cost for deployment decision making. Valentin Goverdovsky and David et al. [17] have presented a multichannel software defined radio testbed. It is low cost, wideband and highly reconfigurable, and it supports rapid prototyping and evaluation of array processing algorithms, including direction finding. The combination of hardware and software techniques ties together the individual SDR peripherals, and the testbed achieves accurate phase synchronization. Licai Fang and Defeng Huang et al. [18] have presented the linear minimum mean square error (LMMSE) channel estimator for orthogonal frequency division multiplexing (OFDM) systems, which requires a matrix inversion of cubic complexity. They use a K-term Neumann series expansion to approximate the matrix inversion. The computational complexity is reduced to O(N log L) per channel, where N is the number of subcarriers and L is the number of non-zero time-domain channel taps. The results from extensive simulations show good performance even with small K (K ≤ 2). Mahdi Khosravi and Saeed Mashhadi et al. [19] have presented pilot pattern formation from an optimal set or cyclic difference set. The design of pilot power and pattern for sparse channel estimation in OFDM systems is based on the coherence of the DFT submatrix. The pattern and power of the pilots are obtained as the solution of a deterministic optimization procedure. The pilot pattern is formed before power is numerically allocated to the different pilots; finally, the pair with minimum coherence is selected from the available pairs. Slavche Pejoski and Venceslav Kafedziski et al. [20] have presented atomic norm minimization for the estimation of sparse time-dispersive channels. The combination of atomic norm minimization, a resolution method and the least squares method is intended for pilot-aided channel estimation in OFDM systems; it allows gridless estimation of arbitrary delays, which are then estimated using the LS method. Reweighted atomic norm minimization gives effective performance. Slavche Pejoski and Venceslav Kafedziski et al. [21] have presented an analysis of the asymptotic capacity of an OFDM system with pilot-aided channel estimation over a Bernoulli-Gaussian sparse channel with Lasso compressed sensing (CS); using replica method results for the mean square estimation error, an asymptotic capacity lower bound is obtained for the OFDM system. The channel estimation uses an average fraction of pilot subcarriers, and the asymptotic capacity bound increases due to CS. Jianping Zheng and Ru Chen et al. [22] have presented a linear processing technique for OFDM based on capacity maximization. The linear processing methodology alleviates the inter-carrier interference (ICI) of orthogonal frequency division multiplexing, and a capacity lower bound of OFDM-IM in the rapidly time-varying (RTV) channel is derived. Then, the precoding and post-processing matrices are designed to maximize this capacity lower bound using the particle swarm optimization algorithm. III. SYSTEM MODEL An SCR in its basic form consists of a transmission and reception antenna connected to an analog-to-digital converter (ADC) for the receive path and a digital-to-analog converter (DAC) for the transmit path. The ADC/DAC is connected to a high performance Digital Signal Processing unit (DSP).
Within the receiver chain, analog signals received at the antenna are converted, through a mixing stage driven by a local oscillator, to an intermediate frequency (IF). The analog IF is converted into digital format by the ADC using a sampling clock. The digital signal is presented to the DSP unit, which applies algorithms to extract and process the radio signals. The transmitter chain performs similar processes in reverse. The DSP performs the modulation of the information digitally and processes the transmit signal, generating a digital bit stream that is presented to the DAC. The DAC converts the digital bit stream to an analog signal at the output of the SCR unit. The analog waveform is then up-converted and transmitted by the antenna. For general applications, implementing an ideal SCR is not yet practical, since limits in ADC/DAC bandwidth and dynamic range, as well as DSP processing limitations, preclude such implementations; one must work within these limitations of the SCR. Figure 1: Block diagram of proposed SCR system The SCR OFDM system uses heuristic-approach pilot channel estimation with a cyclic prefix, as shown in Figure 1. The binary data is given as input, sorted, and mapped onto the modulation constellation using the Quadrature Amplitude Modulation (QAM) signal mapping technique. Pilot data is inserted on selected carrier frequencies at a chosen spacing along with the information. An IFFT of size N is employed to transform the frequency domain data X(k) into a time domain sequence according to x(n) = (1/N) ∑_{k=0}^{N-1} X(k) e^{j2πkn/N}, n = 0, 1, ..., N - 1, (1) where N_sample = N is the total number of samples per OFDM symbol. Following the IFFT, a guard interval, chosen longer than the expected channel delay spread so that delayed copies do not spill into the next symbol, is inserted to prevent interference. The interference is eliminated by this guard interval insertion, since the cyclic prefix repeats a part of the SCR modulation information. The resultant SCR modulated signal for long range communication can be written as x_g(n) = x(N + n) for n = -N_g, ..., -1 and x_g(n) = x(n) for n = 0, 1, ..., N - 1, (2) where N_g denotes the length of the guard interval. After digital-to-analog conversion, this signal is sent from the transmitter with the aid of the baseband system at the base station. The transmitted signal passes through a frequency-selective, time-varying noisy channel with AWGN (Additive White Gaussian Noise). The received signal is given by y(n) = x_g(n) ⊛ h(n) + W_noise(n), (3) where W_noise(n) is the AWGN and h(n) is the time-varying channel impulse response, which can be represented by:
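The transmitter chain just described (QAM mapping, pilot insertion, IFFT, cyclic-prefix guard interval) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: the paper uses 256-QAM, but 16-QAM is used here for brevity, and the pilot spacing and sizes are assumed values.

import numpy as np

N = 64          # IFFT size (number of subcarriers)
N_G = 16        # cyclic prefix (guard interval) length, chosen > expected delay spread
PILOT_STEP = 8  # insert a known pilot on every 8th subcarrier

def qam16_map(bits):
    """Map groups of 4 bits to Gray-coded, unit-average-power 16-QAM symbols."""
    b = bits.reshape(-1, 4)
    gray = np.array([-3, -1, 3, 1])        # 2-bit Gray mapping per I/Q axis
    return (gray[2 * b[:, 0] + b[:, 1]] + 1j * gray[2 * b[:, 2] + b[:, 3]]) / np.sqrt(10)

def ofdm_symbol(data_syms):
    """Build one OFDM symbol: pilots + data -> IFFT -> prepend cyclic prefix."""
    X = np.zeros(N, dtype=complex)
    pilots = np.arange(0, N, PILOT_STEP)
    data = np.setdiff1d(np.arange(N), pilots)
    X[pilots] = 1.0 + 0j                   # known pilot value on pilot subcarriers
    X[data] = data_syms[:data.size]
    x = np.fft.ifft(X) * np.sqrt(N)        # to the time domain, cf. equation (1)
    return np.concatenate([x[-N_G:], x])   # cyclic prefix: copy of the last N_G samples

bits = np.random.randint(0, 2, 4 * (N - N // PILOT_STEP))
tx = ofdm_symbol(qam16_map(bits))
print(tx.shape)   # (80,) = N + N_G samples per OFDM symbol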
h(n, λ) = ∑_{i=1}^{r} h_i e^{j2π f_{Di} T n} δ(λ - τ_i), (4) where r denotes the total number of propagation paths, h_i is the impulse response of the ith path, f_{Di} is the Doppler frequency shift of the ith path, λ is the delay path index, T is the sample period, and τ_i is the delay of the ith path; the latency is reduced by the choice of sampling intervals. At the receiver, the data is passed through analog-to-digital conversion and a low pass filter; in the discrete domain the guard interval is removed as follows: y_g(n) = y(n + N_g), n = 0, 1, ..., N - 1. (5) The data is then given to the FFT: Y_output(k) = ∑_{n=0}^{N-1} y_g(n) e^{-j2πkn/N}, k = 0, 1, ..., N - 1. (6) Assuming there is no Inter Symbol Interference (ISI), the relation between Y_output(k), H_output(k) = DFT{h(n)}, the inter-carrier interference I(k) caused by the Doppler frequency shift, and W_noise(k) = DFT{w(n)} is Y_output(k) = X_input(k) H_output(k) + I(k) + W_noise(k). (7) After the FFT, the pilot subcarriers are extracted and the estimated channel state information H_e(k) for the information sub-channels is obtained in the channel estimation block. The transmitted data is then estimated by X_est(k) = Y_output(k) / H_e(k), (8) and the digital information is recovered in the signal de-mapping block shown in Figure 1. The above methodology is used to eliminate ICI and ISI. Proposed Equalizer for SCR channel estimation The SCR OFDM channel estimation and maximization is calculated using the Advanced Per Subcarrier Channel Equalizer (APSCE). In the proposed cyclic-prefix pilot-data channel estimation methodology, the pilot data is inserted on the sub-carriers between selected amounts of data bits. The channel is assumed constant throughout the transmission and frequency selective, handled with the selective mapping technique. Since the pilots are sent on all carriers, there is no interpolation error. The channel estimation is performed based on the APSCE. The per-subcarrier least-squares relation of the APSCE is H_e(i) = y_i / x_i, (9) where x_i denotes the pilot value at the ith subcarrier and y_i denotes the received data at the ith subcarrier. The time-domain Gaussian channel parameters are assumed de-correlated from the Gaussian noise added during data transmission. The heuristic-approach APSCE equalization can then be estimated as follows: if the time-domain channel vector g is Gaussian and de-correlated from the channel noise n, the frequency-domain heuristic-approach APSCE estimate of g is given by ĝ = R_gy R_yy⁻¹ y, (10) where R_gy and R_yy are the cross-covariance of g and y and the auto-covariance of y, respectively. When the channel attenuation is low, the transformation block is used to update the estimate of the time-varying channel and the choice of the APSCE equalizer at every transmission of bits. The channel response at the kth subcarrier is calculated from the previously estimated data H_freq(k), which is accessed to compute the channel estimate for the current data rate H_freq(k). The information {X_freq(i)} is modulated to digital information using the signal mapping technique and estimated back through the signal de-mapping X_freq(i). The channel estimate of the SCR OFDM using the APSCE, {H_e(k)}, is obtained accordingly. Since the APSCE equalizer must send the information over a long range noisy communication channel, it can provide the full gain with the estimated channel state information.
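A minimal sketch of the two per-subcarrier estimators discussed above: the least-squares (LS) pilot estimate of (9), Ĥ_i = y_i/x_i, and an LMMSE-style smoothing of the form R_gy R_yy⁻¹ y from (10), built here from an assumed channel covariance. The exponential covariance model and the SNR value are illustrative assumptions, not taken from the paper.

import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Per-subcarrier least-squares channel estimate at pilot positions, eq. (9)."""
    return y_pilot / x_pilot

def lmmse_estimate(h_ls, R_hh, snr_linear):
    """LMMSE smoothing of the LS estimate:
    ĥ = R_hh (R_hh + (1/SNR) I)^{-1} h_ls, a standard form of R_gy R_yy^{-1} y."""
    n = R_hh.shape[0]
    return R_hh @ np.linalg.solve(R_hh + np.eye(n) / snr_linear, h_ls)

# Usage sketch with an assumed exponential correlation across subcarriers.
n = 64
k = np.arange(n)
R_hh = 0.9 ** np.abs(k[:, None] - k[None, :])        # assumed channel covariance
h_true = np.linalg.cholesky(R_hh + 1e-9 * np.eye(n)) @ (
    np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2)
x = np.ones(n, dtype=complex)                        # block-type symbol: pilots on all carriers
snr = 10.0
noise = (np.random.randn(n) + 1j * np.random.randn(n)) / np.sqrt(2 * snr)
y = h_true * x + noise
h_ls = ls_estimate(y, x)
h_lmmse = lmmse_estimate(h_ls, R_hh, snr)
print(np.linalg.norm(h_true - h_ls), np.linalg.norm(h_true - h_lmmse))

The LMMSE print-out is typically the smaller of the two errors, which is the SNR/BER gain over the raw per-subcarrier LS estimate that the text describes.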
The channel estimation enables high speed data transmission from the transmitter to the receiver; during data transmission over the SCR link, smaller channel estimation errors at the decimal interpolation used to convert from binary to decimal yield a gain in time-varying channel state estimation, improving the SNR by reducing the BER. For highly attenuating channels, as shown in the simulations, the APSCE-based SCR OFDM channel estimation mostly performs better. Data rate improvement for long range communication The long range communication using the heuristic approach should give a high data rate thanks to its coverage area improvement. The entire digital data transmission may well be designed around special purpose FFT and IFFT blocks, which are mathematically equivalent to the discrete DFT and IDFT respectively. The DFT correlates the signal with each of its sinusoidal orthogonal basis functions. The correlation for a given subcarrier only sees energy for that corresponding subcarrier. This separation of signal energy is the reason that the SCR OFDM subcarriers can overlap without causing interference. The properties of the FFT imply a high data rate during the transmission of information from the transmitter to the receiver of the SCR OFDM for long range communication. Figure 2 shows the data rate improvement for SCR OFDM using the IFFT property. Figure 2: Data rate improvement using IFFT Methodology for improving data rate using IFFT The SCR OFDM heuristic approach gives a high data rate by following this methodology. The serial-to-parallel converted, low-rate, Forward Error Correction coded and interleaved bit stream is first converted to in-phase (X_I(k)) and quadrature-phase (X_Q(k)) components based on 256-point M-ary QAM modulation. This method maps the given M bits to a symbol X(k) = X_I(k) + jX_Q(k), which represents the phase and amplitude of a selected kth subcarrier. At the transmitter, the SCR OFDM system treats these symbols as if they were in the FFT frequency domain. The IFFT takes in N symbols at a time, where N is the number of subcarriers in the system, which depends on the application. Not all the carriers are used to transmit information: as in the IEEE 802.11a standard, 4 subcarriers are used for pilot data and 12 subcarriers are left unused out of the 64 available subcarriers. Each input symbol acts like a complex weight for the corresponding sinusoidal basis function. The IFFT output is the summation of all N sinusoids, and the block of N output samples from the IFFT makes up one OFDM symbol. A Cyclic Prefix (which is a copy of the few last samples of the OFDM time symbol) is added to the output of the IFFT, and then parallel-to-serial conversion takes place. For long range communication, the SCR OFDM signal is generated at baseband and then modulated up to the specified RF frequency by an IQ modulator (using analog techniques or a Digital Up Converter). A transmitted RF signal is usually a real signal, because it is simply a variation in amplitude. It is, however, possible to directly generate a real OFDM signal using the IFFT, but in that case a 2N-point IFFT is required for N data carriers: the input given to the IFFT is made conjugate symmetric so that the output of the IFFT is real. SCR OFDM for AWGN, Rayleigh and Rician fading channels The Rayleigh and Rician fading channels manifest themselves in two effects: first, a time spreading (in τ) of the information within the signal, and/or a time-variant behavior (in t).
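The last remark, generating a real OFDM signal directly with a 2N-point IFFT fed with a conjugate-symmetric input, can be shown concretely. A minimal sketch with illustrative sizes:

import numpy as np

def real_ofdm(data_syms):
    """Generate a real baseband OFDM signal from N complex data symbols
    using a 2N-point IFFT with a conjugate-symmetric (Hermitian) input."""
    N = data_syms.size
    X = np.zeros(2 * N, dtype=complex)
    X[1:N + 1] = data_syms                  # positive-frequency bins 1..N
    X[N + 1:] = np.conj(data_syms[-2::-1])  # mirrored conjugates in bins N+1..2N-1
    x = np.fft.ifft(X)
    assert np.max(np.abs(x.imag)) < 1e-10   # real by conjugate symmetry
    return x.real

syms = np.random.randn(8) + 1j * np.random.randn(8)
syms[-1] = syms[-1].real                    # the Nyquist bin (bin N) must be real-valued
x = real_ofdm(syms)                         # 16 real samples carrying 8 data bins
print(x[:4])

The price of the real output, as the text notes, is the doubled IFFT size (2N points for N data carriers), plus the half-symbol lost to the real-valued Nyquist bin.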
In such a case the received signal y(t) is expressed as the convolution of the transmitted signal x(t) with the channel impulse response h(t, τ), plus the inherent AWGN n(t). Over the AWGN channel and a flat fading channel, SCR OFDM behaves the same as a single-carrier system: one APSCE equalizer is employed to correct the amplitude and phase changes caused by the flat fading channel. Synchronization of SCR OFDM In SCR OFDM systems the orthogonality of the subcarriers is crucial. Timing and carrier frequency offsets (CFO) can cause the loss of subcarrier orthogonality. If not equalized, they limit the performance of the SCR OFDM system because they cause ICI and ISI. Phase noise is additionally present in all practical oscillators, and it manifests itself in the form of a random phase modulation of the carrier. The synchronization process can be split into an acquisition phase and a tracking phase if the characteristics of the random frequency and timing errors are known. In the acquisition phase, an initial estimate of the errors is acquired, using more complex algorithms and possibly a higher amount of synchronization information within the data signal, whereas later the tracking algorithms must correct for short-term deviations. Synchronization in SCR OFDM systems is performed using a training sequence; the maximum frequency error that can be estimated using the training sequence is given by F_max = 1/(2DT_s), (13) where T_s is the sampling period and D is the delay in the transmission of information from the transmitter to the receiver of the SCR OFDM. Power control for SCR OFDM A technique for interference coordination that reduces the synchronization error is power control for SCR OFDM, where each base station adjusts its transmit power to reduce the interference it causes to the users in the neighboring base stations. A central controller can compute the optimum power level for each base station to control interference, but it should also account for the performance degradation of the users served by a base station at reduced power and maintain a suitable performance level for such users of the SCR OFDM system. For practical realization in a centralized architecture, interference management should not rely on per-frame scheduling decisions executed by every base station. For this reason, we aggregate the interference for all the users in a transmission and perform the optimization at the cell level. Since interference is managed at the transmission level, alternating between different user transmissions in one cell does not impact the amount of interference it causes to other cells of the SCR OFDM system. Improvement of Quality of Service of SCR OFDM The Quality of Service (QoS) of SCR OFDM covers all aspects of ensuring an appropriate delivery of service in data transmission through the SCR network. This includes management of system tolerance and reliability, packet delivery latency, event detection, robustness and maintenance. The SDN design permits the QoS management tasks to take place within the controller by virtue of being logically centralized. The SDN-based WSN environment varies owing to factors such as topology changes, node failures, bandwidth availability and energy fluctuations; hence there is a need to integrate a mechanism for the reliability of the network. The reliability of the SCR design is supported by OFDM through the application of models based on continuous time domain and frequency domain processing using the FFT and continuous random logic techniques.
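The bound F_max = 1/(2DT_s) in (13) matches the unambiguous range of the classic delay-and-correlate CFO estimator applied to a training part repeated after D samples; that interpretation of D, and the parameter values below, are assumptions made for illustration. A minimal sketch:

import numpy as np

def estimate_cfo(r, D, Ts):
    """Estimate the carrier frequency offset from a received signal whose
    training part repeats after D samples: the phase of the delayed
    autocorrelation gives f = angle(P) / (2π D Ts), unambiguous for
    |f| < 1/(2 D Ts) = Fmax, cf. equation (13)."""
    P = np.sum(r[D:2 * D] * np.conj(r[:D]))
    return np.angle(P) / (2 * np.pi * D * Ts)

# Usage sketch: a repeated random training symbol with a known offset applied.
Ts = 1e-6                       # sample period (assumed 1 MHz sampling)
D = 64                          # repetition delay in samples
f_true = 3.2e3                  # 3.2 kHz CFO, inside Fmax = 1/(2*D*Ts) ≈ 7.8 kHz
train = np.random.randn(D) + 1j * np.random.randn(D)
tx = np.concatenate([train, train])
n = np.arange(2 * D)
rx = tx * np.exp(2j * np.pi * f_true * n * Ts)
rx += 0.05 * (np.random.randn(2 * D) + 1j * np.random.randn(2 * D))
print(estimate_cfo(rx, D, Ts))  # ≈ 3200 Hz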
The results show that the network reliability improves with the number of controllers and sensors and with the reduction of their individual error rates. From the results, a proposal is made to integrate more than one reliable controller as a reliability management strategy. Management of reliability has been classified into the reliability of communication systems and the reliability of tasks. Management of communication reliability can be done through strategies such as providing duplicate nodes and redundant links. Function alternation has been proposed to manage the reliability of sensor tasks by enabling neighboring sensor nodes to take over the tasks of failed sensor nodes automatically, providing a recovery mechanism. The OFDM system can also offer a back-up for sensing tasks and data to boost the reliability of tasks. Another QoS parameter is robustness, a characteristic that defines a system's ability to meet its expected performance under unreliable environmental conditions. To manage system robustness it is necessary to consider the factors affecting and interfering with the system and to monitor the interference patterns with the proposed statistical machine learning technique. This system would allow the prevention of potential interference based on previous activity patterns of interference. Figure 7 shows the probability of detection for SCR OFDM for long range communication. The probability of detection is high for a low rate of false alarm, as shown in the figure; Figure 7 also shows that a single detection curve remains relatively accurate and efficient when various sensors and base stations are used for sensing and wireless transmission. The link quality improvement achieved is shown in Figure 8. V. CONCLUSION In this paper, we have proposed and analyzed a Software Controlled Radio (SCR) using OFDM technology with 256-QAM modulation. It is implemented using a multi-sensor network and base stations to communicate information over long ranges. The signal to noise ratio is greatly improved by applying the proposed APSCE algorithm compared to the LMMSE equalizer. The capacity is calculated for the proposed algorithm with high efficiency and accuracy. The link quality is calculated to demonstrate the distributed antenna system improvement. The SCR OFDM is proposed here to achieve long range communication using wireless technology.
2019-09-17T01:09:15.208Z
2019-08-10T00:00:00.000
{ "year": 2019, "sha1": "bd596706dff6831c7e85f0bba3d8b0e28cbb62ed", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijitee.j1048.0881019", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "32f8c53fac54d6749dca50ac9a8893baf02558bc", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [] }
209423963
pes2o/s2orc
v3-fos-license
Cardiovascular Complications of Chronic Opium Consumption: A Narrative Review Article Opiates are the second most prevalent abused illicit substance after cannabis in the world. The latest United Nations Office on Drugs and Crime (UNODC) report estimated a 30% increment in opium cultivation worldwide. The high prevalence of opium consumption in eastern countries may be due to its high availability and traditional misconceptions. Opium consumption has been linked to hypertension, diabetes mellitus, dyslipidemia, and coronary artery disease (CAD). In this review, we will examine the association between opium use, cardiovascular diseases, and clinical outcomes. The present evidence suggests that chronic opiate consumption may increase the risk of cardiovascular diseases and related mortality. Introduction Poppy (Papaver somniferum L.) is one of the ancient plants that has been cultivated for millennia and used for both medicinal and recreational purposes (1). Opium is a dark, sticky or crumbly mass exuded from the ripening capsule of the opium poppy and consists of several alkaloids, including approximately twelve percent morphine with lesser amounts of noscapine, codeine, papaverine and thebaine (2). Raw opium is the second most prevalent abused substance after tobacco in most Asian countries. The 2017 report of the United Nations Office on Drugs and Crime (UNODC) estimated the overall production of opium to be 6,380 tons worldwide, a 30% increase compared to the previous year, and reported the number of illicit users to be up to 17.7 million in 2015 (3). The high rate of opium consumption in Asian countries could be due to several factors, such as widespread availability and long-standing misconceptions among ordinary people and even healthcare professionals regarding the purported alleviating effects of opium on coronary artery disease (CAD), dyslipidemia, hypertension, and diabetes mellitus (DM) (4) (Fig. 1). Patients in the Middle East usually approach physicians with questions about the effectiveness of opium in controlling diabetes or cardiovascular conditions (5). The lack of sufficient evidence makes it difficult to answer those inquiries (6). Tachycardia, bradycardia, and orthostatic hypotension are common cardiovascular problems seen in opium-addicted individuals (10). Plasma fibrinogen levels, coagulation, and atherosclerosis are adversely affected by opium abuse (11,12). Low-density lipoprotein (LDL) cholesterol, triglyceride (TG), and blood glucose levels are other important factors that change in addicted people; they are known risk factors of CVD (13). Moreover, the deleterious effects of opium consumption on CVD risk factors were found to be proportional to the duration of consumption and the route of administration (14). There is a paucity of consistent and reliable information on the association between opium dependency and CVD progression. Therefore, in the present article, we have reviewed the latest findings related to this issue. Pharmacology of Opiates In 1806, Friedrich Wilhelm Sertürner isolated morphine as the active component of the opium poppy, and there modern opioid pharmacology was born (15). More than 40 alkaloids exist in the milky latex fluid obtained from the opium poppy. The six major alkaloids that account for almost all of the natural alkaloid composition in opium are morphine, noscapine, codeine, papaverine, thebaine, and narceine (16).
Thebaine is not used therapeutically, but several drugs such as naloxone, naltrexone, oxycodone, and buprenorphine are synthesized from it. The opioid receptors are categorized, according to the International Union of Pharmacology (IUPHAR) recommendation, into μ- (MOP), κ- (KOP), and δ-opioid (DOP) receptors, which are G-protein-coupled receptors (17). Cyclic AMP and/or ion channels (K+) are the second messenger systems of opiate receptors. Studies suggest that modifications in the levels of cyclic AMP during chronic opiate consumption are associated with the development of tolerance and physical dependence (18). Interactions with cardiovascular medications Opioid-addicted patients may concurrently suffer from other comorbidities. Cardiovascular and pulmonary diseases are common among chronic opiate abusers, but the exact dose-dependent interactions have not been well studied. Hence, the use of opiates (either therapeutic or on an abusive basis) along with cardiovascular medications (including anticoagulants, antiarrhythmics, cardiotonics, and antihypertensive drugs) may increase the risk of drug-drug interactions. Concomitant administration of opiates with cardiovascular drugs may potentiate or reduce the pharmacologic effects of cardiovascular medications such as warfarin or digoxin (19,20). The interaction can affect the pharmacokinetics (absorption, distribution, metabolism or elimination) or pharmacodynamics (molecular mechanism of action) and the ultimate therapeutic status of the CV drugs (21,22). Opium contains alkaloids that have a direct impact on the cardiovascular or hemostatic systems. They may also have several indirect effects through interactions with the effects of other medications. Therefore, health care professionals should have a good knowledge base to identify possible opiate-drug interactions and to warn patients about the possibility of complications. The lack of information on the interactions of opiates with concurrent medications needs to be addressed by well-designed clinical trials to assess the potential interactions and unknown side effects. Table 1 summarizes some important interactions of opiates with cardiovascular medications. The effects of opium on CV risk factors It seems that, despite a few old studies, recent articles have emphasized the adverse effects of opium on cardiovascular risk factors. In the following sections, we will review the studies that explored the effect of opium consumption on cardiovascular risk factors such as hypertension. Blood pressure and hypertension There are misconceptions in the general population, and even among some medical professionals, that opium could have favorable blood-pressure-lowering effects in hypertension, although many experts disagree. Opium use had no significant ameliorative effect on hypertension in either occasional or dependent users (9). Similarly, several other studies failed to find a correlation between opium consumption and hypertension prevention (23-26). However, not all of these studies were consistent. A cohort study of 9,264 adults showed a lower prevalence of hypertension in opium users as compared to their counterparts, probably because of the younger age of opium users in that study population (27). On the contrary, in a cohort study of 5,332 participants, high systolic and diastolic blood pressures were more prevalent in opium users than in others (3). A case-control study reported the rate of hypertension to be significantly higher in addicted patients with ischemic stroke than in controls (28).
Endogenous opioid systems and opioid receptor agonists are purported to modulate arterial pressure to some extent. Stimulation of peripheral opioid receptors may reduce arterial hypertension, especially in patients with pronounced sympathetic hyperactivity or stress-related high blood pressure (29). Morphine administration could decrease systolic and diastolic blood pressures; in several other cases, however, morphine exposure enhanced hypertension through its effects on brain noradrenergic mechanisms or central opiate receptors (30-32). The dosage and duration of drug abuse seem to be critical factors here. Generally, short-term, low-dose exposure to opium lowers blood pressure, whereas long-term dependence leads to hypertension. The former effect is attributed to vasodilation and opium's effect of reducing sympathetic tone. Opium-induced high blood pressure might be secondary to its long-term deleterious impact on the structure and function of body organs, particularly in the cardiovascular system, including microvascular coronary dysfunction, elevated plasma levels of homocysteine and fibrinogen, and atheroma formation with related vascular stenosis (7,11,12). In general, data on opium-induced blood pressure alterations suffer from inadequacy and inconsistency, and further studies are warranted to shed light on this issue.

Effects on coronary disease and myocardial infarction

Opium users have been shown to have a higher susceptibility to coronary artery disease (CAD) than non-users, with a dose-response relationship reported between the two (9,33). Chronic opium consumers show an overall increase in ECG abnormalities, which are observed more frequently in male opiate consumers than in females. QTc prolongation (13%), R- and/or S-wave abnormalities (11%), and poor R-wave progression (10%) were the most commonly reported ECG changes (34). Niaki et al. showed that opium consumption was a significant risk factor for MI, with an adjusted odds ratio of 26.3; however, they did not find any association between opium abuse and the extent of MI (35). Among patients admitted with a diagnosis of MI, the prevalence of opium addiction was 19%, compared with 2.8% in the general population (36). Sadeghian et al. identified opium abuse as a major risk factor for ischemic heart disease (8). In another study, they identified diabetes mellitus in women and opium abuse in men as two major risk factors for CAD in Iran (37). Hosseini et al. found a significant association between the dose of opium used by addicted patients and the Gensini score (β = 0.27, p = 0.04) (38). On the other hand, Dehghani et al. studied 239 opium addicts and 221 non-addicts with first MI and reported a lower rate of anterior-wall MI and lower related early mortality in addicted than in non-addicted patients (39). Killip class and left ventricular ejection fraction (LVEF) were similar between the addicted and non-addicted groups (39). Neither oral nor inhaled opium abuse increased the occurrence of CAD in multivariate analysis (39). Patients with MI should receive medical care promptly, yet some addicted patients use opium for relief from chest pain (41). By relieving chest pain and inducing drowsiness, opium can increase the time elapsed from symptom onset to hospital admission, and this delay increases mortality among opium users. Nevertheless, Bafghi et al.
reported similar rates of chest pain between opium users and non-users and hence attributed the delay in seeking medical care to other factors, such as socioeconomic issues (36).

Effect on heart failure

Heart failure and functional class seem to be no different in opium-addicted patients after MI than in non-addicted individuals. Davoodi et al. reported no difference between addicted and non-addicted patients regarding functional class, angiographic findings, and the need for CABG (42). Similarly, post-MI LVEF did not differ between addicted and non-addicted patients (p = 0.4) (43). Safaei likewise reported no difference in the LVEF of addicted and non-addicted patients before and 6 months after CABG (44). The effect of opium in patients with heart failure (HF) is still unclear. Morphine can alleviate some symptoms of late-stage HF, such as dyspnea (45), and seems to relieve ischemia symptoms in patients with cardiovascular risk factors (46). The underlying pharmacological mechanism of morphine has encouraged some researchers to conclude that morphine may be cardioprotective in HF patients (47). However, one study revealed that patients with HF who used morphine along with nitroglycerine and furosemide had higher mortality rates (48). At present, there are not enough data to provide evidence-based recommendations regarding the cessation of opium abuse in cardiac patients (49).

Effects on cerebrovascular disease and stroke

A few studies have assessed the effect of chronic opium consumption on the development of ischemic stroke. A case-control study showed a significant rise in ischemic stroke among opium-addicted patients in Kerman (p < 0.0001) (50). In contrast, a cross-sectional sonographic study of 97 patients with ischemic stroke found no significant difference in the frequency of atherosclerosis or the type of involved vessels between opium addicts and non-addicted patients (51). Rezvani and Ghandehari studied 558 opium users with a mean age of 56.2 years; they claimed that opium inhalation did not have a significant effect on the occurrence of cerebrovascular events and concluded that opium may have a protective effect against ischemic stroke (40). In the Golestan cohort study of 5,000 Iranian adults, Khademi et al. demonstrated that chronic opium abuse was associated with an increased risk of ischaemic heart disease and cerebrovascular events (52). A recent case-control study presented opium addiction and hypertension as two independent predictors of stroke; after adjusting for tobacco smoking, opium abuse increased the risk of stroke 2.3-fold (28).

Effect on other serum parameters

In the majority of studies, opium addicts had higher levels of potassium (53-56), whereas opium consumption had either no effect or an enhancing effect on sodium levels (53,55). Fe2+ levels were higher and total iron-binding capacity was lower in addicted diabetics than in non-addicted diabetics (54). Albumin was lower among opium addicts; however, no significant difference was observed in albumin/globulin ratios (54). Urea and creatinine levels were elevated by opium consumption in some studies (55,57) but unchanged in others (58). Serum transaminases have been shown to be associated with the severity of coronary atherosclerosis and with both cardiovascular and all-cause mortality (4).
Radmard et al., in a cohort study of 1,599 participants, demonstrated a significant reduction in aspartate transaminase (AST) and alanine transaminase (ALT) in otherwise healthy addicted people, alongside a statistically notable elevation in alkaline phosphatase (ALP) and gamma-glutamyl transferase (GGT) (59). Similarly, Karam et al. reported lower AST and ALT in diabetic addicts compared with non-addicted diabetics (54). However, several animal and human studies have shown opium consumption to be associated with increased levels of AST, ALT, ALP, lactate dehydrogenase (LDH), and GGT (55,57,60,61). Opium abuse could potentiate cardiometabolic risk factors such as the apolipoprotein B/apolipoprotein A-I index, which is a strong predictor of cardiovascular death (62). Other cardiac biomarkers, including factor VII, fibrinogen, and homocysteine, could be changed by chronic opium consumption (63). These findings might explain the increased risk of MI or stroke in opium users.

Complications related to opium contamination

Opium impurities can impose a great health risk on chronic opium consumers. Chemical compounds such as lead, arsenic, and thallium are frequently present in opium extract in varying amounts. Depending on the area of opium production, the product's appearance, and the route of consumption, smugglers adulterate opium with other impurities to maximize their profit (64). The motive behind adding lead to opium is to increase the product's weight (65-69), which enhances the risk of lead poisoning among opium consumers. Signs and symptoms include abdominal pain, constipation, anorexia, anemia, and nephropathy (67-70). Acute and chronic accumulation of lead in the body causes cardiac and vascular damage with potentially life-threatening consequences (69). Blood lead levels were found to be higher in opium consumers than in non-addicted individuals (65,66,70), but not across all studies (69). Opium inhalation seems to cause a greater surge in lead levels in addicted subjects than oral use (65). Iron and calcium levels may affect lead absorption, and their serum levels may confound the lead-mortality association (71).

Opium consumption and clinical outcomes

Despite common misconceptions among opium consumers, there are higher rates of mortality and morbidity in opium addicts than in non-users. There is a paucity of data on this matter, mainly due to the lack of comprehensive large-scale, long-term, controlled studies. In-hospital mortality was significantly higher in opium-addicted patients than in non-users. Due to the pain-masking properties of opium, drug users may take one hour longer than non-users to reach the emergency department after the onset of MI symptoms (36). Morphine therapy in acute decompensated heart failure significantly increased cardiac biomarker (troponin I) levels and raised the rate of adverse events such as mechanical ventilation, intensive care unit (ICU) admission, prolonged hospitalization, and death (72). Safaii et al. reported chronic opium consumption to be associated with rehospitalization for cardiac causes within 6 months after CABG surgery (73). Other studies found no association between opium use and in-hospital mortality (39,75), but it should be noted that these studies suffered from limitations such as small sample sizes and short follow-up periods.

Conclusion

The incremental rise of opioid abuse requires us to inform and educate patients about its possible detrimental effects.
Several large-scale studies imply that opium is hazardous in cardiovascular and metabolic disorders and may even worsen these complications. Accumulating data demonstrate an association between habitual opium consumption and the risk of ischemic events, whereas popular misconceptions and short-term, small-sized studies overstate the supposed benefits of opium consumption. Therefore, clinicians and patients should be made aware of the deleterious effects of opium addiction on various vascular events. In addition, further studies are warranted to elucidate the effects of opium use in cardiovascular conditions. Nevertheless, the existing data point to opium consumption as a risk factor for CAD, and hence strategies should be developed and implemented for the prevention and cessation of opioid abuse in the at-risk population.

Ethical considerations

Ethical issues (including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy) have been completely observed by the authors.
An explanation for negligible senescence in animals

Abstract

Negligible or negative senescence occurs when mortality risk is stable or decreases with age, and it has been observed in some wild animals. Age-independent mortality in animals may lead to abnormally long maximum individual lifespans and be incompatible with evolutionary theories of senescence. The reason why there is no evidence of senescence in these animals has not been fully understood. Recovery rates are usually very low for wild animals with high dispersal ability and/or small body size (e.g., bats, rodents, and most birds). For most of these species, the only information concerning senescence is the lifespan reported when individuals are last seen or caught. We deduced the probability density function of the reported lifespan under the assumption that the real lifespan follows a Weibull or Gompertz distribution. We show that the magnitude of the increase in mortality risk is largely underestimated based on reported lifespans with low recovery probability. The risk of mortality can aberrantly appear to have a negative correlation with age even when it actually increases with age. We demonstrate that this underestimation of the aging rate for wild animals with low recovery probability generalizes to any aging model. Our work provides an explanation for the appearance of negligible senescence in many wild animals. Humans attempt to obtain insights from other creatures to better understand our own biology and to gain insight into how to enhance and extend human health. Our advice is to take a second glance before admiring the negligible senescence of other animals: this apparent ability to escape from senescence is possibly only a beautiful illusion.

Negligible or negative senescence in wild animals has been questioned (Nussey et al., 2013), as it can lead to unreasonable predictions (e.g., inconceivable maximum lifespans) and be incompatible with evolutionary theories of aging. Under the assumption of constant mortality, Botkin and Miller (1974) predicted the potential longevity of several bird species. For example, in a population of 1000 individuals, the predicted maximum longevity is 170 years for herring gulls and 102 years for both gannets Morus bassanus and fulmars Fulmarus glacialis (Botkin & Miller, 1974). However, according to the ringing recoveries from EURING (https://euring.org/data-and-codes/euring-databank), the observed maximum longevity for herring gulls, gannets, and fulmars is 34, 37, and 43 years, respectively. Clearly, there are huge gaps between the predicted and observed maximum longevities in these species. From an evolutionary perspective, mutations that are detrimental late in life will tend to equilibrate at higher frequencies than mutations detrimental early in life, causing the force of natural selection to weaken with age (Medawar, 1952). Increasing mortality can also result from pleiotropic gene effects: positive genetic effects in early life paired with negative effects in late life (Williams, 1957). Both of these evolutionary theories predict that senescence is an inevitable process for organisms (Hamilton, 1966). In accordance with these predictions, there is an abundance of evidence showing an increase in mortality risk toward old age both in wild animals, especially mammals and birds (Nussey et al., 2013), and in zoo animals (Peron et al., 2019; Tidière et al., 2016).
However, the evidence for negligible or negative senescence in some wild animals has yet to be fully addressed or explained (Baudisch, 2011; Bernard et al., 2020; Jones et al., 2014). In this study, we employed mathematical models and simulated data to show that the magnitude of the increase in mortality is largely underestimated based on reported lifespans with low recovery probability. The apparent mortality risk can show a descending trend toward old age even when the true risk increases as an organism ages. Our results reveal that the evidence of negligible or negative senescence is probably an illusion in animals.

MATERIALS AND METHODS

Senescence depends on both pace (i.e., the time scale on which mortality progresses) and shape (i.e., how sharply mortality changes with age) (Baudisch, 2011). Within a given population, the shape of senescence can be directly displayed by the plot of mortality risk against age or reflected by the parameters (or combinations of parameters) in aging models (Ricklefs & Scheuerlein, 2002; Ronget et al., 2020). In this study, we focused on the shape of senescence, that is, how mortality changes with age, based on two widely employed aging models: the Weibull and Gompertz models. Both models describe the change of mortality with age from an age of onset of senescence, commonly assumed to correspond to the age at first reproduction (Pinder et al., 1978; Wrycza et al., 2015). We extend the main results to any other aging model in the discussion.

The framework of the Weibull and Gompertz models

The Weibull model provides a close approximation to the distribution of the lifetime of an object consisting of many parts, which experiences death when any of its parts fails (Rinne, 2009; Sharif & Islam, 1980). This model is widely used to assess biological phenomena, especially in birds (Pinder et al., 1978; Ricklefs & Scheuerlein, 2002). In the Weibull model, the change in mortality risk (m) as a function of age (x) can be modeled by the expression

m(x) = m0 + (c/b)(x/b)^(c-1). (Equation 1)

In Equation 1, m0 is the age-independent mortality risk; c is the shape parameter reflecting the age-dependent change in mortality risk; b is the scale parameter linked with the pace of aging; and x is the age after the onset of the senescent stage. The corresponding probability density function of lifespan (written here for m0 = 0) is

f(x) = (c/b)(x/b)^(c-1) exp[-(x/b)^c], (Equation 2)

and the survivorship at age x is

s(x) = exp[-(x/b)^c]. (Equation 3)

The Gompertz model assumes an exponential increase in mortality with age. It is the most popular model and satisfactorily describes the shape of age-specific mortality changes in humans (Gavrilov & Gavrilova, 2015) and other mammals. For comparison with previous research (Tidière et al., 2016), the Gompertz-Makeham model, which includes a constant reflecting the age-independent mortality risk, was used. In this model, the change in mortality risk (m) as a function of age (x) can be modeled by the expression

m(x) = m0 + θ exp(λx). (Equation 4)

In Equation 4, m0, known as the Makeham term/constant, reflects the age-independent mortality risk; θ is the initial mortality at the onset of senescence; λ reflects the rate of aging (Baudisch, 2011; Finch, 1990); and x is the age after the onset of the senescent stage. The corresponding probability density function of lifespan (again for m0 = 0) is

f(x) = θ exp(λx) exp[-(θ/λ)(exp(λx) - 1)], (Equation 5)

and the survivorship at age x is

s(x) = exp[-(θ/λ)(exp(λx) - 1)]. (Equation 6)

For simplicity, we first omit the term m0, which determines mortality in the early subadult stage and has little influence on mortality toward old age, and then consider the effect of m0 on the change in mortality risk.
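As a concrete illustration of the two hazard functions, the following minimal Python sketch evaluates the Weibull and Gompertz-Makeham mortality-risk expressions above (the study itself worked in R; the parameter values here are arbitrary illustrations, not estimates from the paper):

```python
import numpy as np

def weibull_hazard(x, b, c, m0=0.0):
    # Equation 1: m(x) = m0 + (c/b) * (x/b)**(c - 1)
    # c < 1: falling risk; c = 1: constant; c = 2: linear rise; c > 2: accelerating rise
    return m0 + (c / b) * (x / b) ** (c - 1)

def gompertz_hazard(x, theta, lam, m0=0.0):
    # Equation 4: m(x) = m0 + theta * exp(lam * x)
    return m0 + theta * np.exp(lam * x)

ages = np.array([1.0, 5.0, 10.0, 15.0])
print(weibull_hazard(ages, b=10.0, c=2.0))          # risk rises linearly with age
print(gompertz_hazard(ages, theta=0.02, lam=0.2))   # risk rises exponentially with age
```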
Real lifespan and reported lifespan

It is difficult, if not impossible, to know the real lifespan of many wild animals. In field studies, lifespan data are mostly collected using the mark-recapture method (Pradel, 1996). The longevity of an individual is recorded when it is last seen or caught, with the remaining lifespan after that point unknown. The longevity collected in field studies is termed the "reported lifespan" to distinguish it from the organism's actual lifespan (Moorad et al., 2012; Xia & Møller, 2021). For wild animals with high dispersal ability (e.g., birds, bats), recovery rates are usually very low; for example, <1% of banded birds tend to be recovered (Cleminson & Nebel, 2012). When the recovery probability is <10%, which is the typical case for most bird species according to the ringing recoveries from EURING (https://euring.org/data-and-codes/euring-databank), the probability of an individual being recaptured twice is <1%. Thus, we can safely assume that a marked individual can be found at most once after marking. We also assume that the time at which a particular individual is recovered follows a uniform distribution whose domain is its real lifespan. Under these prerequisites, given that the real lifespan is x, the probability density function of the reported lifespan (y) is

f(y | x) = 1/x for 0 ≤ y ≤ x, and 0 otherwise. (Equation 7)

Equation 7 means that an individual who died at age x could be observed at any time before age x but could not be observed after age x. Using the conditional probability formula, we can obtain the joint probability density function for the real lifespan (x) and reported lifespan (y):

f(x, y) = f_X(x)/x for 0 ≤ y ≤ x. (Equation 8)

Then, we can deduce the probability density function for the reported lifespan (y):

f_Y(y) = ∫ from y to ∞ of [f_X(x)/x] dx. (Equation 9)

Equation 9 implies that the probability of a reported lifespan at age y is contributed by the individuals with a real lifespan x equal to or larger than age y. Combining Equations 2 and 9, the probability density function of the reported lifespan under the Weibull model is

f_Y(y) = ∫ from y to ∞ of (1/x)(c/b)(x/b)^(c-1) exp[-(x/b)^c] dx. (Equation 10)

Combining Equations 5 and 9, the probability density function of the reported lifespan under the Gompertz model is

f_Y(y) = ∫ from y to ∞ of (1/x) θ exp(λx) exp[-(θ/λ)(exp(λx) - 1)] dx. (Equation 11)

Figure 1: (a) The change in mortality risk against age relates to parameter c in the Weibull model, with a decreasing mortality risk (c < 1), constant mortality risk (c = 1), decelerated increase in mortality risk (1 < c < 2), linear increase in mortality risk (c = 2), and accelerated increase in mortality risk (c > 2). (b) The change in mortality risk against age relates to parameter λ in the Gompertz model, with a more accelerated increase in mortality risk for a larger λ value.

The corresponding survivorship and mortality risk against age for the reported lifespan (y) can be deduced from its probability density function. As the aging rate λ increases, there is a more accelerated increase in mortality risk (Figure 1b).

Parameter estimation

Based on the reported lifespan, we used maximum likelihood methods to estimate the ĉ or aging rate λ̂ value. All simulated data were generated in R software (R Core Team, 2021), with maximum likelihood estimation conducted in the "EnvStats" package (Millard, 2013) and the "VGAM" package (Thomas et al., 2015). We recognize that it is improper to estimate the shape parameter ĉ or aging rate λ̂ from the reported lifespan, as it is the real lifespan, rather than the reported lifespan, that follows the Weibull or Gompertz distribution.
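The sampling scheme behind Equations 7-9 is straightforward to emulate. The sketch below (Python rather than the authors' R; the sample size and parameters are arbitrary) draws real lifespans from a Weibull distribution, draws each reported lifespan uniformly over the corresponding real lifespan, and compares the two empirical survivorship curves, reproducing the qualitative contrast between the blue and red curves in Figures 2-5:

```python
import numpy as np

rng = np.random.default_rng(42)
b, c = 10.0, 2.0                           # illustrative Weibull scale and shape

real = b * rng.weibull(c, size=50_000)     # real lifespans x ~ Weibull(b, c)
reported = rng.uniform(0.0, real)          # reported lifespans y ~ Uniform(0, x), per Equation 7

# Empirical survivorship S(t) = P(lifespan > t) on a common age grid
grid = np.linspace(0.0, real.max(), 100)
S_real = (real[:, None] > grid).mean(axis=0)
S_reported = (reported[:, None] > grid).mean(axis=0)

# For c > 1 the real-lifespan curve is markedly convex, while the
# reported-lifespan curve is far flatter, mimicking weak or absent senescence.
print(S_real[:5].round(3), S_reported[:5].round(3))
```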
Our aim is to demonstrate the impropriety of this estimation: the increase in mortality risk against age is largely underestimated based on the reported lifespan. An alternative approach is to fit an equation (e.g., linear regression) for mortality risk against age. We evaluate the inherent problems of this approach in the discussion.

RESULTS

Figure 2: Survivorship curves based on real lifespan (blue lines) and reported lifespan (red lines). The simulated data were generated by Weibull distributions with shape parameters equal to 1.25 (a), 1.5 (b), 2 (c), and 3 (d). As the shape parameter increases, the convexity of the survivorship curve based on real lifespan becomes increasingly evident, while the survivorship curves based on reported lifespans are almost straight, with a slight concavity.

Figure 3: Mortality risk plotted against age based on real lifespan (blue lines) and reported lifespan (red lines), for the same Weibull distributions with shape parameters equal to 1.25 (a), 1.5 (b), 2 (c), and 3 (d). For the real lifespan, the mortality risk increases with age. For the reported lifespan, the mortality risk decreases with age (a) or remains nearly constant (b). An increasing mortality risk is shown in (c) and (d); however, the magnitude of the increase is smaller for the reported lifespan than for the real lifespan.

Figure 4: Survivorship curves based on real lifespan (blue lines) and reported lifespan (red lines). The simulated data were generated by Gompertz distributions with aging rates equal to 0.05 (a), 0.1 (b), 0.2 (c), and 0.5 (d). As the aging rate increases, the convexity of the survivorship curve based on real lifespan becomes increasingly evident, while the survivorship curves based on reported lifespans are almost concave.

Figure 5: Mortality risk plotted against age based on real lifespan (blue lines) and reported lifespan (red lines), for the same Gompertz distributions. For the real lifespan, the mortality risk increases with age (a-d). For the reported lifespan, the mortality risk remains nearly constant (a). An increasing mortality risk is shown in (b-d); however, the magnitude of the increase is smaller for the reported lifespan than for the real lifespan.

For the reported lifespan in the Gompertz simulations, the mortality risk remains nearly constant when the λ value is small (Figure 5a). An increasing mortality risk is demonstrated when the λ value is relatively large; however, the magnitude of the increase is still underestimated, as the mortality risk at the beginning is overestimated (red lines in Figure 5b-d).

The influence of non-negative m0 values

Theoretically, m0 could be any non-negative value. From Equations 1 and 4, it is evident that the value of m0 has no effect on the change in mortality risk based on the real lifespan, as m0 only influences the intercept of the trajectory. This property still holds for the reported lifespan. From the trajectories of mortality risk (Figures 6 and 7), we can see that the mortality risk at the beginning is further overestimated as m0 increases, and the trajectories become nearly parallel toward old age. Thus, the increase in mortality risk is still underestimated for nonzero values of m0 in both the Weibull and Gompertz models.

Figure 6: The change in mortality risk against age based on reported lifespan is not affected by m0 (i.e., age-independent mortality risk) in the Weibull model. m0 equals 0 for the blank lines, 0.2 for the dashed lines, and 0.5 for the dotted lines. The trajectories correspond to the Weibull model with shape parameters equal to 1.25 (a), 1.5 (b), 2 (c), and 3 (d).

Figure 7: The change in mortality risk against age based on reported lifespan is not affected by m0 (i.e., age-independent mortality risk) in the Gompertz model. m0 equals 0 for the blank lines, 0.2 for the dashed lines, and 0.5 for the dotted lines. The trajectories correspond to the Gompertz model with aging rates equal to 0.05 (a), 0.1 (b), 0.2 (c), and 0.5 (d).

Parameter estimation in the models

In the Weibull model, the change in mortality risk against age is reflected by the value of the shape parameter c. The estimated ĉ value based on the reported lifespan is largely underestimated, with an upper bound of approximately 1.5 (Figure 8a). In the Gompertz model, the estimated λ̂ value increases linearly with the real λ value, but the slope is <1, indicating that the aging rate is likewise underestimated from the reported lifespan (Figure 8b).
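The shape-parameter bias summarized above can be reproduced with a naive maximum-likelihood fit. This sketch substitutes SciPy's Weibull MLE for the authors' R packages (EnvStats, VGAM), so the exact numbers will differ from the paper's, but the qualitative pattern, with the naive estimate stuck far below the true shape parameter, is the same:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(7)
b = 10.0
for c in (1.25, 1.5, 2.0, 3.0):                   # true shape parameters, as in Figures 2-3
    real = b * rng.weibull(c, size=50_000)        # real lifespans
    reported = rng.uniform(0.0, real)             # reported lifespans
    # Improper but naive: fit a Weibull directly to the reported lifespans
    c_hat, loc, scale = weibull_min.fit(reported, floc=0)
    print(f"true c = {c:.2f} -> naive estimate c_hat = {c_hat:.2f}")
# c_hat stays well below c; the paper reports an upper bound near 1.5 (Figure 8a).
```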
DISCUSSION

Negligible or negative senescence, a pattern of stable or decreasing mortality with age, has apparently been observed in some wild animals (Altwegg et al., 2007; Jones et al., 2014; Lack, 1943a, 1943b; Nichols et al., 1997; Pinder et al., 1978; Slade, 1995). However, age-independent mortality may lead to an unrealistic maximum lifespan (Botkin & Miller, 1974) and runs counter to evolutionary theory (Hamilton, 1966; Medawar, 1952; Williams, 1957). The inability to obtain evidence of senescence in the past was generally attributed to insufficient sample sizes in field studies (Nussey et al., 2013). However, even in pioneering research that appears to show negligible senescence in wild animals, the sample sizes range from hundreds to thousands (Lack, 1943a, 1943b; Pinder et al., 1978), which is equivalent to the sample sizes in captive-animal research that clearly shows evidence of senescence (Peron et al., 2019; Ronget et al., 2020; Tidière et al., 2016). Here, we provided an explanation for the negligible senescence in wild animals by showing that an increase in mortality risk is largely underestimated based on reported lifespans with low recovery probability.

Technical issues

Based on the framework of the Weibull and Gompertz models, it can be observed that the mortality risk at the start age of the senescent stage is overestimated (Figures 3 and 5), which leads to the underestimation of the increase in mortality risk with age. Here, we demonstrate that this property extends to any other aging model. Let f_X(1) correspond to the probability of the real lifespan falling in the first time interval. As all individuals survive at the beginning (i.e., s_X(1) = 1), the mortality risk in the first time interval based on the real lifespan is

m_X(1) = f_X(1)/s_X(1) = f_X(1). (Equation 14)

A real lifespan spanning x time intervals is reported in the first interval with probability 1/x, so the mortality risk in the first time interval based on the reported lifespan is

m_Y(1) = f_Y(1) = Σ over x ≥ 1 of f_X(x)/x = f_X(1) + Σ over x ≥ 2 of f_X(x)/x > m_X(1). (Equation 15)

Therefore, the mortality risk at the start age of the senescent stage must be overestimated from the reported lifespan, and this conclusion is generalizable to any aging model.
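The inequality in Equation 15 can be checked numerically for any discretized lifespan distribution. The sketch below bins a Weibull lifespan into unit age intervals and reports each real lifespan of x intervals in its first interval with probability 1/x; the parameters are arbitrary, and the computation is only a sanity check of the algebra, not part of the original analysis:

```python
import numpy as np

b, c = 10.0, 2.0
x = np.arange(1, 61)                          # unit age intervals 1..60

# P(real lifespan falls in interval x) under a Weibull(b, c) lifespan
fX = np.exp(-(((x - 1) / b) ** c)) - np.exp(-((x / b) ** c))

# Equation 15 in discrete form: a real lifespan spanning x intervals is
# reported in the first interval with probability 1/x, so f_Y(1) = sum_x f_X(x)/x
fY1 = np.sum(fX / x)

print(f"f_X(1) = {fX[0]:.4f}, f_Y(1) = {fY1:.4f}")   # f_Y(1) > f_X(1), as claimed
```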
In this study, we used maximum likelihood methods to estimate the ĉ value in the Weibull model and the λ̂ value in the Gompertz model based on the reported lifespan, and showed that the increase in mortality risk against age was largely underestimated (Figure 8). An alternative approach is to fit an equation (e.g., linear regression) for mortality risk against age. Here, we discuss the limitations of this approach. From Equation 9, we can get

d f_Y(y)/dy = -f_X(y)/y < 0.

This implies that the probability density function of the reported lifespan is a decreasing function. In other words, there are more mature individuals with short, rather than long, reported lifespans. Thus, the fitted equation is largely dependent on the mortality risk at the start age of the senescent stage, rather than the mortality risk at a relatively old age, as there are smaller and smaller sample sizes for calculating mortality risk as organisms get older. We have demonstrated that the mortality risk at the start age of the senescent stage is overestimated (Equation 15). Thus, the approach of fitting an equation is correspondingly likely to underestimate the increase in mortality risk against age.

Figure 8: (a) In the Weibull model, the estimated value of the shape parameter c is underestimated based on the reported lifespan, with an upper bound of approximately 1.5. (b) In the Gompertz model, the estimated λ̂ value has a linear relationship with the real λ value; however, the slope is <1, indicating that the aging rate is underestimated based on the reported lifespan.

Biological significance

Understanding how mortality risk changes with age is crucial to the study of evolutionary biology, conservation science, and senescence (Baudisch, 2011). The change in mortality risk can be determined or reflected by the parameters in aging models (Ricklefs & Scheuerlein, 2002; Ronget et al., 2020). Among the numerous aging models, the Weibull and Gompertz models are widely employed, especially in mammals (Gavrilov & Gavrilova, 2015; Ronget et al., 2020) and birds (Pinder et al., 1978; Ricklefs & Scheuerlein, 2002). Parameter c in the Weibull model and parameter λ in the Gompertz model reflect the change in age-specific mortality risk. In theory, these parameters can be correctly estimated by observing the lifespans of a sample of individuals. For wild animals, lifespan is most often inferred from capture-recapture/recovery data (Catchpole et al., 1998). These data often include numerous records with missing information, which limits the inferences that can be drawn about mortality risk patterns (Metcalf et al., 2009; Ricklefs & Scheuerlein, 2001). There are many methods for parameter estimation from two types of censored data (Marshall & Olkin, 2007; Rinne, 2009): type-I censored data (i.e., monitoring is suspended when a fixed time has been reached) and type-II censored data (i.e., monitoring is suspended when a fixed number of failures has been reached). These types of censoring are clearly linked to the study span. However, the situation is more complicated for biological research, in which, in addition to the influence of the study span, missing information always results from the recovery probability (Baylis et al., 2014; Gimenez et al., 2008).
The missing information in capture-recapture/recovery data cannot be treated as either type-I or type-II censoring, as we know neither the lower limit (i.e., the fixed time in type-I censoring) for the missing values (i.e., the lifespans of unrecaptured individuals), nor whether the lifespans of unrecaptured individuals are larger than those of recaptured individuals (i.e., the fixed number of failures in type-II censoring). Many analytical methods have been developed in the last two decades, allowing inferences about age-specific survival rates/mortality risk from capture-recapture/recovery data with missing information. For example, Müller et al. (2004) and Baylis et al. (2018) developed analytical tools that can deal with imperfect detectability and unknown birth time, given that the exact death time is known. Zajitschek et al. (2009) and Colchero and Clark (2012) developed analytical tools that account for records with missing birth and death times; however, the recovery probability must be sufficiently large to estimate survival parameters with this method (Colchero & Clark, 2012). In the study by Zajitschek et al. (2009), the probability of a marked individual being recaptured at least once was 99.97714% for males and 99.99994% for females over the total survey period (5,984 person-minutes). For wild animals with high dispersal ability and/or small body size (e.g., bats, rodents, and most birds), recovery rates are usually very low (Cleminson & Nebel, 2012). For most of these species, the only information concerning senescence is the reported lifespan based on when individuals are last seen or caught (Moorad et al., 2012; Xia & Møller, 2021). We deduced the probability density function of the reported lifespan within the framework of the Weibull and Gompertz models. The key point is that the probability density function of the reported lifespan is not in accordance with the Weibull or Gompertz distribution. If this difference is ignored, the magnitude of the increase in mortality is largely underestimated. This study provides an explanation for the negligible senescence of wild animals with low recovery probability. Furthermore, our work can provide an analytical tool for evaluating the aging rate when the real lifespan follows a Weibull or Gompertz model, as the probability density function of the reported lifespan uses the same parameters (m0, b, c, θ, λ) as the Weibull and Gompertz models.

CONCLUSIONS

In conclusion, we show that actuarial senescence (i.e., the increase in mortality against age) is largely underestimated based on reported lifespans with low recovery probability, which provides an explanation for the evidence of negligible or negative senescence in many wild animals. Humans attempt to obtain insights from other creatures to better understand our own biology and to determine how best to enhance and extend human health (Bernard et al., 2020; Jones et al., 2014). Our advice is that reports of negligible senescence in animals should be interpreted with caution and rigorously analyzed. Being able to escape from senescence is possibly a beautiful illusion.

ACKNOWLEDGMENTS

The authors are grateful to L.J. Zhang for discussions of this study. This work was supported by the National Natural Science Foundation of China (No. 32170491) and the China Scholarship Council (No. 201906045020).

CONFLICT OF INTEREST

The authors declare that they have no competing interests.
DATA AVAILABILITY STATEMENT

All data are available in the main text.
Phylogeny and Metabolic Potential of the Methanotrophic Lineage MO3 in Beijerinckiaceae from the Paddy Soil through Metagenome-Assembled Genome Reconstruction

Although the study of aerobic methane-oxidizing bacteria (MOB, methanotrophs) has been carried out for more than a hundred years, there are many uncultivated methanotrophic lineages whose metabolism is largely unknown. Here, we reconstructed a nearly complete genome of a Beijerinckiaceae methanotroph from an enrichment of paddy soil obtained using nitrogen-free M2 medium. The methanotroph genome, labeled MO3_YZ.1, had a size of 3.83 Mb, a GC content of 65.6%, and 3,442 gene-coding regions. Based on the phylogenies of the pmoA gene and the genome, and on genomic average nucleotide identity, we confirmed its affiliation to the MO3 lineage and a close relationship to Methylocapsa. MO3_YZ.1 contained mxaF- and xoxF-type methanol dehydrogenases. MO3_YZ.1 used the serine cycle to assimilate carbon and regenerated glyoxylate through the glyoxylate shunt, as it contained isocitrate lyase and a complete set of tricarboxylic acid cycle-coding genes. The ethylmalonyl-CoA pathway and the Calvin-Benson-Bassham cycle were incomplete in MO3_YZ.1. Three acetate-utilization enzyme-coding genes were identified, suggesting a potential ability to utilize acetate. The presence of genes for N2 fixation, sulfur transformation, and poly-β-hydroxybutyrate synthesis enables its survival in heterogeneous habitats with fluctuating supplies of carbon, nitrogen, and sulfur.

Introduction

Aerobic methane-oxidizing bacteria, or methanotrophs, are a distinct group of bacteria that use methane as their main carbon and energy source [1,2]. The currently described aerobic methanotrophs are affiliated to Alphaproteobacteria (also known as type II), Gammaproteobacteria (type I), and Verrucomicrobia. The two methanotrophic families within Alphaproteobacteria are Methylocystaceae and Beijerinckiaceae [3-5]. These methanotrophs convert methane to methanol by using methane monooxygenase (MMO), which exists in particulate (pMMO) or soluble (sMMO) forms [2]. The pmoA gene encoding the beta subunit of pMMO is present in all aerobic methanotrophs except Methylocella, Methyloferula, and a species of Methyloceanibacter [6-8]. Phylogenetic analysis of the pmoA gene sequences in the GenBank database shows that about 20 pmoA lineages contain cultured representatives, while more than 20 pmoA lineages have no cultured representative, such as upland soil cluster alpha (USCα), upland soil cluster gamma (USCγ), the Rice Paddy Clusters, and the Lake Washington Clusters [9,10]. Currently, the analysis of metagenome-assembled genomes (MAGs) is an important approach for investigating the metabolism of these uncultivated lineages and some novel methanotrophs [11]. For example, the reconstruction and analysis of a MAG of USCγ (type I) confirmed the presence of a nearly complete serine pathway, typical of type II methanotrophs, rather than the RuMP pathway characteristic of type I methanotrophs.

Phylogeny Analysis

The full lengths of the pmoA, nifH, and 16S rRNA genes extracted from methanotrophic MAGs were used to construct phylogenetic trees with MEGA (version 6.06) to infer their phylogeny among known methanotrophs. A maximum-likelihood phylogenomic tree was also constructed with FastTree v2.1.10 [47] and visualized with iTOL after identifying and aligning a concatenated set of 120 marker proteins using GTDB-Tk v1.7.0 [48].
The genomic average nucleotide identity (gANI) and genomic average amino-acid identity (gAAI) values among the methanotrophic MAGs and their related genomes were calculated with the JSpeciesWS Online Service [49] and CompareM (https://github.com/dparks1134/CompareM, accessed on 4 April 2022), respectively. Tools of the Kostas lab were also used to calculate gANI and gAAI [50].

Succession of MOB in N-Free Medium

According to the amplicon-sequencing results for the 16S rRNA gene (Figure 1A), MOB accounted for about 1.6% of the total microorganisms in the original paddy soil. Their proportion reached 55.9% after the headspace CH4 was consumed, and after four additional rounds of enrichment in nitrogen-free liquid M2 medium, their proportion stabilized at about 36%. The dominant methanotroph in soil after enrichment was Methylosarcina (type I), which accounted for 84.3% of total methanotrophs. However, after four rounds of enrichment in N-free M2 medium, the dominant methanotrophs gradually changed to unclassified type II (Methylocystaceae), suggesting that they represent novel taxa that have not been well characterized. Amplicon sequencing of the pmoA gene gave similar results: after four rounds of enrichment, the dominant MOB rapidly shifted from Methylosarcina to Methylosinus and MO3, with the latter accounting for 33.4% of total MOB (Figure 1B). The actual proportion of MO3 may be much higher, because Beijerinckiaceae methanotrophs (type IIb), to which MO3 belongs, generally have a single pmoCAB operon [22,51], whereas Methylocystaceae (type IIa) and type I methanotrophs commonly have two pmoCAB operons in their genomes [52-54].

In most methanotroph-enrichment experiments using paddy soil, MO3 is rarely enriched [25,55,56]. We were not able to enrich it with NMS (nitrate mineral salts), nitrate-free NMS, or M2 media. The M2 medium is a fivefold dilution of M1 medium and was first designed for methanotrophs from freshwater wetlands and mildly acidic soils [27]; nitrate-free M2 medium has subsequently been used successfully for the enrichment and/or maintenance of multiple strains of Beijerinckiaceae methanotrophs, such as Methylocella palustris [57], Methylocapsa acidiphila [19], Methylocella tundrae [58], and Methylocapsa palsarum [59]. Therefore, MO3 should have physiological characteristics similar to other Beijerinckiaceae methanotrophs, such as the ability to fix N2 and low inorganic-salt requirements.

Reconstruction of MO3 MAGs

DNA from the fourth round of enrichment was used for metagenomic sequencing. After read assembly using three methods and contig binning using two methods, we obtained seven high-quality MOB MAGs (Table S1) with completeness > 92.5% and contamination < 2.63%. According to the classification results of GTDB-Tk, three MAGs belong to Methylomagnum, one belongs to Methylosinus, and three belong to an unknown Beijerinckiaceae lineage. The gANI similarity among these three Beijerinckiaceae MAGs is over 99.6%, indicating that they belong to the same species; of these, Bin.033 contains only eight contigs, with a completeness of 98.59% and contamination of 0.75% (Table 1).
In addition, we detected a complete operon of ribosomal rRNA genes and a complete operon of pmoCAB and nifHDKENX genes in Bin.033 (Figure 2, Tables S2-S4). Therefore, MAG Bin.033 was selected for subsequent analysis and labeled MO3_YZ.1 (YZ indicates that this MAG originates from the soil sample collected from Yangzhou City). (In Figure 2, Table S4 gives the full names of the enzymes encoded by these genes; the other three contigs, each less than 30 kb in length, are not shown.)

Phylogeny of MO3

MO3_YZ.1 has a pmoA gene length of 873 bp, which is within the pmoA length range of type IIb (Beijerinckiaceae) methanotrophs and much longer than that of other methanotrophs (Figure 3, Table S5). The length of the pmoA gene can itself serve as a taxonomic feature of methanotrophs. The pmoA genes of most type I methanotrophs are 744 bp long, and only a few type Ia genera, such as Methylomarinum, Methylomonas, and Methyloprofundus, have pmoA genes of 750 bp. Type IIa methanotrophs, including all species of Methylocystis and Methylosinus, have pmoA genes of 759 bp, except Methylocystis bryophila S285 (762 bp). When the length of a pmoA-like sequence is 771 or 753 bp, it must be pmoA2 or pxmA, respectively (Figure 3, Table S5). Therefore, when analyzing a methanotrophic MAG, the length of its pmoA gene sequence can help us make a preliminary judgment about the taxa to which it belongs.
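As a toy illustration of this point, the snippet below encodes the pmoA lengths quoted above as a first-pass lookup. The cutoffs come directly from the text (744/750 bp for type I, 753 bp for pxmA, 759/762 bp for type IIa, 771 bp for pmoA2, and longer genes such as the 873 bp MO3 sequence for type IIb); it is a heuristic hint, not a validated classifier, and the function name is our own invention:

```python
def pmoa_length_hint(length_bp: int) -> str:
    """Toy lookup: first-pass methanotroph grouping from pmoA gene length (bp).

    The rules encode only the lengths quoted in this section; treat the
    result as a preliminary hint to be confirmed by phylogenetic analysis."""
    exact = {
        744: "type I (most genera)",
        750: "type Ia (e.g., Methylomarinum, Methylomonas, Methyloprofundus)",
        753: "pxmA (pmoA-like gene, not canonical pmoA)",
        759: "type IIa (Methylocystis/Methylosinus)",
        762: "type IIa (Methylocystis bryophila S285)",
        771: "pmoA2 (pmoA-like gene, not canonical pmoA)",
    }
    if length_bp in exact:
        return exact[length_bp]
    if length_bp > 771:
        return "candidate type IIb (Beijerinckiaceae); cf. the 873 bp pmoA of MO3_YZ.1"
    return "no rule for this length; resolve by phylogeny"

print(pmoa_length_hint(873))   # -> candidate type IIb (Beijerinckiaceae); ...
```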
The phylogenetic analysis of the pmoA gene from MO3_YZ.1 confirms its affiliation to the MO3 lineage, which is closely related to, but distinct from, Methylocapsa, the sole pmoA-containing genus of Beijerinckiaceae (Figure 4A). The phylogeny of nifH genes also shows a close relationship of MO3_YZ.1 to Beijerinckiaceae methanotrophs (Figure S1). However, when its 16S rRNA gene is used for phylogenetic tree construction, MO3_YZ.1 falls squarely into the Methylocystis/Methylosinus group, i.e., the Methylocystaceae methanotrophs (type IIa; Figure 4B), and shows 98.5% 16S rRNA sequence identity with Methylosinus sp. C49. The phylogenomic tree based on a concatenated set of 120 marker proteins confirms the placement of MO3_YZ.1 within Beijerinckiaceae (Figure 5A). The maximum values of gANI and gAAI between MO3_YZ.1 and other known Beijerinckiaceae MOB genomes are 74% (by the JSpeciesWS Online Service) and 71% (by CompareM), respectively (Figure 5B). When the tools of the Kostas lab are used for the calculation, the maximum values of gANI and gAAI are 79% and 69%, respectively (Figure S2). Based on these similarity values, whether MO3 should be a new genus of Beijerinckiaceae or a new species of Methylocapsa cannot yet be concluded. Table S5 shows further details.
The phylogenies of the 16S rRNA and pmoA genes from MO3_YZ.1 are thus not congruent, as they affiliate with different families. Such a case has not been reported among the known type II methanotrophs. 16S rRNA genes often fail to assemble and bin correctly due to their conserved and repetitive nature [60], and caution is warranted when the 16S rRNA gene of a MAG yields a taxonomic classification incongruent with the taxonomic identity of that MAG [61]. Given the conservation of the 16S rRNA gene, the 16S rRNA gene of MO3_YZ.1 would be expected to be most closely related to Methylocapsa. Therefore, the assembled Methylosinus-like 16S rRNA gene in MO3_YZ.1 very likely does not belong to this MAG; it may be a fragment of contaminating sequence from a Methylosinus species, given the large proportion of Methylosinus in the enriched culture.

Methane-Oxidation Pathway of MO3

We reconstructed the central metabolic pathways of MO3 on the basis of the gene-function annotation of MO3_YZ.1 (Figure 6). MO3_YZ.1 possesses a complete operon of pmoCAB genes coding for the particulate methane monooxygenase and has two orphan pmoC genes (Figure 2). According to an alignment of the deduced amino-acid sequences of pmoA genes, the amino acids His38, Met42, Asp47, Asp49, and Glu100, which form the tricopper cluster site, are highly conserved in MO3_YZ.1 and other methanotrophs (Figure S3), as previously reported [62]. As in Methylocapsa species, other pmoA-like genes (pxmA and pmoA2) and the soluble methane monooxygenase-coding genes are absent in MO3_YZ.1 [3]. We further identified coding genes for mxaF- and xoxF-type methanol dehydrogenases (MDH), which require calcium and lanthanide in their active centers, respectively [63,64]. The xoxF-type MDH is a homologue of the canonical mxaF-type MDH and appears to be more widespread than the latter. Because the xoxF-type MDH uses rare-earth elements as part of its catalytic center, the expression and activity of these two MDHs depend on the availability of rare-earth elements [63]. The xoxF gene of MO3 shows an amino-acid identity of 86.6% to that of Methylocapsa aurea (WP_036262132) and
more than 79% to those of other Beijerinckiaceae methanotrophs, such as Methylocapsa palsarum NE2 [51], Ca. Methyloaffinis lahnbergensis [13], and Methylocella silvestris [65]. We also recovered a complete gene set of the tetrahydromethanopterin-dependent (H4MPT) pathway for C1-carbon transfer during the oxidation of formaldehyde to formate, and the fdh gene for the nonreversible formate dehydrogenase. MO3_YZ.1 thus catalyzes the final oxidation step of formate to CO2 and produces NADH, which can further drive the production of ATP through the respiratory chain.
However, neither the coding genes of carbon-monoxide dehydrogenase nor those of [NiFe] hydrogenase were identified in MO3_YZ.1, suggesting that MO3 cannot use CO and H2 as alternative energy sources in the way that Methylocapsa gorgona MG08 does [22].

Carbon Assimilation of MO3

We detected a complete gene set of the serine cycle for the assimilation of C1 from formate. Formate is condensed with tetrahydrofolate (H4F) to form formyl-H4F, which is transformed to methylene-H4F via the H4F pathway; methylene-H4F then reacts with glycine to form serine (Figure 6). The regeneration of glyoxylate is a key pathway for carbon assimilation in type II methanotrophs possessing the serine cycle [66]. The coding gene (aceA) of the key enzyme of the glyoxylate shunt (isocitrate lyase) and a complete gene set of the tricarboxylic acid (TCA) cycle are observed in MO3_YZ.1, implying that the acetyl-CoA produced in the serine cycle can subsequently be oxidized to glyoxylate with the assistance of some TCA cycle enzymes. This regeneration pathway of glyoxylate is common in type IIb but absent in type IIa methanotrophs, which use the ethylmalonyl-CoA (EMC) pathway to accomplish the same task [67]. Although many coding genes of EMC pathway-related enzymes are also detected in MO3_YZ.1, the coding genes of four enzymes are absent (croR for 3-hydroxybutyryl-CoA dehydratase, ccr for crotonyl-CoA carboxylase/reductase, msd for 2-methylfumaryl-CoA hydratase, and mcd for methenyltetrahydromethanopterin cyclohydrolase), indicating that MO3, like other Beijerinckiaceae methanotrophs, cannot regenerate glyoxylate through the EMC pathway. In MO3, the acetyl-CoA produced in the serine cycle can also be converted to poly-β-hydroxybutyrate (PHB; Figure 6). This carbon-storage polymer is also an endogenous source of reducing power [68] and may help MO3 adapt to environments with fluctuating substrate supplies [55,69].
As expected, the RuMP pathway, the major carbon-assimilation pathway in type I methanotrophs, is not found in MO3_YZ.1: the coding genes of its two key enzymes (hps for 3-hexulose-6-phosphate synthase and phi for 6-phospho-3-hexuloisomerase) are absent. The coding gene of ribulose-bisphosphate carboxylase, the key enzyme of the Calvin-Benson-Bassham (CBB) cycle for CO2 fixation, is also absent in MO3_YZ.1. In this respect, MO3 is similar to Methylocapsa gorgona MG08 [22] and differs from several other type IIb strains, including Methylocapsa acidiphila [19], Methylocapsa palsarum NE2 [51], Methylocella silvestris BL2 [70], and Methyloferula stellata AR4 [71], which have a complete CBB cycle. MO3_YZ.1 encodes the Embden-Meyerhof-Parnas and pentose phosphate pathways for carbohydrate metabolism. In addition, MO3_YZ.1 carries all the genes necessary for acetate metabolism, such as acs for acetate-CoA synthetase, ackA for acetate kinase, and pta for phosphotransacetylase (Figure 6). However, whether MO3 can grow on acetate as the sole substrate, like Methylocapsa aurea [72], is unknown: Methylocapsa gorgona MG08 also carries these genes, yet cannot grow on acetate as the sole carbon source [22]. An efficient membrane transporter for acetate (the acetate permease ActP) may be necessary, but very little is currently known about this [52,73].

Nitrogen and Sulfur Metabolism of MO3
For nitrogen metabolism, MO3_YZ.1 possesses a complete nifHDKENX operon for a molybdenum-containing nitrogenase, like other type II methanotrophs [22,74], as well as genes for assimilatory nitrate reduction (nasAB and nirA), dissimilatory nitrite reduction to ammonium (nirBD), an ammonium transporter (amt), a nitrate/nitrite transport protein (nrt), and putrescine transport-system proteins (potFGHI) (Figure 6). The presence of these genes suggests that MO3 can utilize multiple types of nitrogen sources. As expected, genes encoding the denitrification pathway are missing in MO3_YZ.1, as in many other aerobic methanotrophs [75]. For sulfur metabolism, MO3_YZ.1 possesses a series of genes in the sulfur-assimilation pathway (Figure 6). These include cysUWA (encoding sulfate/thiosulfate transport-system permease/ATP-binding proteins); cysNC, cysH, and cysJ (encoding enzymes that catalyze the subsequent sulfate-reduction steps to sulfide); and genes for the production of sulfur-containing amino acids from sulfide (such as cysE and cysK) (Table S2). In addition, some genes encoding sulfur-oxidation enzymes, such as sulfite dehydrogenase (sor), thiosulfate/3-mercaptopyruvate sulfurtransferase (sseA), and S-sulfosulfanyl-L-cysteine sulfohydrolase (sox), are present in MO3_YZ.1. However, studies of the sulfur metabolism of aerobic methanotrophs are still relatively few [14,76], and whether sulfur metabolism is related to the carbon metabolism, energy acquisition, and environmental adaptability of methanotrophs remains to be investigated.

Conclusions
We enriched the uncultured Beijerinckiaceae methanotroph MO3 from paddy soil using the nitrogen-free M2 medium and reconstructed a nearly complete genome of this lineage. Based on phylogenomic analysis, the closest relative of MO3 is Methylocapsa, and MO3 also exhibits characteristics similar to Methylocapsa in its carbon-assimilation pathways. Its 16S rRNA gene, however, was most closely related to Methylosinus rather than Methylocapsa, probably reflecting the misassembly of 16S rRNA genes that is typical of metagenomic data.
MO3 encodes diverse metabolic capabilities related to nitrogen, sulfur, and PHB, implying an ability to survive under a variety of stress conditions, such as low nitrogen availability.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms10050955/s1. Figure S1: Neighbor-joining phylogenetic tree of the nifH gene from MO3_YZ.1; the tree, based on 290 amino-acid positions, was constructed using MEGA (version 6.06) and evaluated with 1000 bootstraps; bootstrap values higher than 50% are given at the branch nodes, and the scale bar indicates 2% amino-acid sequence divergence. Figure S2: Matrix of pairwise genomic average nucleotide identity (gANI) and genomic average amino-acid identity (gAAI) values for MO3_YZ.1 and its relatives; the genome sequences are ordered as in Figure 4, with gANI in the lower-left triangle and gAAI in the upper-right triangle; both gANI and gAAI were calculated using tools from the Kostas lab. Figure S3: Alignments of the amino-acid sequences of the PmoA subunit from methanotrophs; the amino acids that form the tricopper cluster site are shown in blue. Table S1: Genome statistics of the obtained metagenome-assembled genomes (MAGs) affiliated with methanotrophs; Table S2: Gene features of Bin.033 predicted by Prokka v1.14.5; Table S3: Gene features of Bin.033 annotated by the RAST tool kit; Table S4: Gene functions of Bin.033 annotated by BlastKOALA against the Kyoto Encyclopedia of Genes and Genomes database; Table S5: Lengths of pmoA-like genes in the genomes of currently known aerobic methanotrophs.

Author Contributions: Conceptualization, Y.C. and Z.J.; methodology, Y.C. and J.Y.; data curation, Y.C. and J.Y.; writing-original draft preparation, Y.C.; writing-review and editing, Y.C., J.Y. and Z.J.; project administration, Y.C. and Z.J.; funding acquisition, Y.C. and Z.J. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.
A combined molecular dynamics and computational spectroscopy study of a dye-sensitized solar cell

An organic dye-sensitized solar cell consisting of a squaraine molecule attached to a TiO2 surface is modeled using first-principles molecular dynamics and time-dependent density functional theory. The system is surrounded by solvent molecules that are treated at the same level of theory as the dye molecule and the surface. The effect of the solvent on the optical properties is investigated by computing many absorption spectra for various configurations along a molecular dynamics trajectory. It is shown that the dynamical effects induced by thermal fluctuations have a strong effect on the optical properties and that satisfactory agreement with experiments is achieved only when those thermal effects are accounted for explicitly.

Introduction
In recent years, shipments of photovoltaic panels have grown drastically, reaching an estimated 17.5 GW in 2010 [1]. At the same time, the average price per watt has fallen sharply. The vast majority of photovoltaic devices use semiconductor p-n junctions, and most devices are based on either silicon or CdTe thin films. Although these technologies are well established on an industrial scale, there are also alternative types of solar cells which are currently attracting considerable attention. Such alternative technologies cannot yet compete with the more traditional devices in terms of manufacturing cost, stability, efficiency and mass production, but they hold, at least in principle, the potential to compete in this rapidly expanding market in the future. One prominent example of such alternative systems is dye-sensitized solar cells (DSSCs) [2-4], where light is absorbed by dye molecules that are adsorbed on the surface of a nano-porous wide-gap semiconductor (usually TiO2 in the anatase phase). Following light absorption by the dye molecules, the excited electron is transferred into the conduction band of the semiconductor, while the remaining hole is filled by electron transfer, typically from an electrolyte or a hole-conducting polymer. Such DSSCs have achieved efficiencies of up to 11% [5,6]. In those high-efficiency DSSCs, dyes containing transition-metal complexes, often based on Ru, are employed. The choice of the dye complex is obviously fundamental to the functioning, efficiency and lifetime of the cell, and most studies of DSSCs are centered on this topic. The relative alignment of the dye molecular orbitals and the substrate energy levels determines the cell's voltage V_oc, while the geometric overlap of the orbitals determines the lifetime of the excited state and the charge-injection efficiency. These are the key properties determining the overall efficiency of the device, and a detailed understanding of them at the atomistic level might guide the development of better dye-substrate combinations. Despite the potential benefits of precise and systematic computational investigations of DSSC devices, however, such studies are very often limited to the dye complexes alone or to minimal models of the substrate in terms of very small TiO2 clusters. More comprehensive computational studies must confront the complex nature of DSSC systems: a realistic model of a typical dye together with the semiconductor substrate contains at least 100-200 atoms. Moreover, the substrate is often best described using a two-dimensional (2D) periodic model for the surface.
Such system sizes, as well as the periodic boundary conditions, are difficult to handle with sophisticated wavefunction-based quantum chemistry methods, which could otherwise provide precise information about the electronic structure and the electronic excitations of these systems. While less precise than wavefunction-based methods, density functional theory (DFT)-based computations can routinely handle systems of that size, but they are limited to the electronic ground state. Time-dependent DFT (TDDFT), an extension of standard DFT that is capable of describing excited states, is computationally much more demanding than basic ground-state DFT. Still, TDDFT studies of DSSC systems are possible, although the large system sizes and the broad spectral region of interest (from the infrared to the ultraviolet over the whole visible range) quickly push TDDFT computations to their limits. With few exceptions [7-9], computational models of solar-cell devices do not normally include the electrolyte surrounding the sensitized surfaces, or include it only in an approximate way using continuum solvation models [10]. While such effective models can capture many essential features of the solvent, they are nevertheless limited because they only model the solvent's dielectric constant and do not take into account the precise interactions between the solvent and the solute at the atomic level. In this paper, we present a computational study of a dye-sensitized semiconductor surface that explicitly includes the water molecules [11-15] surrounding the dye. The water is treated at the same level of theory as the surface and the dye, and its complex interaction with the solute is unraveled using Car-Parrinello (CP) molecular dynamics (MD) [16]. From the MD trajectory, snapshots of the system at different times are taken, and the optical properties of each snapshot configuration are analyzed using TDDFT. Such a systematic study, including dynamical effects and solvent molecules, allows us to unravel the relative influence of these factors on the experimentally observed spectra. The need to compute many optical spectra over the whole visible frequency range in a system containing more than 400 atoms has pushed this study to the limits of present-day computers and algorithms.

Model system
Our computational model consists of an organic dye molecule, squaraine, adsorbed on a two-layer periodically repeated TiO2 slab. We have chosen squaraine as the dye in our model computations because it is representative of the organic dyes that have been studied recently as alternatives to dyes containing transition-metal complexes. Another reason for choosing this molecule is that the relatively simple adiabatic approximations to TDDFT are able to describe its absorption properties very well, which is not the case for other molecules, including Ru-based complexes. The exposed surface of the slab is the (101) surface of the anatase phase, using a (1 × 4) primitive cell. We have chosen this surface because it is the most stable anatase surface [17,18] and is commonly investigated in the context of DSSC models. The squaraine molecule as described in [19] was simplified by replacing the octyl substituents with methyls. Such a simplification does not lead to significant changes in the electronic structure of the molecule [19]. The dimensions of the orthorhombic simulation cell are 19.35 × 28.61 × 54.28 a_0^3, where a_0 is the Bohr radius. Periodic boundary conditions are applied to this unit cell.
We have checked that these dimensions are sufficient to guarantee a good compromise between numerical cost and the minimization of spurious interactions between the repeated images of the dye molecules in the directions parallel to the surface. The minimal distance between repeated images of the dye is about 5 Å. In the direction perpendicular to the surface, the distance between the lowermost TiO2 layer of the repeated slab and the top of the dye molecule is approximately 6 Å. The space surrounding the dye and the surface slab has been filled with a total of 90 water molecules, representing liquid water in ambient conditions. This system is depicted in figure 1. The surface slab plus dye system consists of 159 atoms; together with the 90 water molecules, the computational model therefore comprises 429 atoms. Please note that the squaraine model system described above, but without the surrounding water, is exactly equivalent to the model used previously in a study of electronic transitions using TDDFT [20]. Adding the surrounding water does not simply mean repeating the vacuum-phase computations with more atoms present. Instead, the liquid solvent introduces complex interactions with the solute, and its global effect should be accounted for by statistical sampling over a sufficiently long time span. While it was enough in [20] to find the minimum-energy configuration of the surface-dye system and to calculate one optical spectrum of this configuration, we aim here at obtaining the statistically averaged spectrum by performing many TDDFT computations along an MD trajectory.

Computational method
The computations presented here use a plane-wave basis set. The interactions between the ions and electrons are described using ultrasoft pseudopotentials [21]. We use a kinetic-energy cutoff of 25 Ry for the wavefunctions and 200 Ry for the charge density. In the given model system, these cutoff values lead to a basis set of 181,600 plane waves for the orbitals and 717,700 plane waves for the charge density. The Brillouin zone is sampled using the Γ-point only. The Perdew-Burke-Ernzerhof (PBE) functional [24] is employed for the DFT ground-state computations, the CP MD and the TDDFT spectral calculations, where in the last case the adiabatic approximation [25] is used. All calculations have been performed using the Quantum ESPRESSO suite of programs [22,23]. The CP molecular dynamics was carried out at a fixed unit-cell volume and an ionic temperature of 300 K. The fictitious electron mass in the CP Lagrangian was 700 a.u. and a time step of 0.121 fs was used. After an initial equilibration run of 1 ps, a total of 120,000 time steps were performed, leading to an ionic trajectory of 14.5 ps. During this trajectory, 78 electronic absorption spectra were computed, one approximately every 181 fs. TDDFT computation in a system comprising 429 atoms, using a plane-wave basis set containing more than 180,000 basis functions, over the whole optical range is a daunting task. It has become possible thanks to a recently developed recursive scheme [23,26-29], where instead of individual excitation energies the complete dipole response function is computed using a Lanczos algorithm. Following other applications of this method [20,30], 2500 Lanczos steps were performed for each of the three Cartesian directions of the polarization, leading to the frequency-dependent dipole polarizabilities α_xx(ω), α_yy(ω) and α_zz(ω), which in turn determine the photoabsorption spectra.
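The post-processing step from polarizabilities to plotted spectra, with the broadening and polarization averaging described in the next paragraph, is simple enough to sketch in a few lines of Python. The single-pole model response and all numerical values below are illustrative placeholders, not actual Quantum ESPRESSO output; the physics used is only that the photoabsorption is proportional to ω Im ᾱ(ω), with ᾱ the average over the three Cartesian polarizations.

```python
import numpy as np

# Minimal sketch: broadened, orientation-averaged absorption from the three
# diagonal polarizabilities alpha_ii(omega + i*eta). Placeholder data only.
def absorption(omega, alpha_xx, alpha_yy, alpha_zz):
    """Isotropically averaged absorption (arbitrary units)."""
    alpha_iso = (alpha_xx + alpha_yy + alpha_zz) / 3.0
    return omega * np.imag(alpha_iso)

eta = 0.027                                # Lorentzian broadening (eV)
omega = np.linspace(1.5, 2.5, 1000)        # photon energy grid (eV)
pole = 1.0 / (1.92 - (omega + 1j * eta))   # schematic transition at 1.92 eV
sigma = absorption(omega, pole, pole, pole)
print(f"main peak at {omega[np.argmax(sigma)]:.3f} eV")
```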
For the spectra plotted in the following, a small imaginary component of 0.027 eV is added to the frequency, leading to a broadening of the spectral peaks. The depicted absorption spectra are the average over the three directions of possible light polarization. The employed Liouville-Lanczos algorithm, although computationally more efficient, is equivalent to other linear-response implementations of TDDFT. It has also been shown that the resulting spectra are essentially identical to absorption spectra obtained using real-time implementations of TDDFT [23,27,30].

Results and discussion
An analysis of the electronic structure projected onto atomic orbitals allows one to examine the rearrangement of electronic charge in the presence of water. In fact, the addition of the solvent molecules leads to a slight charge transfer to the TiO2 slab: about 0.3 e are transferred to the slab when water is added. Given that the slab consists of a total of 32 TiO2 formula units, this is a very small quantity. At the same time, about 0.15 e are transferred to the squaraine molecule. These charge transfers lead to a shift in the relative level alignment: while in the dry system the dye highest occupied molecular orbital (HOMO) lies about 0.55 eV beneath the conduction-band edge of the semiconductor, this energy difference is strongly increased to 1.4 eV in the solvated case. This shift has, however, little consequence for the absorption spectrum: the strongest absorption lines determining the shape of the spectrum are intra-dye transitions. Direct transitions from the dye HOMO to the conduction-band edge cannot be discerned in the spectra, as shown below. In figure 2, we show the computed photoabsorption spectrum of the squaraine dye in the presence of the solvent. The spectrum is computed from a configuration of the system after an initial equilibration dynamics of about 1 ps. For comparison, the figure includes the experimental spectrum from [19] and the spectrum of the adsorbed dye in vacuum, in its minimum-energy configuration. The latter two spectra have already been discussed in [20]. Upon contact with the solvent and subsequent equilibration, the main absorption peak of the dye is red-shifted by about 0.12 eV and now falls almost exactly on top of the experimental peak at 1.92 eV. Such precise agreement between experiment and theory is certainly fortuitous, since it is well beyond the expected precision of the employed adiabatic functionals. While the experimental spectrum shows a pronounced shoulder at about 2.03 eV, above the main absorption peak, the computed spectrum exhibits a shoulder at lower frequencies, around 1.76 eV. In the spectrum computed in vacuum, no shoulder is present. One important point to note, however, is that the spectrum obtained in vacuum shows three unphysical peaks at low frequency (not shown in figure 2), at 0.72, 1.05 and 1.40 eV. Those peaks have been discussed in [20], where it was shown that they correspond to direct charge-transfer excitations from the dye to the semiconductor.

Figure 2: Absorption spectra for the squaraine dye adsorbed on the TiO2 surface slab. Black curve: absorption in the dry system, where the geometry has been determined by energy minimization, as discussed in [20]. Red: absorption in the water solution after initial equilibration at 300 K. Green: the experimental spectrum from [19].
It is well known that such charge-transfer excitations are wrongly described in adiabatic GGA [31-33], which systematically underestimates the excitation energies in such cases, sometimes by as much as 1 eV or more. In our computations for the solvated complex, no such direct charge-transfer excitations are present, and the low-energy shoulder at 1.76 eV is the spectral feature with the lowest energy in that particular configuration. At the level of a single equilibrated snapshot of the solvated system, we can thus conclude that the main effects introduced by the solvent are a red shift of the main absorption peak, a broadening of the spectrum and a suppression of direct charge-transfer excitations. More generally, the solvent influences the solute and its optical properties in various ways. On the one hand, the presence of the solvent determines the effective dielectric constant around the solute. This dielectric constant is crucial for the electrostatics and influences both the ground state and excited electronic states. The solvent-determined dielectric constant can be easily accounted for using standard continuum solvent models [10]. A typical effect of this dielectric screening is the common 'bathochromic shift', where spectra undergo a red shift upon solvation. On the other hand, due to hybridization, the solvent's orbitals may participate actively in the photoabsorption, even at frequencies where the pure solvent is not absorbing. In such cases, the presence of the solvent may lead to enhanced absorption strength at given frequencies. A third important influence of the solvent is through the nuclear forces exerted by the solvent molecules on the solute, which affect its structure. This last effect cannot be captured by studying a single configuration of the solvated system as above; it must be observed along an MD trajectory. During the MD trajectory for our system, the absorption spectra maintain their main features, i.e. most configurational snapshots lead to one main peak in the optical range, but the frequencies of the principal absorption peak are highly dependent on the precise molecular configuration and vary between 1.84 and 2.07 eV. Obviously, it is tempting to look for a correlation between these frequencies and geometric properties of the corresponding snapshots. In this respect, one of the most striking features of the dynamics is that the squaraine's geometry varies considerably over time. While the molecule in its equilibrium configuration without solvent is essentially planar, it undergoes slow but strong deviations from the planar configuration during the dynamics. In the middle and right panels of figure 1, we present a side view of the dye in its planar and bent configurations. The time evolution of the bending angle is depicted in figure 3, where it can be seen that during the dynamics the dye deviates by up to 47° from a planar configuration. On examining the dependence of the main absorption peak on the bending angle, however, we could find no correlation between these features. It is therefore interesting to note that the most striking characteristic of the geometric evolution of the solute is not directly related to variations of the absorption frequencies. In figure 3, it can also be seen that the typical time scales of the fluctuations of the solute lie in the range between 0.5 and 5 ps. Our sampling of system geometries every 181 fs therefore leads to a faithful representation of the influence of the dynamics on the optical properties.
A second possible influence on the absorption frequency is the surrounding water. Are fluctuations in the first solvation shell responsible for the observed shifts in the absorption? In order to examine this point, we have chosen a subset of 37 snapshots from the MD trajectory, spaced equally in time. For those snapshots, we have removed all 90 surrounding water molecules and computed the optical spectrum in vacuum, in the instantaneous molecular geometry and without any further molecular relaxation. In this way, we could compare the frequencies in identical geometries, with and without the surrounding water. The result is depicted in figure 4, where the frequency of the main absorption peak in the dry system is plotted against the frequency in the solvated system for each of the 37 snapshots. It is evident from that figure that the main effect of water is to red-shift the frequencies. Apart from this global shift, however, the frequencies with and without water are very well correlated, showing that the variability of the absorption frequency is due to intrinsic variations in the geometry of the solute and not to changes in the instantaneous solvation geometry. We therefore infer that the global effect of the solvent is rather subtle: on the one hand, it is responsible for a general red shift of the absorption frequencies, a fact that can probably be accounted for by standard continuum solvent models. On the other hand, the solvent induces continuous geometrical changes of the solute, which in turn influence the absorption frequency. The variations of the absorption frequency thus induced are larger than the overall red shift due to the presence of the water molecules. There is no obvious dependence of the frequency on any single structural characteristic (angles, bond distances, etc.). This seems to indicate that actually computing the spectra for many configurations is the only way of systematically accounting for the observed variance in the spectra. Finally, in figure 5, we show the resulting absorption spectrum obtained by averaging the individual spectra of the snapshots over all configurations. We also show the averaged spectrum of the dry system as described above and the experimental absorption from [19]. Recall that the solute geometries in the dry system are exactly the same as those in the solvated system, being obtained from the same MD trajectory. It is remarkable how well the averaged spectra reproduce the experimental curve: both computed spectra present one main absorption peak and one shoulder at higher energies. This is at variance with the spectra of the single snapshots shown in figure 2, where no shoulder at higher frequencies is present. The main absorption peak is found at 1.97 eV in the averaged solvated system and at 2.0 eV in the averaged dry system, compared with the experimental value of 1.92 eV. We ascribe this deviation of 0.05 eV at least partly to the semi-local adiabatic functional employed. The averaged spectrum also reveals that the precise coincidence of the main peaks in figure 2 holds only for the snapshot shown there. The high-energy shoulder of the averaged spectrum is found at 2.06 eV in the solvated system and at 2.07 eV in the dry system; the experimental shoulder is found at 2.03 eV. In figure 5, the effect of the water molecules on the absorption spectra is also shown.
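The two operations just described, averaging individual spectra over snapshots and correlating dry versus solvated peak frequencies, amount to a few lines of array arithmetic. The sketch below uses synthetic Gaussian "spectra" and randomly drawn peak positions as stand-ins for the actual TDDFT output of the 37 snapshots.

```python
import numpy as np

# Sketch of configurational averaging and the dry/solvated comparison.
# All data below are synthetic placeholders for the TDDFT snapshot spectra.
rng = np.random.default_rng(0)
omega = np.linspace(1.5, 2.5, 500)                       # photon energy (eV)
peaks_wet = rng.uniform(1.84, 2.07, size=37)             # solvated peak range
peaks_dry = peaks_wet + 0.12 + rng.normal(0, 0.01, 37)   # blue-shifted, correlated

spectra_wet = np.exp(-(omega - peaks_wet[:, None])**2 / (2 * 0.03**2))
spectra_dry = np.exp(-(omega - peaks_dry[:, None])**2 / (2 * 0.03**2))

# Time-averaged spectra: a plain mean over all snapshot spectra.
avg_wet, avg_dry = spectra_wet.mean(axis=0), spectra_dry.mean(axis=0)

# Correlation of the main-peak frequencies with and without water: a value
# near 1 means the peak variability is intrinsic to the solute geometry
# rather than to the instantaneous solvation shell.
r = np.corrcoef(peaks_wet, peaks_dry)[0, 1]
print(f"dry/solvated peak correlation: r = {r:.3f}")
print(f"averaged peaks: wet {omega[np.argmax(avg_wet)]:.2f} eV, "
      f"dry {omega[np.argmax(avg_dry)]:.2f} eV")
```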
Since all the snapshots that underlie the spectra in figure 5 have the same solute geometry, the difference between the solvated and dry spectra can be clearly attributed to the electronic effects of the water molecules. On the one hand, the strong red shift upon solvation is a typical bathochromic shift, as discussed above. On the other hand, the overall reduction of the absorption intensity when the water is removed suggests that the water molecules, interacting with the squaraine molecule, contribute to the optical response of the system even in the visible region, where pure liquid water is transparent.

Conclusions
We have performed CP MD simulations of an organic dye attached to a TiO2 surface and surrounded by water in ambient conditions. The optical properties of this system were investigated by TDDFT computations at geometrical snapshots along the MD trajectory. We find that the inclusion of solvent molecules, treated at the same level of theory as the solute and the TiO2, considerably changes the optical absorption spectra, and that a meaningful comparison with experiment is possible only in the solvated case. Moreover, the spectra of single snapshots of this system do not show the characteristic structure of the measured spectrum, namely one main absorption peak and a shoulder at higher frequencies. These features appear only in the averaged spectra. These results show that the optical activity of such a system may depend dramatically on the dynamical modifications of the molecular configuration induced by thermal fluctuations. No static model can properly describe these effects, which are much stronger than the typical errors induced by the approximate functionals used in practical TDDFT computations. The computation of optical spectra by time-averaging individual spectra calculated for selected configurations is a demanding task, which can however be addressed using recent developments in the implementation of TDDFT. A major issue still to be addressed and understood is the extent to which the optical properties calculated for static, minimum-energy configurations of isolated molecules are robust with respect to the combined effects of thermal fluctuations and the interaction with solvent molecules. This will certainly be a theme for further investigations.
Cyclotron radiation emission spectroscopy signal classification with machine learning in Project 8

The cyclotron radiation emission spectroscopy (CRES) technique pioneered by Project 8 measures electromagnetic radiation from individual electrons gyrating in a background magnetic field to construct a highly precise energy spectrum for beta-decay studies and other applications. The detector, magnetic-trap geometry and electron dynamics give rise to a multitude of complex electron signal structures which carry information about distinguishing physical traits. With machine-learning models, we develop a scheme based on these traits to analyze and classify CRES signals. Proper understanding and use of these traits will be instrumental in improving cyclotron-frequency reconstruction and boosting the potential of Project 8 to achieve world-leading sensitivity on the tritium endpoint measurement in the future.

Introduction
The Project 8 experiment aims to perform an ultra-precise measurement of the tritium beta-decay endpoint to directly measure or constrain the effective mass of the electron antineutrino, and to determine the mass-hierarchy ordering. To this end, the collaboration has pioneered the Cyclotron Radiation Emission Spectroscopy (CRES) technique [1], in which electromagnetic radiation from the cyclotron motion of individual electrons in a magnetic field B is used to reconstruct an energy spectrum from the angular frequency:

ω_c(t) = eB / (m_e + K_e(t)/c²),    (1)

where B is the magnetic-field magnitude, e and m_e are the electron charge magnitude and mass, K_e(t) is the electron's kinetic energy, and c is the speed of light. In the nonrelativistic regime, this gives a low-energy limit of 2.8 × 10^10 Hz in a 1 T magnetic field. A CRES signal is reconstructed via a series of short-time discrete Fourier transforms (DFTs) to produce a frequency spectrum as a function of time (a spectrogram). Due to radiative energy loss, the signal exhibits a pseudo-linear behavior in this time-frequency plane; Figure 1 shows an example spectrogram with several such signals. We refer to these CRES signals as tracks.

Figure 1: A multi-track electron event featuring five tracks. The electron is born in the trap around 7 ms and scatters with a residual gas molecule around 32 ms, abruptly changing the frequency of all tracks. This event is shown for illustrative purposes only and is not from the data sets used in this work.

For this work, we are primarily concerned with the apparatus and data from Phase I of Project 8; the data come from a campaign performed in 2015 with 83mKr as the electron source gas. The Phase I detector [2] featured a rectangular waveguide to house the source gas and transport emitted radiation from the source to an antenna, as well as a conductive short acting as a reflector opposite the antenna. The waveguide also sports a configurable magnetic bottle trap formed by pinch coils wound along the axis of the ∼1 T background magnetic field, which we call the axial direction or simply ẑ by definition. In such a trap, electrons with momentum mostly in the (x̂, ŷ) directions can be constrained in ẑ by the small O(mT) influence of the trap coils. In the "bathtub trap" (the only configuration explored here), two coils of equal polarity source the trap by creating a pair of potential barriers for the electrons, as illustrated in Figure 2.
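Equation 1 is simple enough to check numerically. The sketch below (an illustration, not analysis code) reproduces the 2.8 × 10^10 Hz low-energy limit at B = 1 T and evaluates the frequency near a 17 keV conversion-electron energy.

```python
import scipy.constants as const

# Numeric check of Equation 1: f_c = e*B / (2*pi*(m_e + K_e/c^2)).
def cyclotron_frequency(kinetic_energy_eV: float, B: float) -> float:
    K = kinetic_energy_eV * const.e                   # kinetic energy in J
    gamma_m = const.m_e + K / const.c ** 2            # relativistic mass
    return const.e * B / (2 * const.pi * gamma_m)

print(f"{cyclotron_frequency(0.0, 1.0):.3e} Hz")      # ~2.8e10 Hz low-energy limit
print(f"{cyclotron_frequency(17.8e3, 1.0):.3e} Hz")   # near a 17 keV Kr line
```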
In such a trap, in addition to its cyclotron motion in the (x, y) plane, a trapped electron thus exhibits a slower, O(MHz) axial oscillation as it explores the allowed region in z within the trap. In our experimental setup a full event lasts on average approximately 2.5 ms. This axial motion gives rise to a number of rich signal characteristics beyond the instantaneous cyclotron frequency given by Equation 1. In this paper, we summarize our understanding of these characteristics and their impact on frequency reconstruction, and present a machine-learning (ML) track-classification scheme as a first step toward a sophisticated CRES signal analysis. This type of analysis will allow for significant improvement in the energy resolution achievable with CRES, and will be especially beneficial when moving from a monoenergetic krypton source to a continuous tritium source, where proper event reconstruction is of paramount importance. Lastly, we study the impact of our ML classification analysis on the extracted tritium endpoint through simulation.

Data and Signal Basics
The 83mKr source emits internal conversion electrons at several energies, which we divide into three nominal groups (at approximately 17, 30, and 32 keV) according to the atomic shell of the transition [3,4]. Each transition is monoenergetic, up to a natural linewidth of order eV, which is substantially less than the energy resolution of Phase I. The 17 keV data is closest to the tritium endpoint (18.6 keV), and the higher-energy peaks provide important insight into the relative energy dependence of the signal characteristics. Electron signals emitted in the magnetic trap are received by an antenna and processed by a cryogenic receiver chain described in [1]. The signal is down-mixed twice using local oscillators and sampled at 200 MHz by a real-time spectrum analyzer (RSA). The RSA triggers an acquisition when it detects a high Fourier excess, and writes time-domain data for 10 milliseconds per trigger with a pre-trigger time of 1 ms. The acquisition is then processed with a series of DFTs of size 8192 samples (0.04096 ms) to produce a spectrogram like the one in Figure 1, which displays a CRES event. The resulting spectrograms are scanned for high-power bins in collinear groupings called tracks; for an in-depth discussion of the track-finding algorithms and procedure, we refer to [5]. The initial frequency of a track signal is called the start frequency, and it is the start frequency of an electron event that carries the primary energy information needed for spectrum reconstruction. All events considered in this study are subject to a cut requiring the start time of the first track to lie within ±0.25 ms of the pre-trigger time. This retains only events which promptly triggered an acquisition and removes those which triggered only after some delay because the initial track power was insufficient; such low-power tracks often start even before the acquisition window, which prohibits a measurement of the start frequency altogether. Furthermore, in preparing the training set for the machine-learning analysis we consider only the first track(s) in such events. In the example of Figure 1, the second set of tracks would be cut, which is useful in labeling the ground truth, a process described in Section 4; by considering only the first set of tracks in an event we can confidently label separate signal classes using both frequency information and other parameter-space cuts.
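The spectrogram construction described above is a straightforward slicing-and-transforming operation. A minimal sketch follows; the `samples` array is random placeholder noise, not real RSA output.

```python
import numpy as np

# Time-domain data sampled at 200 MHz are cut into 8192-sample slices
# (40.96 us each) and Fourier transformed slice by slice.
fs = 200e6                       # sampling rate (Hz)
nfft = 8192                      # DFT size -> 24.4 kHz bins, 40.96 us slices
samples = np.random.default_rng(1).normal(size=10 * nfft)

n_slices = len(samples) // nfft
slices = samples[: n_slices * nfft].reshape(n_slices, nfft)
spectrogram = np.abs(np.fft.rfft(slices, axis=1)) ** 2   # power vs (time, freq)

freqs = np.fft.rfftfreq(nfft, d=1 / fs)                  # 0 .. 100 MHz
times = np.arange(n_slices) * nfft / fs                  # slice start times (s)
print(spectrogram.shape, f"bin width = {freqs[1]:.1f} Hz")
```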
The Need for Classification
In this section, we summarize our understanding of relativistic electron dynamics in Project 8 traps to motivate the classification scheme and elucidate the resultant properties of reconstructed CRES signals. An analysis that properly extracts and uses the information in these signal properties is key to obtaining a precise, well-understood energy spectrum.

Axial Motion Considerations
We parameterize the axial motion of an electron with the pitch angle θ(t), defined as the angle between the momentum vector and the magnetic field:

cos θ(t) = p_z(t)/p.    (2)

For a trapped electron, p_z has an oscillatory time dependence and therefore so does the instantaneous pitch angle. (The fractional energy loss over the timescale of the axial period is small, so we may treat the total momentum p as a constant here.) By definition, θ = 90° at the turning points of the axial oscillation, where p_z = 0; at the center of the trap, the pitch angle reaches a minimum along with B_z. Thus, the range in z explored by an electron is fully characterized by (a) the trap geometry and (b) the minimum pitch angle. Going forward, we will simply use θ to refer to this minimum pitch angle, rather than the time-varying instantaneous pitch angle θ(t). In fact, there is also a lower limit on the pitch angle due to conservation of energy, which depends on the ratio of the trap depth and the maximum magnetic field; for more on this see [6]. At a nominal 4 mT trap depth the lower bound is about 86 degrees, and it may be higher depending on the chosen geometry.

A detailed mathematical discussion of trapped-electron dynamics and the resultant CRES signal characteristics is outside the scope of this work but may be found in [6]; we adopt their nomenclature for the rest of the paper. There, a phenomenological model is developed using approximate Project 8 magnetic-trap geometries to analytically describe the motion of electrons with θ < 90°. The model includes a short opposite the antenna, as is present in the Phase I detector; the result is an energy- (frequency-) and position-dependent interference effect between the incident and reflected radiation. The power spectrum P(ω) is calculated from the Poynting vector in the axial direction and decomposed as a sum of waveguide modes. For a single mode denoted by λ, we take advantage of the quasi-periodic motion of the electron in the trap to express the power averaged over the axial period with Equation (45) of [6], reproduced here in schematic form:

⟨P_λ(ω)⟩ = P_0,λ Σ_n a_n² δ(ω − Ω_0 − n Ω_a),    (3)

where P_0,λ and every a_n are amplitudes dependent on the magnetic-trap shape and, through the short interference, on z_t + l, the distance from the short to the trap center, and on v_p,λ, the mode phase velocity. This equation describes a comb-like spectrum with power concentrated at a central frequency Ω_0 (the average cyclotron frequency) and at frequencies shifted by integer multiples n of the axial oscillation frequency Ω_a. In the bathtub trap geometry, Ω_a takes a closed form in which the argument of a tangent function includes parameters describing the length and depth of the trap. The pitch-angle dependence of P_λ(ω) comes from Ω_0, Ω_a, and in turn k_λ; in particular, the coefficients a_n(k_λ) describe the relative strength of each peak in the comb structure. Considerable discussion of these coefficients and their calculation for some models is included in [6].
We refer back to Figure 1 in the previous section, now equipped to understand how this example event illustrates the behavior of Equation 3:

(a) The signal takes the form of multiple parallel tracks corresponding to the different values of the frequency-band order n. We call this structure a multi-peak track (MPT).
(b) At approximately 32 ms, the electron scatters with a residual gas molecule and changes the makeup of the MPT; this is consistent with an abrupt change in the pitch angle. In particular, the frequency and power of individual tracks are observed to be pitch-angle-dependent, as expected.
(c) By contrast, the track slope (the change in frequency, i.e. energy, with respect to time) encodes the total radiated power and thus does not vary among the tracks of any one MPT.

The dependence of the individual band power on the pitch angle for n ≤ 2 is shown in Figure 3 for a 32 keV electron in a bathtub trap; it is this individual band power, and not the total power, that corresponds to a single track in the spectrogram. Since high-order bands are never powerful enough for reconstruction, we in general restrict our discussion to only the mainband (n = 0) and one detected sideband order: n = 1 or n = 2, depending on the short interference effect. The dashed line in Figure 3 shows an example detection threshold that is met only by the mainband and the n = 2 sideband, for different but partially overlapping ranges of the pitch angle. This creates three allowed track types based on the pitch angle and the band order:

(i) Mainband, high pitch angle: n = 0, pitch angles closest to 90°.
(ii) Mainband, low pitch angle: n = 0, pitch angles in the lower allowed region.
(iii) Sideband: n ≠ 0 (here n = 2).

Since the short interference is a wavelength-dependent effect, we expect the specific nature of the allowed pitch-angle regions to vary with frequency. Indeed, in Project 8 Phase I the 32 keV tracks are well described by Figure 3 and the three cases above; but the story is very different at 17 keV, where the mainband is almost completely suppressed and we detect only the n = 1 sidebands. For further discussion of this effect, in which the n = 1 sideband is visible, we refer the reader to Section VII of [6]. This exemplifies the powerful influence of the short, and the importance of understanding sideband effects.

Figure 3: Left: band powers for n ≤ 2 as dependent on pitch angle due to the rectangular waveguide with a short. For a threshold as shown, only the mainband and 2nd-order sidebands surpass the detection threshold, while the 1st-order sideband is suppressed. Right: As a result, the slope of the mainband, which is directly proportional to the total detected power, suffers a discontinuity in frequency.

Radial Gradient Effects
So far, we have treated the axial motion as independent of the (x, y) plane on the basis that the magnetic field varies only with z, i.e. ∇B is always parallel to B. However, to improve our description of the electron dynamics we must consider a small radial gradient of the form ∇B × B. (In fact, the assumption that ∇B × B = 0 contradicts the Maxwell equations and is thus clearly unphysical.) This causes the guiding center of the cyclotron orbit to precess slowly (compared to the axial motion) in x and y, which in effect perturbs the one-dimensional magnetic-trap profile B_z(z) with a slow time dependence. A small anti-symmetric tipping of the trap coils at an angle ψ from ẑ is the simplest way to recreate this ∇B × B perturbation in simulation. Consequently, this gradient induces a periodic drift in the axial frequency Ω_a, of the schematic form

Ω_a(t) ≈ Ω_a,0 [1 + (ψ r / l_1) sin(Ω_m t)],    (4)

where r is the radial position of the electron, l_1 is the characteristic trap length and Ω_m is the drift frequency.
The effect of this precession on the track signal now becomes clear: if the axial frequency varies sinusoidally, so will the frequency of the n > 0 (sideband) tracks. This oscillation has been observed in Phase I data, where it manifests as a track with an appreciable "width" in frequency; Figure 4 shows an example sideband track with this quality. Since the observed period of the precession is ∼100 µs, which is comparable to the DFT length (40.96 µs), the oscillation can be seen to some extent directly in the spectrogram.

This observed frequency oscillation represents one of the primary motivations for a machine-learning approach to signal classification. Its effect on the spectrogram is clear, as illustrated in Figure 4, and it is a property unique to sideband tracks, which demonstrates its power to discriminate them from mainbands. By extracting information from the spectrogram around a track, we can then apply machine-learning techniques to identify sideband oscillation when it is present and label the track accordingly.

Figure 4: Close examination of a spectrogram with a sideband signal. The frequency oscillates due to radial gradients in the magnetic field, depositing power over a range of roughly 500 kHz, most concentrated at the turning points of the oscillation. This oscillation corresponds to the apparent thickness of the track.

Energy Correction
The cyclotron frequency Ω_c of an MPT structure can be calculated using the phenomenological model of [6] if both the reconstructed mainband frequency Ω_0 and the pitch angle θ of the electron are known. The result is in effect an energy correction, in which the kinetic energy (and thus, from many events, the energy spectrum) is calculated from the true cyclotron frequency rather than from the frequency of any one reconstructed track. It is the end goal of the classification scheme to accomplish exactly this: first the identity of the mainband track must be established, and then the pitch-angle information extracted. We may extract the pitch angle in two ways:

(i) Axial frequency: for MPTs with a mainband and one or more sidebands, the axial frequency Ω_a is the frequency difference between the mainband and a sideband track divided by the sideband order n. Ω_a may also be determined the same way from an event with multiple sidebands but no mainband. In Phase I, these cases comprise a minority of our data, about ∼10%.
(ii) Track slope: for MPTs with only a mainband, the pitch angle may be extracted from the track slope, which is proportional to the total radiated power in Equation 3. Such cases comprise the majority of the data used for this work, about ∼90%.

The second method listed above has an ambiguity that must be addressed: in general, the track slope alone does not uniquely determine the pitch angle (this is evident from Figure 3). To resolve this issue, we must also differentiate between mainbands of the high and low pitch-angle regions. Our task is then to assign every track an appropriate topological label from the list in Section 2.1: mainband high pitch angle, mainband low pitch angle, or sideband. Classification into these three groups will allow for an accurate measurement of the mainband frequency and the pitch angle, and in turn the true kinetic energy can be determined from the cyclotron frequency; the decision logic is sketched below.
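The following Python sketch shows the route selection between methods (i) and (ii). The two mapping functions are crude stand-ins for the phenomenological model of [6]; the numbers inside them are placeholders, not calibrations.

```python
# Schematic decision logic for the two pitch-angle extraction routes.
def pitch_from_axial_frequency(f_a_hz):
    # Stand-in for inverting Omega_a(theta): a higher axial frequency
    # corresponds to a lower minimum pitch angle. Illustrative only.
    return 90.0 - (f_a_hz / 1e6) * 0.15              # degrees

def pitch_from_slope(slope_hz_per_s, branch):
    # Stand-in for the double-valued slope-to-angle mapping of Figure 3;
    # the classifier's high/low label selects the branch. Illustrative only.
    base = 90.0 - slope_hz_per_s / 1e10
    return base if branch == "high" else base - 2.0  # degrees

def pitch_angle_of_event(tracks):
    """tracks: list of dicts with keys 'label', 'f0', 'slope', 'order'."""
    mainbands = [t for t in tracks if t["label"].startswith("mainband")]
    sidebands = [t for t in tracks if t["label"] == "sideband"]
    if mainbands and sidebands:                      # route (i), ~10% of events
        f_a = abs(sidebands[0]["f0"] - mainbands[0]["f0"]) / sidebands[0]["order"]
        return pitch_from_axial_frequency(f_a)
    if mainbands:                                    # route (ii), ~90% of events
        branch = "high" if mainbands[0]["label"] == "mainband_high" else "low"
        return pitch_from_slope(mainbands[0]["slope"], branch)
    raise ValueError("no mainband reconstructed; cannot correct the energy")

event = [{"label": "mainband_high", "f0": 1.120e9, "slope": 3.5e8, "order": 0},
         {"label": "sideband", "f0": 1.165e9, "slope": 3.5e8, "order": 2}]
print(f"pitch angle ~ {pitch_angle_of_event(event):.1f} deg")
```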
We approach this task with a machine-learning model that uses a supervised learning method for classification. The overarching goal of the classification program is to use only those track features that are intrinsic to the signal itself and, through their inclusion, to improve the accuracy and robustness of the signal identification. As Project 8 moves to a tritium source, such a classification scheme will be vital to make an accurate measurement of the continuous spectrum and reach meaningful conclusions about the endpoint. In the remaining sections, we develop the classification scheme and present the results on Phase I krypton data at all three energies of interest. The next steps of pitch-angle calculation and the resultant energy correction are not yet implemented, but are of course a primary focus of future work to realize the full potential of track classification. We will also discuss the future impact of a more developed classification process in the context of tritium endpoint sensitivity and full event reconstruction.

Signal Analysis and Feature Extraction
The track features which we use for classification result from two separate analysis techniques: primary track finding and the rotate-and-project algorithm. These methods give us a total of 14 parameters that we use in training the ML classification model. In this section, we describe the calculation of these parameters.

Primary Track Parameters
Primary track finding is the process of collecting high-power spectrogram bins into linear track signals, described thoroughly in [5]. At its conclusion, several parameters are formally calculated to describe each track candidate, including the slope, start frequency, time length, and many others. We use the following three quantities as inputs to the classification model:

• TotalPowerDensity (W/Hz): the sum of power spectral density values in all bins that comprise the track cluster.
• TrackSlope (Hz/s): the slope of the track as extracted from regression analysis and a Hough transform [7].
• TimeLength (s): the difference between the track end time and start time.

Recall that the slope of a track is directly proportional to the total power emitted by the electron, and the slope and individual track power together determine the pitch-angle information, as illustrated in Figure 3.
Thus, the correlation between slope and track power has strong power to discriminate between regions of high and low pitch angle. Figure 5 illustrates this correlation for 32 keV electrons and shows a good separation of the two mainband populations, in agreement with our understanding of the physical process from the phenomenological model. In this figure the sideband events populate the low-PSD range across all slopes. This provides a clear motivation for the use of these two features as inputs to the classification model. The utility of the track length is primarily based on its correlation with the track power as well. Mainband tracks in general have a strong profile, with power concentrated in one or a few of the frequency bins at each time slice. By contrast, sideband tracks often have power distributed across many bins due to the effect of the radial magnetic-field gradient discussed in Section 2.2. Sidebands also contain less power overall in the case of the 30 and 32 keV peaks. Consequently, the track power can depend considerably on the number of points which comprise the track, and the track length helps to bolster the discrimination ability between all three types in conjunction with the slope and power.

Rotate-and-Project Distribution
Radial magnetic-field gradients create a sinusoidal variation in the axial frequency, effectively smearing sideband track power over several frequency bins while leaving the mainband track untouched. To extract a set of parameters from the spectrogram that quantify this difference, we first simplify the problem. After primary track finding, spectrograms with a known track are reprocessed with a "Rotate-and-Project" operation, where they are effectively projected along the axis perpendicular to the track. This reduces the analysis to one dimension (the projected spectrum) while preserving the most useful information about the track from the full spectrogram. The precise procedure is as follows:

(i) The known track is characterized by a slope q and intercept f_0.
(ii) The full spectrogram is reduced to a sparse spectrogram of only those points that have SNR > 4.0 and that lie within the time bounds of the track. These points are described by (t_j, f_j), where t denotes the time coordinate, f the frequency coordinate and j the point index.
(iii) The projected spectrum s at bin k is calculated as a function of the intercept, which we call β_k and which sweeps the range f_0 ± ∆f in discrete steps of δβ (both ∆f and δβ are runtime-configurable parameters):

s_k = Σ_j exp[ −(f_j − q t_j − β_k)² / (2σ²) ],    (5)

where β_k = f_0 − ∆f + k δβ, q is the track slope, and σ is another runtime-configurable variable which describes the resolution of the spectrum. This calculation is a kernel density estimation with a Gaussian kernel and bandwidth σ.
At a minimum, σ should reflect the inherent uncertainty in each point location, which is roughly the bin size; this way, the spectrum is not strongly affected by how precisely the choices of ∆f, δβ, or the reconstructed value f_0 coincide with the discrete binning of the spectrogram. σ can also be made much larger than the bin size, in which case the projected spectrum gains sensitivity to structures which span a similarly larger bandwidth; however, to retain good sensitivity to the sharp mainband tracks, we keep σ similar to the bin width. There is no advantage to matching it exactly to the bin size, so for convenience we choose σ = 50 kHz, which is approximately 2 bins. For the step size, we choose δβ = 25 kHz, or half the resolution σ and approximately 1 bin. The only requirements on the sweep range 2∆f are that it should be many times larger than the frequency bin size and, at a minimum, large enough to capture the full amplitude of the sideband oscillation (∼1 MHz). We choose ∆f = 4 MHz.

Figure 6 shows typical projected spectra corresponding to a mainband and a sideband track. The qualitative differences between the two remain clear: sideband spectra typically feature a wide double-peak structure, in contrast with the sharp, high-amplitude profile of the mainband spectrum. The sideband spectrum amplitude is largest near the edges of the signal region because the axial motion is slowest at its turning points, thus depositing more power per bin; this effect can also be seen in the spectrogram (Figure 4). Next, we use the ROOT library TSpectrum [8] to characterize peaks in the projected spectrum. This library fits a linear background b_k = ak + b and labels a point k as a peak if it meets all of the following criteria:

(i) its value is at least twice that of the background level, s_k ≥ 2 b_k;
(ii) its amplitude meets or exceeds a minimum fraction r of the highest peak, s_k ≥ r max_j s_j.

TSpectrum additionally takes an expected peak width of m bins; the frequency range corresponding to m bins is m δβ. We choose r = 0.4 and m = 5. The values of these and the other configurable parameters from Equation 5 are listed in Table 2. Once the peak locations are determined, the full spectrum is fit to a sum of n Gaussian functions, where n is the number of peaks found. Only a handful of tracks in our studies produced a spectrum with 3 or more peaks, and none with more than 6. From the spectrum s_k and the results of the Gaussian fit, we extract a total of 11 additional parameters for use with the classifier:

• Average, RMS, Skewness, Kurtosis: the first four statistical moments of the spectrum s_k. Average and RMS are in units of MHz, and Average is shifted by f_0 so that 0 corresponds to the center of the spectrum.
• MeanCentral, SigmaCentral, NormCentral, MaximumCentral: extracted fit parameters of the Gaussian with mean closest to f_0 (the most central peak). MeanCentral is shifted by f_0 as described above, and MaximumCentral is the height of the fitted Gaussian at its mean, NormCentral/(√(2π) SigmaCentral), above the background level b_0 at the peak location. Both MeanCentral and SigmaCentral are in units of MHz.
• NPeaks: the number of peaks found by TSpectrum for the Gaussian fit.
• CentralPowerFraction: the average value of the bins within 3 times SigmaCentral of the most central peak, divided by the average value of all points in the spectrum.
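A minimal Python implementation of Equation 5 with the parameter choices above follows. The toy track points are synthetic placeholders, not spectrogram data from the analysis.

```python
import numpy as np

# Sketch of the rotate-and-project operation (Equation 5): points above the
# SNR cut are reduced to their track intercepts f_j - q*t_j and accumulated
# into a Gaussian kernel density estimate of resolution sigma.
def rotate_and_project(t, f, q, f0, delta_f=4e6, dbeta=25e3, sigma=50e3):
    """Return (beta_k, s_k): the spectrum projected perpendicular to a
    track with slope q and intercept f0."""
    beta = np.arange(f0 - delta_f, f0 + delta_f + dbeta, dbeta)
    intercepts = f - q * t                           # per-point intercepts
    s = np.exp(-(intercepts[None, :] - beta[:, None]) ** 2
               / (2 * sigma ** 2)).sum(axis=1)
    return beta, s

# Toy track: 50 points on a line of slope 0.5 MHz/ms plus frequency jitter.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 2e-3, 50))                # s
f = 1.12e9 + 0.5e9 * t + rng.normal(0, 20e3, 50)     # Hz
beta, s = rotate_and_project(t, f, q=0.5e9, f0=1.12e9)
print(f"projected peak at {beta[np.argmax(s)] / 1e6:.2f} MHz")
```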
In the event that the Gaussian fit fails to converge, all of the parameters that depend on it are obviously unreliable, and some can be undefined. To circumvent this, we simply remove from the analysis any tracks that have an unsuccessful fit, or which have any parameters left undefined as a result of an improper fit. These represent between 5−10% of all tracks in the various data sets we have used. Combining these with the track parameters discussed in the previous subsection, we have a total of 14 parameters to use for machine-learning-based classification. The slope, power, and track length have the ability to distinguish between all three track topologies based on the phenomenological model; the projected spectrum provides an additional 11 parameters which distinguish sidebands from the two mainband topologies. In the next section we discuss the implementation of this new analysis.

Supervised Classification
We take a machine-learning approach to signal classification across the 17, 30, and 32 keV data sets (see Section 1.1) using a Support Vector Machine (SVM) [9] classifier optimized via supervised learning. The result of training the SVM is a nominal decision function that takes data points (track parameters) as 14-dimensional vector inputs and predicts a class label with a given accuracy and precision. In this section we briefly discuss the overall training scheme, with details left to Appendix A.

To obtain the training, cross-validation, and test sets over which the classifier is optimized, we make a series of parameter-space cuts in the data. The first cut is on the start time, as discussed in Section 1.1, which selects only tracks that promptly trigger the RSA acquisition. From these, we assign ground-truth labels using two independent fits to the phenomenological model: first, we fit the predicted behavior of the slope with respect to the start frequency, with the energy assumed to be known exactly. Points within a fixed Euclidean distance, in slope-frequency space, of the model predictions are labelled as main carriers, either high-θ or low-θ accordingly. Second, we label the remaining sideband tracks using the relative track power with respect to frequency, again with the energy fixed. Tracks outside the inclusion regions for every label are simply discarded. This yields a total of 7,347 tracks for optimization. Labeled tracks are further split 67%/33% into training (training and cross-validation together) and test sets, respectively. In performing the split, we keep the relative ratios of the classes in mind so as not to introduce biases during training.

It is worth noting briefly that although a ground-truth labelling informed by a proper simulation would likely be more desirable, our understanding of these pitch-angle effects within the phenomenological model was very new at the time, and our simulation tools were not equipped to incorporate them easily. We have high confidence in the accuracy of the training-set labels as described here, so the concern is a minor one. Furthermore, the method described above for labeling is suitable only for training purposes and not for classification, since in generating the fits to the selected subset of data we assume that the energy is known exactly (with a small margin of error represented by a Euclidean distance in the respective parameter space). In the final data, which we wish to classify, this assumption is not valid for all the tracks we reconstruct.
The implementation of the classifier is performed with the Python-based ML library Scikit-learn [10]. In Scikit-learn, template SVMs are implemented by a Cython wrapper around the powerful library LIBSVM [11]. In training the SVM classifier we simultaneously optimize the model's hyperparameters C and γ, which influence bias and overfitting: C sets how strongly the SVM penalizes misclassification relative to model smoothness (influencing overfitting), and γ sets the range of influence of individual training points on the decision boundary (influencing bias and variance). We then test the performance of the optimized model on the test set, using the accuracy and the Area Under the Receiver Operating Characteristic (AUROC) as performance metrics. For our multi-class application (two mainband classes and one sideband class), we average the individual Receiver Operating Characteristic (ROC) curves to report the overall AUROC metric.

Results

Here we report the results of the track classifier as trained on different combinations of the 83mKr line groupings discussed in Section 4. We will show that the optimized SVM classifier can distinguish the three different track topologies with great accuracy and robustness, allowing us to obtain clean spectra across all energy ranges.

Narrowband Classifier

In the case of a tritium spectrum, the region of interest will be a window spanning approximately 4 keV (200 MHz) around the endpoint value Q = 18.6 keV. A classifier will be necessary to understand the CRES signal in this region if sidebands are present and, overall, if energy corrections are to be applied in an event-by-event fashion. With 83mKr as a calibration source gas, we may first study the classifier results by training our model on the 30 keV peaks and applying it to both the 30 and 32 keV peaks simultaneously. This 2 keV energy separation¶ serves as a test of classifier reliability across an energy range similar to the tritium window; we call this configuration the narrowband classifier.

The optimization outlined in the previous section (and in detail in Appendix A), picked at a cross-validation accuracy of 92.0 ± 0.8% as a mean over 3 folds, yields the optimized SVM hyperparameters. These values are indicative of a "smooth" model (γ ≪ 1) which captures little of the data complexity in the feature space, but this is balanced by the large value of C ≫ 1, which allows for highly nonlinear terms in the loss metric minimization, recovering some complexity in the decision plane.

The respective test set accuracy on the 30 keV range is 91.2%. We also generate the ROC curves for each class and average them, as shown in Figure 7. Across all classes we observe an AUROC over 0.9, which indicates that our model does very well at separating any individual type of track from the rest. The ROC curve for low pitch angle mainbands (in pink) has the lowest AUROC, most likely due to its small population relative to the other two classes; this matters at the One-vs-Rest level, where that class is pitted against the other two simultaneously. Both average curves achieve an AUROC over 0.960, putting us in an excellent range of model stability to complement the high test accuracy obtained.
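The joint optimization of C and γ by cross-validation can be sketched with Scikit-learn's GridSearchCV as below. The grid values shown are placeholders, since the text does not list the actual search ranges; the scaling step anticipates the standardization described in Appendix A.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# RBF-kernel SVM with a joint search over C and gamma, scored by 3-fold
# cross-validation accuracy as in the text. Grid values are illustrative only.
param_grid = {
    "svc__C": [0.1, 1, 10, 100, 1000],
    "svc__gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1],
}
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
search = GridSearchCV(pipe, param_grid, cv=3, scoring="accuracy")
search.fit(X_train, y_train)

print(search.best_params_, search.best_score_)   # cross-validation accuracy
print(search.score(X_test, y_test))              # test-set accuracy
```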
In Figure 8(a) we see the resulting classified track start frequency spectrum in the 30 keV range, the range on which this classifier was trained. We observe a clean separation of mainband tracks in blue (high pitch angle) and pink (low pitch angle) from sidebands in yellow, which are mostly concentrated above 1150 MHz. The broad peak between 1160 and 1170 MHz corresponds to upper sidebands of 2nd order, giving a rough estimate of the axial frequency f_a ≈ 22.5 MHz; the n = 1 sidebands are predicted to be suppressed due to the short effect discussed in Section 2.1. To see the separation between mainbands of different types more clearly, we study the slope populations in the frequency range 1118−1123 MHz. The overlap of high and low pitch angle mainbands in frequency space seen in Figure 8(a) around the low pitch angle peaks is now apportioned, as is evident in Figure 9(a). The blue scatter points in the region around the low pitch angle peaks constitute true high pitch angle carriers whose true start frequency was missed during primary track reconstruction. This separation allows a single-valued reconstruction of the pitch angle for a mainband of a given slope; recall that energy corrections may be performed once the pitch angle information is available.

¶ Including upper and lower sidebands the energy range is wider, at approximately 3.25 keV.

We also apply the narrowband classifier to the 32 keV group and obtain an accuracy of 92.8%, which surpasses the test set score. As can be seen in Figure 8(b), the frequency spectrum sports a clean separation between mainbands and sidebands; in this case both upper (around 1090 MHz) and lower (around 1010 MHz) sidebands are visible. The short peak around 1040 MHz has also been classified as a mainband with its respective low pitch angle tail. This corresponds to the 32.14 keV krypton line, which would be extremely difficult to spot by eye, as its relative intensity is very low. The mainbands of the 31.9 keV lines are separated neatly in slope-frequency space, as seen in Figure 9(b), once again opening the way for possible energy reconstruction through pitch angle extraction. The success of the narrowband model at 32 keV suggests that a similar technique could be applied to the tritium endpoint region with training on the 17 keV krypton peak.

Optimal Set of Classification Features

We can further improve the performance of the narrowband SVM model by examining how useful each parameter is for accurate classification. An exhaustive search of the unique combinations of features from the 14-dimensional feature space results in 2^14 − 1 = 16,383 iterations of the training algorithm. We train a SVM and evaluate the accuracy and the AUROC for each feature subset. The subset with the best overall performance may have improved accuracy compared to the full feature set, and will require fewer computational resources to train on a large data campaign.

To evaluate each subset SVM with a single metric, we use the sum in quadrature of the accuracy and the AUROC:

∆_opt = √(x² + y²),

where x is the accuracy of the model and y is the AUROC. The maximum possible value of ∆_opt is √2 ≈ 1.414. Maximizing this combined metric allows us to assess both the model stability and its predictive power simultaneously.
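A sketch of this exhaustive subset scan follows; `make_classifier` is a stand-in for the scaled RBF SVM above, and the loop cost (16,383 trainings) is the price of exhaustiveness. All names here are illustrative, not the collaboration's code.

```python
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def delta_opt(accuracy, auroc):
    """Combined metric: accuracy and AUROC summed in quadrature (maximum sqrt(2))."""
    return np.hypot(accuracy, auroc)

results = {}
n_feat = X_train.shape[1]                       # 14 features
for r in range(1, n_feat + 1):
    for subset in itertools.combinations(range(n_feat), r):
        cols = list(subset)
        clf = make_classifier()                 # stand-in for the scaled RBF SVM above
        clf.fit(X_train[:, cols], y_train)
        acc = clf.score(X_test[:, cols], y_test)
        scores = clf.predict_proba(X_test[:, cols])
        auroc = roc_auc_score(y_test, scores, multi_class="ovr", average="macro")
        results[subset] = delta_opt(acc, auroc)

# Pick the best subset, discarding single-feature models as in the text.
best = max((s for s in results if len(s) > 1), key=results.get)
```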
Perhaps unsurprisingly, the value of ∆_opt is in general large for subsets with many features; a complex model has a greater capability for increased performance. However, some single-feature models are able to achieve high values of ∆_opt as well. The Average parameter exemplifies this: alone, it yields a model with 90.1% accuracy and ∆_opt = 1.314. Of course, models with a single classification feature run the risk of lacking enough variance to generalize effectively, so we discard them as viable candidates. A global maximum of ∆_opt = 1.359 is achieved with a model utilizing 6 final parameters: TotalPowerDensity, TrackSlope, TimeLength, MeanCentral, NormCentral, and MaximumCentral. This amounts to an accuracy of 94.9%, an increase of 3.7% compared to the original result in Section 5.1, and an AUROC of 0.97 on the test set.

17 keV Peak: Sidebands and Energy Dependence

The 17 keV peak presents a unique problem for the classifier; as discussed in Section 2.1, the CRES signal power distribution across the sideband spectrum is energy-dependent as a consequence of the waveguide short. We have come to understand that the 17 keV Phase I data studied here consists almost entirely of pairs of 1st-order sideband tracks, due to a large suppression of the mainband peak from the short interference effect. We have studied more closely the region between the 17 keV sideband peaks and found an excess of power corresponding to an average SNR of 1.26; much too low for track reconstruction, but adequate in combination with other studies to confirm the sideband hypothesis. Consequently, we cannot train the classifier on 17 keV tracks, since we have no mainband tracks to offer it (a small fraction, ∼ 1%, of observed tracks at 17 keV are hypothesized to be genuine mainband signals from shake-off electrons).

Instead, we can first use the same classifier that was trained on 30 keV to evaluate the 17 keV tracks, and the results are subpar, with an accuracy of 75.5%. While a sizeable portion of tracks are still classified properly, we are confident that nearly all of those classified as main carriers are incorrect. However, this result is not unexpected; the power and slope correlation discussed in Section 3.1 is also energy-dependent, and from Section 5.2 we now understand these two parameters to be among the most decisive in the classification scheme. In the training scheme, the classifier becomes familiar with the 30 keV power-slope correlation, and this has a considerable negative influence when applied to the 17 keV peak, where the true power-slope correlation is different. It should be noted that this effect is also present at 32 keV when trained on 30 keV, but from the comparable accuracy scores and AUROCs we conclude it is insignificant for this small energy difference, much to our advantage.

Wideband Classifier

To improve the classifier performance on 17 keV data, we consider two approaches: first, we use only the rotate-and-project parameters, which have no energy dependence; second, we train and evaluate the 14-dimensional classifier simultaneously on all three peaks.
With the rotate-and-project parameters only, we re-train and test the classifier on 30 keV data, and apply it at 17 keV. In the test set (30 keV), the total accuracy decreases to 86.1%; this is reasonable, given that the slope and power information, which was especially useful in discriminating between the two mainband types, is missing. When applied at 17 keV, we observe an accuracy of 78.9%, a modest improvement over the narrowband model (75.5%). The ratio of the 17 keV accuracy to that at 30 keV is more substantially improved: 0.917, compared with 0.829 for the narrowband. This suggests, as we hypothesized, that energy-dependent parameters are partially responsible for the shortcoming of the narrowband classifier at 17 keV. However, the rotate-and-project-only performance is still far from ideal, and poor compared to the narrowband model at 30 and 32 keV, so it does not provide a satisfactory alternative.

The second approach considers all three energy ranges simultaneously for training; we call this model the wideband classifier. The 17 keV classified spectrum from this model is shown in Figure 10. It is immediately clear that this model has by far the best performance at 17 keV, with an accuracy of 96.1%. The largest population of mainband tracks is broadly centered about ∼ 1750 MHz, which is at least 30 MHz above the (suppressed) mainband peak. Informal inspection of the slope-power correlation among these tracks, as well as of the individual spectrograms, suggests they are indeed most consistent with true mainband tracks. We interpret this as evidence for satellite shake-up/shake-off electrons [12], to be further investigated. Similar broad peaks of mainbands in the 30 and 32 keV ranges have also been observed at a similar separation in frequency (see Figure 8). Overall, the 30 and 32 keV lines themselves also show improved accuracy scores compared to the original narrowband model with the full feature set: 92.3% (+1.1%) and 95.6% (+2.8%), respectively.
Discussion and Outlook

We now have two candidate classifier models: the optimal-feature-set narrowband model and the wideband model. Since the classifier will be used for future tritium analyses, we keep the context of tritium data in mind when discussing the advantages of each. The overall accuracy scores and AUROCs for each classifier are summarized in Table 3.

In the previous section, we improved the classification accuracy at 17 keV by training on all three peaks simultaneously (the wideband model); this model also boasted percent-level improvements to the accuracy at 30 and 32 keV. We understand that the narrowband classifier performed poorly at 17 keV due to energy-dependent correlations between parameters and the lack of appropriate training at 17 keV. In a tritium analysis, the data acquisition will be confined to not more than ±2 keV around the endpoint; this includes the 17 keV krypton peak, which can be used for magnetic field calibration. Therefore, if we construct a narrowband model in this context − trained on 17 keV krypton data and applied to tritium data from approximately 15−19 keV − it is reasonable to expect a performance similar to that of the current narrowband model at 30 and 32 keV. Although the narrowband classifier has not been directly trained and evaluated in the tritium endpoint energy range, we have high confidence in its applicability there in the future. The overall results of the various classification approaches and studies have yielded great improvements to our understanding of the data, giving us this confidence.

We also saw in the previous section that choosing an optimal subset of the classification features improved the narrowband model accuracy and AUROC metrics at the percent level, bringing them to a point comparable to the wideband model. In the next section, we will also examine the effect of imperfect classification on the resultant tritium spectrum. Upon comparison of the performance of a narrowband 30 keV model and the wideband model applied to 30 and 32 keV data, we see that the wideband model gives only marginal improvement over a narrowband model trained on a close neighbor. Since the tritium signal spectrum has a close-neighbor calibration source in the 17 keV krypton line, it is not necessary to bring in the complexity of a wideband model. The sideband problems which prevented a version of the narrowband model from being trained at 17 keV in this work are expected to be ameliorated in hardware at later phases. However, in the eventuality that they remain a challenge, we have shown that a wideband model can achieve good performance as well, if it is trained properly.
Future Applications in Event Reconstruction

The signal from a single electron in general takes the form of many reconstructed tracks, via (a) sideband power deposition and (b) scattering interactions with residual gas molecules. The sideband comb structure creates a group of parallel tracks, as discussed in Section 2.1, and discrete energy loss from a scattering interaction creates a "jump" in the signal frequencies and in the pitch angle. After track finding, those tracks that belong to the same event (electron) are grouped together to obtain the start frequency of the event as a whole. In the current event building scheme, this is accomplished with two stages in sequence, corresponding to the two items above. First, individual tracks are combined into MPT objects based on a coincidence between start and end times. Many such MPTs are then joined into a single event using a similar coincidence check on the timestamps, this time a head-tail comparison. A full treatment of this process is given in [13]. However, the present event builder (with no classifier information) makes no statement about the identity of the mainband within a MPT structure; the start frequency of an event is simply defined as that of the first track (in time) within the first MPT of the event sequence. The classification scheme thus creates the potential for a more intelligent event building procedure which takes advantage of the labeled topologies to determine the true main carrier start frequency of an event.

One simple improvement is to utilize the classification labels to add consistency checks in event building. A MPT structure should logically contain no more than one mainband, and the start frequency of the MPT should be determined by this mainband track alone (if present). MPTs with two or more sidebands may also be checked to ensure the frequency spacing between them is consistent with a unique axial frequency. If a mainband track decreases in frequency after a scatter, indicating a sharp increase in the pitch angle, the accompanying decrease in the axial frequency of the candidate sidebands may also be used as a check. These examples are only some of the many possibilities in which classification labels can enhance our reconstruction process; a minimal sketch of such checks is given after the examples below.

In Figure 11 we show some typical interesting event topologies which could benefit from an event builder that utilizes the classifier results:

• Top left: An event with a scatter that changes the pitch angle from the high to the low region, according to the classifier. A faint sideband is also visible above the mainband after the scatter, but it was not reconstructed. It is interesting to note that this change in topology would be very difficult for a human labeller to identify, but is clearly easy for the classifier.

• Bottom left: A MPT with two lone sidebands. In this case, we can reconstruct the hidden mainband start frequency from the axial separation and determine the pitch angle correction.

• Bottom right: A MPT that the classifier has identified to contain two mainbands. This indicates an error, either in the classification or in the MPT construction. To address this, we might consider the relative probabilities that either track is in fact a sideband (i.e., work event-building information into the classifier), or simply discard the event if it cannot be made sensible.
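The sketch below illustrates the consistency checks described above. It is not the Project 8 event builder: the data layout, labels, and the frequency tolerance are our assumptions, and the lone-sideband case assumes the two sidebands form a symmetric pair about the hidden mainband.

```python
def mpt_start_frequency(tracks):
    """Classifier-aware sanity checks on a multi-peak-track (MPT) structure.

    tracks: list of dicts with keys 'label' ('mainband_high', 'mainband_low',
    or 'sideband') and 'f_start' (MHz). Returns the MPT start frequency,
    or None if the structure is inconsistent.
    """
    mainbands = [t for t in tracks if t["label"].startswith("mainband")]
    sidebands = sorted(t["f_start"] for t in tracks if t["label"] == "sideband")

    if len(mainbands) > 1:
        return None                          # an MPT should hold at most one mainband

    if mainbands:
        return mainbands[0]["f_start"]       # mainband alone defines the start frequency

    if len(sidebands) >= 2:
        # Spacings between consecutive sidebands must match a single axial
        # frequency f_a; 0.5 MHz is an illustrative tolerance.
        gaps = [b - a for a, b in zip(sidebands, sidebands[1:])]
        if max(gaps) - min(gaps) > 0.5:
            return None
        # Hidden mainband reconstructed midway, assuming a symmetric +/-n pair.
        return 0.5 * (sidebands[0] + sidebands[-1])

    return None
```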
An improved event builder that works in tandem with the classifier is crucial for proper reconstruction of a continuous tritium spectrum using CRES. Complex event topologies and sideband proliferation from lower energies in the spectrum continuum will demand a sophisticated understanding of the underlying nature of tracks. For example, with atomic tritium, assuming 1 × 10^18 atoms/m^3 and a cylindrical voxel†† of 1 cm in diameter and 10 m in length, we expect 3.14 × 10^15 atoms/voxel. Then, given an activity of 5.6 × 10^6 Bq per voxel for the entire spectrum, and events with, on average, ten tracks of length 80 µs each, we expect about 1.69 events (about two events) present at all times when looking at a 1 keV window below the endpoint, where only a 2 × 10^−4 fraction of the activity is present. With two possibly overlapping events in a given spectrogram, the use of an accurate classifier will be decisive in identifying and separating the constituents of each; we expect that the model presented here is a significant step toward that success.

In Figure 12 we outline the analysis steps discussed in this work, including the future work regarding (a) the event builder, as discussed in this section, and (b) pitch angle corrections to extract the true cyclotron frequency.

Effects of Misclassification on Tritium Spectrum

Along with the improved event builder, a classifier helps us reuse or remove all but the misclassified sidebands at a high confidence level. However, it is still important to study the effect of sideband proliferation through misclassification on the tritium spectrum. Using the Morpho [14] interface to perform Hamiltonian Monte Carlo simulations with the Stan package [15], we model the electron kinematic variables and compute the detected track frequencies according to the discussion in Section 2.1. The kinetic energy is drawn from the tritium spectrum probability distribution function with an endpoint of exactly Q = 18600 eV and zero neutrino mass. The power in each of the mainband and the pair of n = 2 sideband tracks is then calculated for a circular waveguide with the same bathtub trap configuration as for the data used in this paper. For those electrons that become trapped, a uniform detection threshold is enforced on each track. The detected tracks are collected into a mainband spectrum, denoted p_0(E), and a sideband spectrum, denoted p_2(E)‡‡. Here, E represents the kinetic energy inferred from the detected track frequency, which in the case of sideband tracks constitutes an erroneous reconstruction; this allows us to study the effects of both misclassification and the lack of pitch angle corrections. It is important to note that this simulation serves only as a toy model and does not reflect the complete design of the Project 8 detector in Phase I (from which the data presented in earlier sections was taken), or in Phase II, which has since reconstructed the first ever tritium spectrum with CRES. However, the toy model is relevant as a means to highlight and evaluate some of the challenges that sideband presence will bring when reconstructing any CRES spectrum.
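The occupancy numbers above can be checked with a short back-of-the-envelope calculation. Note that the quoted 3.14 × 10^15 atoms/voxel corresponds to a 1 cm cylinder radius, and the mean event time span of ∼1.5 ms (ten 80 µs tracks plus inter-track gaps) is our inference, chosen because it reproduces the quoted ∼1.69 simultaneous events.

```python
import numpy as np

# Back-of-the-envelope occupancy estimate for the atomic-tritium toy numbers.
n_density = 1e18                                  # atoms / m^3
radius, length = 0.01, 10.0                       # m (1 cm radius, see note above)
atoms = n_density * np.pi * radius**2 * length    # ~3.14e15 atoms per voxel

activity = 5.6e6                                  # Bq per voxel, whole spectrum
window_fraction = 2e-4                            # fraction of decays in the last 1 keV
rate_in_window = activity * window_fraction       # ~1120 decays/s

event_span = 1.5e-3                               # s; inferred mean event duration
mean_events = rate_in_window * event_span         # ~1.7 events visible at any instant
print(atoms, mean_events)
```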
If the probability for a track to be wrongly classified (either a mainband as a sideband, or vice versa) is uniformly α, then the spectrum of classified mainband tracks is

p_α(E) = (1 − α) p_0(E) + α p_2(E).

With α = 0, we obtain the main-carrier-only spectrum from a "perfect" classifier. Since the energy of each event is calculated from the measured (mainband) frequency and not the cyclotron frequency, we expect to observe a shift of the endpoint to lower energy (higher frequency). Indeed, Figure 13 shows the simulated Kurie plot for perfect classification, α = 0 (in black), and the fitted Q-value,

Q(α = 0) = 18.461 keV,

which deviates from the true input value by 139 eV. In the same figure we show the spectrum with α = 0.5 (random classification); now, the region above the endpoint is contaminated by sidebands. As a result, the endpoint is measured at

Q(α = 0.5) = 19.028 keV,

which exceeds the true value by 428 eV, or roughly 20 MHz: quite similar to the observed axial frequency.

‡‡ Recall that the n = 1 sidebands are suppressed due to the interference effect.

Lastly, we perform the Kurie fit for many values of α to see the dependence of Q in a more continuous form; Figure 14 illustrates these results. Each Q-value (shown as the dashed line) represents the mean of 50 unique spectrum simulations for a fixed value of α; the band in light red spans the standard error of the mean on either side and is dominated by the statistical uncertainty of each simulation. Though the simulations are themselves independent, we use the same set of simulations for all values of α. Thus, the uncertainty band in Q does not reflect the endpoint measurement uncertainty of the simulation, nor, more importantly, of any real Project 8 phase. Comparing to α = 0, we observe that even for α ≈ 10^−2, which is small compared to our demonstrated models, the endpoint shift is significant; a precise requirement or bound on α, however, is again not necessarily transferable to Project 8. Still, we may safely conclude that going forward with the classification models we must hold the highest reasonable standard for accuracy. Our thorough understanding of the trap geometry and related systematics additionally helps to reduce the effects of sideband contamination in future Project 8 phases, through both design and trap configuration.

We have clearly demonstrated with this simulation that both energy corrections and track classification will have a substantial influence on future CRES tritium results, with an observed endpoint shift on the scale of O(100 eV). The design and configuration of future phases of Project 8 are guided in part by the goal to suppress detectable sidebands and achieve sub-percent level misclassification. Incorporation of this improved track classification and of pitch angle considerations will enable a CRES experiment like Project 8 to take the next step and achieve a competitive eV-scale endpoint sensitivity.

Conclusions

With the phenomenological model put forward in [6], we have motivated the need for a classification of CRES signal topologies according to the pitch angle distribution of trapped electrons and the sideband comb structure of events discussed in Section 2.
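The mixing formula and a toy endpoint extraction can be sketched as follows. The paper does not specify its Kurie-fit configuration, so the linear-fit window and helper names below are our assumptions; only the mixing itself follows directly from the equation above.

```python
import numpy as np

def classified_mainband_spectrum(p0, p2, alpha):
    """Mix of true mainband and sideband spectra under uniform misclassification alpha.

    p0, p2 : binned mainband / sideband spectra on a common energy grid
    alpha  : probability a track is wrongly classified, in either direction
    """
    return (1.0 - alpha) * p0 + alpha * p2

def kurie(spectrum):
    """Kurie transform K(E) ~ sqrt(spectrum); linear near the endpoint for m_nu = 0."""
    return np.sqrt(np.clip(spectrum, 0.0, None))

def fit_endpoint(E, K, window):
    """Fit a straight line to K(E) inside 'window' and return its E-axis
    intercept, i.e. the energy where K(Q) = 0 (a toy endpoint estimate)."""
    sel = (E > window[0]) & (E < window[1])
    slope, intercept = np.polyfit(E[sel], K[sel], 1)
    return -intercept / slope
```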
With the use of reconstructed signal properties, including the total power, track slope, and analysis of the rotated-projected spectrum (Section 3.2), we have enumerated 14 quantities that have the power to discriminate between the three different track topologies. We have implemented a machine-learning-based classification scheme using a Support Vector Machine algorithm with these 14 parameters, and studied the results of several models on Project 8 Phase I data. With the use of an optimal feature set, we have achieved a model with 94.9% total accuracy and an AUROC of 0.973. This model is trained and applied over a total energy range of approximately 3.5 keV, which gives us confidence in its applicability to tritium analysis (∼ 15−19 keV).

The use of these classification models has already improved our understanding of the Phase I data. It has bolstered our confidence in the phenomenological model and our understanding of the relationship between signal characteristics and trap geometry, and the set of optimal features provides some insight into the nature of tracks, both physically and from the viewpoint of reconstruction. The classified spectra are much more informative than ordinary (unclassified) spectra, and our comparison of different training models has highlighted the energy dependence of some track parameters.

Energy correction − from measured frequencies to true cyclotron frequencies − is another necessary step that will utilize track classification, but this application is outside the scope of this paper. These corrections were discussed briefly in Section 2.3, and in Section 6.2 we showed through simulation that we expect them to be of order 100 eV. Consequently, for an experiment like Project 8 to perform an eV-scale precision measurement of the tritium spectrum in its next phase, it is essential to minimize the impact of sideband effects. Strategies to accomplish this have been pursued already in Phase II and are a key consideration in the development of the Phase III experiment; one such strategy in Phase II almost completely eliminates the presence of detectable sidebands by reducing the frequency modulation index. We also discussed future prospects of an event builder that works in tandem with the classifier, and perhaps through Machine Learning as well. With these types of improvements to the apparatus guided by this work, and a highly robust track/event classification scheme, Project 8 will work toward a Phase III analysis that is greatly advanced and mature compared to earlier phases, and capable of achieving an endpoint measurement with eV-scale precision.

Project 8 has demonstrated the CRES technique, constructed the first-ever CRES tritium spectrum with it, and will soon be looking toward a competitive eV-scale measurement of the neutrino mass limit in Phase III. The work presented here has contributed greatly to our understanding of CRES signals and of the obstacles to an eV-scale sensitivity, and its conclusions have provided us with valuable knowledge of the path toward a highly sensitive measurement. By utilizing the full potential of track classification, we can continue to advance the CRES technique and make valuable steps towards the future ambitions of Project 8: ultra-precise spectroscopy, a decisive measurement of the tritium endpoint, and ultimately a direct mass measurement.

Appendix A.
Supervised Classification Details

The Support Vector Machine (SVM) [9] is a model that, like all ML algorithms, features a unique loss function to be minimized over a set of m training points {x^(i), y^(i)} with the goal of obtaining a fit parameter vector w. The x^(i) in our case are electron tracks, each represented as a 14-dimensional vector of features with a label y^(i) ∈ {0, 1, 2}: 0 for high pitch angle mainbands, 1 for low pitch angle mainbands, and 2 for sidebands. The optimal fit vector w is also 14-dimensional and may be thought of as defining the normal to a hyperplane in a transformed feature space which maximally separates disjoint classes of points. Classification itself is performed by projecting data points x^(i) onto w, following constraints regarding the sign of the projection. Specifically, we employ a SVM with a radial basis function kernel to facilitate classification of our non-linearly separable track topologies. Example slices of the 14-dimensional space in which the SVM trains are shown in Figure A1.

To aid in the minimization of the loss function, we rescale all learning points such that each feature has a mean of 0 and a standard deviation of 1 (now in dimensionless units). In a SVM, classification relies on computing a measure of distance from a point to the decision hyperplane; this objective is aided by the scaling, preventing any feature with a widely different range from biasing the prediction.

A ROC curve is constructed by scanning the classifier decision threshold and comparing the resulting true and false positives; for a given class, a true positive is a track with the correct label and a false positive is a track with any other incorrect label. A random (useless) classifier would produce equal numbers of true and false positives regardless of threshold. With an ideal classifier, every threshold that retains at least one point would yield 100% true positives. In practice, a classifier ROC curve should lie somewhere between these two scenarios, resulting in an Area Under the ROC (AUROC) bounded between 0.5 and 1. The steepness of the ROC curve tells us the power of discrimination, and thus the AUROC is a good quantitative measure of the strength of the model itself.

In order to compute a ROC curve we first binarize the multi-class problem by one-hot encoding of the labels. The metrics are then computed using a One-vs-Rest strategy, where a single class is pitted against the other two simultaneously. To obtain a comprehensive ROC curve, we may take one or both of the following approaches:

• Micro-averaging: count true and false positives in all classes together as a single problem, and then compute the resulting ROC curve.

• Macro-averaging: construct the ROC for each classification type separately, then simply average the results.

With the accuracy score, optimized hyperparameters, and ROC curves defined, we evaluate the performance of the classification scheme.

Figure 2: Diagram of the magnetic bottle trap in a bathtub configuration. Three coils are wound around the rectangular waveguide, whose magnetic fields create a potential barrier along the main axis, where a 1 T background field is present. Electrons are constrained to the low-field region between two trapping coils.
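To make the two averaging strategies from Appendix A concrete, the sketch below assumes y_score is the (n, 3) array of per-class scores from the trained SVM and y_test the integer labels; variable names and the FPR grid resolution are our own choices.

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

Y = label_binarize(y_test, classes=[0, 1, 2])     # one-hot encoding of the labels

# Micro-average: pool all (class, track) decisions into one binary problem.
fpr_mic, tpr_mic, _ = roc_curve(Y.ravel(), y_score.ravel())
auroc_micro = auc(fpr_mic, tpr_mic)

# Macro-average: per-class One-vs-Rest curves, then average the TPRs on a
# common FPR grid before integrating.
grid = np.linspace(0.0, 1.0, 200)
tprs = []
for k in range(3):
    fpr, tpr, _ = roc_curve(Y[:, k], y_score[:, k])
    tprs.append(np.interp(grid, fpr, tpr))
auroc_macro = auc(grid, np.mean(tprs, axis=0))
```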
Figure 3: Left: Power distribution of a 32 keV electron signal in a bathtub-type Project 8 trap as dependent on pitch angle, due to the rectangular waveguide with a short. For a threshold as shown, only the mainband and 2nd-order sidebands surpass the detection threshold, while the 1st-order sideband is suppressed. Right: As a result, the slope of the mainband, which is directly proportional to the total detected power, suffers a discontinuity in frequency.

Figure 5: Power and slope correlations in Phase I 32 keV bathtub trap tracks (black scatter). The phenomenological model fit is overlaid for high (blue) and low (pink) pitch angle carriers, which demonstrates the well-resolved separation of mainband populations with disjoint pitch angles.

Figure 6: Spectra from the rotate-and-project analysis for a typical (a) mainband track and (b) sideband track. The mainband track is sharply peaked, whereas the sideband track is spread over a wider frequency range and doubly peaked. Each spectrum also shows the associated Gaussian fit for comparison. Note the y-axis in (b) is scaled down in comparison to (a). The sideband used for (b) is the same track as illustrated in Figure 4.

Figure 7: ROCs of individual classes and averages for the narrowband model.

Figure 8: 30 and 32 keV frequency spectra classified with the narrowband model. The colors represent the SVM class identification.

Figure 9: 30 and 32 keV track slope and frequency correlations classified with the narrowband model. The colors represent the SVM class identification.

Figure 10: 17 keV classified frequency spectrum with the wideband model. The colors represent the SVM class identification.

Figure 11: Classified tracks of candidate MPT events exhibiting multiple topological combinations present in Project 8 Phase I data. The blue are mainband high pitch angle, the pink mainband low pitch angle, and the yellow sideband classified tracks. The rectangular boxes are for illustration only; the track is composed of all points concentrated along a line passing through the middle of the box.

Figure 12: Analysis flowchart described and proposed in this work. The green blocks indicate large-scale processing steps and the smaller orange blocks show the data at each step. Feature extraction and the classifier decision function are contained within the block labelled 'Classifier', and the classified track provides more input information to the event builder compared to the raw track. Expansion of the event building stage and implementation of the pitch angle corrections are the most critical future analysis tasks to fully utilize the classification scheme.

Figure 13: Kurie plot of two simulated tritium spectra in the toy model described in Section 6.2: mainbands only (black) and sideband-contaminated with misclassification rate α = 0.5 (purple).

Figure 14: Extracted Q-value from a Kurie fit to sideband-contaminated tritium spectrum simulations for different values of the misclassification fraction α. The light red band represents the standard error of the mean. The true value of the endpoint is asymptotically approached for decreasing α, which, without energy corrections, sits around 18.5 keV.
This material is based upon work supported by the following sources: the U.S. Department of Energy Office of Science, Office of Nuclear Physics, under Award No. de-sc0011091 to the Massachusetts Institute of Technology (MIT), under the Early Career Research Program to Pacific Northwest National Laboratory (PNNL), a multiprogram national laboratory operated by Battelle for the U.S. Department of Energy under Contract No. DE-AC05-76RL01830, under Early Career Award No. de-sc0019088 to Pennsylvania State University, under Award No. DE-FG02-97ER41020 to the University of Washington, and under Award No. de-sc0012654 to Yale University; the National Science Foundation under Award Nos. 1205100 and 1505678 to MIT. This work has been supported by the Cluster of Excellence "Precision Physics, Fundamental Interactions, and Structure of Matter" (PRISMA+ EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149); Laboratory Directed Research and Development (LDRD) 18-ERD-028 at Lawrence Livermore National Laboratory (LLNL), prepared by LLNL under Contract DE-AC52-07NA27344; the MIT Wade Fellowship; the LDRD Program at PNNL; and the University of Washington Royalty Research Foundation. A portion of the research was performed using Research Computing at PNNL. The isotope(s) used in this research were supplied by the United States Department of Energy Office of Science by the Isotope Program in the Office of Nuclear Physics. We further acknowledge support from Yale University, the PRISMA Cluster of Excellence at the University of Mainz, and the Karlsruhe Institute of Technology (KIT) Center Elementary Particle and Astroparticle Physics (KCETA).

Figure A1: 3D slices of the classification feature space for the narrowband classifier, in which the SVM is trained; the class separations can be seen by eye. The blue scatter are mainband high pitch angle, the pink mainband low pitch angle, and the yellow sideband tracks. All features are scaled (unitless).

Figure A2: Illustration of resulting ROC curves for three cases: a perfect classifier (red) with AUROC 1.0, a random classifier (blue) with AUROC 0.5, and a real case of intermediate strength (black) following the top figure.

Table 1: Relative intensities of the 83mKr conversion electron lines under study in this work.

Table 2: Standard parameter values for the rotate-and-project analysis.

Table 3: Summary of classification model accuracy scores and averaged AUROC metrics. We exclude 17 keV from the calculation of the narrowband AUROC.
2019-09-17T21:36:17.000Z
2019-09-17T00:00:00.000
{ "year": 2019, "sha1": "7d99808ef1de1b119d929574a67e8ca0f4e0d1a7", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/ab71bd", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "7d29aed393e85bbaadd003d29a3d5e89bb20ea48", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
55150596
pes2o/s2orc
v3-fos-license
Manganese Ores from South Sulawesi: Their Potential Uses as Raw Materials for Metallurgical Industry

Characterization of manganese ores from the Barru and Bone regencies of South Sulawesi has been conducted with the aim of clarifying their mineralogical and chemical composition for their potential uses as raw materials for the metallurgical industry. Mineralogical properties of the ores, analyzed by means of optical microscopy and X-ray diffractometry (XRD), show that samples from Barru consist mainly of rhodochrosite (MnCO3) with lesser cryptomelane, groutite, bixbyite, and todorokite. Goethite, calcite and a small amount of quartz are present as impurities. Manganese ore samples from Bone are predominantly composed of pyrolusite (MnO2) with subordinate ramsdellite and hollandite. Barite, quartz, hematite and clay are present as gangue minerals. Chemical compositions determined by the XRF method revealed that the Barru samples are higher in MnO (average 40.07 wt%) than the Bone samples (average 34.36 wt%). Similarly, Fe2O3 and CaO are also higher in the Barru samples than in the Bone samples. In contrast, concentrations of SiO2 and total alkali (K2O + Na2O) are lower in the Barru samples. The average P2O5 content of samples in both areas is low (<0.2 wt%). The relatively higher grade of Fe2O3 in the Barru ore implies that it has potential application for ferromanganese production, whereas the elevated SiO2 content of the Bone ore is a good indication for silicomanganese manufacture. However, both ores may not be favorable for direct use as raw materials in metallurgical applications. Prior to use, the ores should be treated by physical beneficiation in order to reduce deleterious elements.

I. INTRODUCTION

Manganese ores are the main source of Mn metal, which is primarily used as a raw material in industry alongside iron, aluminum, and copper. Approximately 95% of the ore produced is utilized by the metallurgical industry, mainly for the production of iron and steel and in alloys of steel. The remainder is used in non-metallurgical sectors such as battery, chemical and pharmaceutical applications [1]. Global reserves of Mn ore total 4,517 million tons, the largest of which derive from the Kalahari, South Africa. Large amounts of Mn have also been found in Groote Eylandt, Australia; Nikopol, Ukraine; and Urucum, Brazil [2]. In 2013, total world mine production reached 17,000 tons, with South Africa the largest producer, followed by Australia and China [3]. Although manganese deposits in Indonesia have not been studied in detail to date, they are reported to be present all over the country, including in Sulawesi. The purpose of this study is to describe the mineralogy and chemical compositions of some manganese ore samples collected from the Barru and Bone areas of South Sulawesi, with implications for their potential usage as raw materials, particularly in the ferroalloys industry.

II. REVIEW OF MANGANESE UTILIZATION

Classification of manganese ore is normally based on its grade. Ores containing more than 35% Mn are regarded as manganese ore proper, suitable for the manufacture of ferromanganese. Ferruginous manganese ore, grading 10-35% Mn, is suitable for the manufacture of spiegeleisen. Ore containing 5-10% Mn is referred to as manganiferous iron ore and is suitable for the manufacture of pig iron [4,5,6]. The end-use classification of manganese ores is divided into metallurgical, chemical, and battery grades.
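The grade boundaries quoted above translate directly into a small helper. Because the XRF results later in this paper are reported as MnO wt%, a conversion by the Mn/MnO mass ratio (54.94/70.94 ≈ 0.774, from standard atomic masses) is included; the function and its labels are illustrative only, not part of the paper's methodology.

```python
MN_PER_MNO = 54.94 / 70.94     # mass fraction of Mn in MnO (~0.774)

def mn_from_mno(mno_wt_pct):
    """Convert an MnO wt% assay (as reported by XRF) to elemental Mn wt%."""
    return mno_wt_pct * MN_PER_MNO

def ore_grade_class(mn_wt_pct):
    """Grade classes as quoted in the review above (percent Mn by weight)."""
    if mn_wt_pct > 35:
        return "manganese ore (ferromanganese manufacture)"
    if mn_wt_pct >= 10:
        return "ferruginous manganese ore (spiegeleisen)"
    if mn_wt_pct >= 5:
        return "manganiferous iron ore (pig iron)"
    return "below manganiferous grade"

print(ore_grade_class(mn_from_mno(40.07)))   # Barru average MnO
```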
Metallurgical grade ore for the iron and steel industry ideally contains 35-55% Mn; the P2O5, Al2O3, SiO2, CaO and S contents are also important, and the Mn/Fe ratio is very critical. About 7 kg of Mn is needed to produce one ton of steel. Manganese serves as a desulphurizing, deoxidizing and conditioning agent during the smelting of iron ore. As an alloying element, manganese increases the toughness, strength, and hardness of steel [5,6]. Nonferrous manganese alloys include manganese bronze (Mn, Cu, Sn, and Zn) and manganin (Mn, Cu, and Ni). Manganese bronze is corrosion resistant, for example against seawater; it is therefore suitable for propeller blades on boats or torpedoes. Manganin is used in the form of wire for accurate electrical measurements [5].

III. MATERIAL AND METHODS

Manganese ore samples used in this study were collected from two localities (Fig. 1), namely Palluda village, Pujananting sub-district of Barru Regency (four samples) and Mappesangka village, Ponre sub-district of Bone Regency (eight samples). Samples of manganese ore were subjected to XRD, SEM-EDS and XRF analyses at the Faculty of International Resource Science, Akita University, Japan. XRD analysis was conducted using a Rigaku Multiflex X-ray diffractometer (Cu-Kα radiation, λ = 1.541 Å, voltage V = 30 kV, and current I = 16 mA). SEM-EDS analysis was utilized to observe the morphology and semi-quantitative chemical composition of minerals contained in samples prepared as polished thin sections. This analysis was carried out using a JEOL JSM-IT300 scanning electron microscope equipped with an energy dispersive spectrometer (Oxford Instruments). Chemical compositions of the manganese ore samples, analyzed as pressed powders, were determined using a Rigaku Primus II X-ray fluorescence spectrometer.

A. Mineralogy

The manganese ore in the Barru area occurs in two types, i.e., cavity filling and residual materials. The first type is hosted in bedded limestone; the ores are massive and black with locally pink color, and the thickness of the orebody ranges between 2 and 25 cm. The second type occurs as residual, massive, subangular to rounded, black-colored materials with diameters up to one meter, hosted in soil. Results of XRD examination (Fig. 2) indicate that samples of the cavity filling type are predominantly composed of ferroan rhodochrosite (FeMnCO3) with subordinate groutite (MnO.OH) and todorokite (Mn-Ca-K-Na-Ba-Mn-H2O). These phases are associated with gangue minerals that mainly consist of calcite (CaCO3) with minor quartz (SiO2). On the other hand, XRD analyses of the residual ore type show that cryptomelane (K-Na-Mn8O16) and bixbyite ((FeMn)2O3) are the main Mn phases, and goethite (FeO.OH) is detected as the principal gangue mineral (Fig. 3).

Optical microscopy and SEM examination of the Barru samples show the typical colloform texture (Fig. 4A). The bands vary in thickness from 10 to more than 500 microns, with circular or wavy forms. SEM images show alternating dark and light bands, indicating differences in metal composition; the light bands likely reflect compositions in which the proportion of light metals is lower than that of heavy ones (Fig. 4C and 4D). EDS analysis of selected spots indicates the presence of sphalerite (ZnS), as shown in Figure 4B. This mineral occurs as inclusions in rhodochrosite and is characterized by anhedral texture, with diameters ranging from 100 to 500 microns. Calcite and silica are also identified within this sample.
Manganese ores in the Bone area occur as lensoid and brecciated bodies associated with chert, carbonaceous shale and volcanic rocks. Results of XRD analysis showed that pyrolusite (MnO2) is the main manganese phase present, with subordinate ramsdellite. Quartz (SiO2) and barite (BaSO4) were identified as gangue minerals (Fig. 5). Textural features of the Bone samples analyzed by SEM are shown in Figure 6. Pyrolusite mostly occurs in association with quartz. A large pyrolusite crystal (up to 5 mm in diameter) is subhedral with well-developed cleavage, set in quartz (Fig. 6A). Medium grains of pyrolusite (20-250 microns) within the gangue are subhedral to anhedral and locally display a circular shape (Fig. 6B). Pyrolusite also occurs as fine anhedral grains in quartz (Fig. 6C). Barite shows elongated, prismatic, euhedral crystals with sizes up to 700 microns, occurring as irregular veins associated with silica and iron oxides (Fig. 6D).

B. Chemical composition

Chemical compositions of the samples from the Barru and Bone areas, analyzed by means of the XRF method, are presented in Table 1. Regarding trace elements, the Barru samples have high concentrations of As, Pb, Sr and Zn compared to the Bone samples. On the contrary, the Ba, S and V contents are higher in the Bone samples. The significant concentrations of Ba and S in the Bone samples are due to the occurrence of barite (BaSO4) within the ores. Meanwhile, the high concentrations of Pb, As, and Zn in the Barru samples are connected not only to the presence of sulfide phases such as sphalerite, but also to the existence of goethite, which carries significant concentrations of trace elements; this mineral has a high capacity to adsorb such cations from solution during chemical weathering [7].

C. Potential uses as raw materials in metallurgical industry

In terms of the utilization of manganese ore in the metallurgical sector, the analytical results indicate that the Mn ore from Barru may have good potential as a raw material for manganese ferroalloy production, owing to its relatively high Fe2O3 content. In modern steelmaking, rhodochrosite can act as an effective desulfurizer. However, the high moisture content (avg. 18.81%) of these ores is problematic, because additional energy is required to reduce the moisture, thereby increasing the production cost. The significant concentrations of SiO2 and Al2O3 contained in the Bone samples imply that such ores may be favorable for use as raw materials for the production of silicomanganese.
2018-12-07T06:59:15.452Z
2016-06-24T00:00:00.000
{ "year": 2016, "sha1": "d32a7fc25befe53ae52f771f381a3235b0648d10", "oa_license": "CCBY", "oa_url": "https://doi.org/10.20342/ijsmm.3.1.174", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "2c0a90a01977ee9ee24014d99befb9e19f34d2a0", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Environmental Science" ] }
55374212
pes2o/s2orc
v3-fos-license
Potential effects of plant growth promoting rhizobacteria (Pseudomonas fluorescens) on cowpea seedling health and damping off disease control

Damping off caused by Sclerotium rolfsii on cowpea results in yield losses with serious socioeconomic implications. Induction of defense responses by plant growth promoting rhizobacteria (PGPR) is largely associated with the production of the defense enzyme phenylalanine ammonia-lyase (PAL) and oxidative enzymes like peroxidase (PO) and polyphenol oxidase (PPO). In the present study, the effects of plant growth promoting rhizobacteria (Pseudomonas fluorescens (bv. V)) on both damping-off development and growth parameters in cowpea seedlings were investigated. The best reduction in pre- and post-emergence damping-off in cowpea seedlings was observed in BCPF 8-treated samples. Seed bacterization with BCPF 8 significantly increased peroxidase (PO), polyphenol oxidase (PPO), and phenylalanine ammonia-lyase (PAL) activities. The activation of these defense reactions by BCPF 8 was correlated with an enhanced resistance to damping-off caused by S. rolfsii. This study demonstrated the ability of the rhizobacterium BCPF 8 to induce systemic resistance in cowpea, suggesting that this legume is an induced systemic resistance (ISR)-positive plant.

INTRODUCTION

Cowpea (Vigna unguiculata [L.] Walp.) is a food legume of significant economic importance worldwide. Cowpea diseases induced by species of pathogens belonging to various pathogenic groups (fungi, bacteria, viruses, nematodes, and parasitic flowering plants) constitute one of the most important constraints to profitable cowpea production in all agro-ecological zones where the crop is cultivated (Sendhilvel et al., 2005). Damping off of cowpea has been reported in many countries and can provoke 50 to 60% dry yield loss in new alluvial regions of West Bengal (unpublished data, AICRP on Vegetable Crops). Sclerotium rolfsii was by far the most common species isolated from all the agro-ecological zones and is pathogenic on cowpea. Stress alleviation or disease control remains one of the most challenging issues to be addressed, which is especially true for cowpea considering the largely undefined area of cowpea self-defense mechanisms. The application of chemical fungicides has been the conventional strategy for managing damping off for over 50 years. Though fungicides have shown some promising results in controlling damping off, fungicide residues could lead to environmental pollution and human health hazards. Biocontrol approaches may help to develop ecofriendly strategies for managing this disease in cowpea seedlings. Biological control represents both the oldest and the youngest technology for the control of plant diseases and pests (Akinbode and Ikotun, 2008). Most people agree that agriculture could not have begun without the benefits of naturally occurring biological controls.

*Corresponding author. E-mail: subhadipnandi87@gmail.com. Tel: +91 9433964327. # Both authors contributed to this work.

Table 1. Treatments used in this study (treatment number and treatment name):
1. Normal seeds in non-infested soil (Control)
2. BCPF 7 in seed treatment + S. rolfsii-infested soil
3. BCPF 7 in seed treatment + non-infested soil
4. BCPF 8 in seed treatment + S. rolfsii-infested soil
5. BCPF 8 in seed treatment + non-infested soil
6. Carbendazim in seed treatment + S. rolfsii-infested soil
7. Carbendazim in seed treatment + non-infested soil
8. Normal seeds in S. rolfsii-infested soil
Yet modern biological control achieved with introduced microorganisms is still in its infancy. The saprophytic pseudomonads associated with plants include P. fluorescens, P. putida and P. aeruginosa. The use of fluorescent pseudomonads is gaining importance for plant growth promotion and biological control. Fluorescent pseudomonads can reduce disease severity in several crop plants through the induced resistance phenomenon (Thahir Basha et al., 2012). Induced systemic resistance in crop plants is characterized by the induction of host-defense responses, including the synthesis of defense-related enzymes and the accumulation of phenolics. In this context, we aimed to evaluate the biocontrol activity of some indigenous fluorescent pseudomonads against damping off disease in cowpea and to define the mechanisms implicated in this process.

Fungal culture and inoculum preparation

Naturally infected cowpea plants were collected from fields as a source of Sclerotium rolfsii. Isolation was done by directly transferring mycelia and sclerotia of the fungus, found on the stem and collar of the infected plants, to Potato Dextrose Agar (PDA, Hi-media) plates. All plates were kept at 28 ± 1°C in the dark in an incubator for 1 week. The inoculum of S. rolfsii was prepared by inoculating 50 g of sterile wheat grain medium in polyethylene bags with three 5 mm diameter fungal plugs and incubating at 24°C for 3 weeks. Colonized wheat grains were stored at 4°C until further use. Sterile wheat grains only were used as inoculum for the control treatment.

Antagonistic activity of bacterial isolates

Fluorescent pseudomonads designated as BCPF 7 and BCPF 8 were isolated on King's medium B (KMB, Hi-media) (King et al., 1954) from soil collected from the rhizosphere of rice and chilli roots, respectively. The rhizobacterial isolates were characterized on the basis of their morphological (cell shape, cell arrangement, Gram reaction), cultural (colony type, pigment production) and biochemical features, following the identification keys of Bossis (1995) for Pseudomonas sp. The antagonistic effects of the rhizobacterial isolates were assessed against S. rolfsii by a dual culture technique. Petri plates were poured with 20 ml of PDA (Hi-media, without antibiotic), and a loopful of fresh bacterial culture was streaked linearly, leaving 1 cm from the margin. A 5 mm disc from a 3-day-old culture of the pathogen was placed at the centre of each Petri plate, and plates were incubated at 28°C (±2°C) for 3 to 4 days. The distance between the fungal growth and the bacterial colonies was recorded as the inhibition zone. For each treatment, three replications were used. Percent inhibition over the control (inoculated with a S. rolfsii disc, without bacterial streaking) was calculated using the following formula:

I = [(C − T) / C] × 100

where I is the percent inhibition of mycelial growth, C is the growth of mycelium in the control, and T is the growth of mycelium in the treatment. The rhizobacterial isolates were also bioassayed, in terms of a vigour index, for their ability to promote or inhibit seedling growth of cowpea, using the method previously described by ISTA (1966).
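For convenience, the two quantities just defined can be computed as below. The inhibition formula follows directly from the text; the vigour index expression is the commonly used germination% × (root + shoot length) form, which we assume matches the ISTA (1966)-based method cited here.

```python
def percent_inhibition(c, t):
    """I = ((C - T) / C) x 100, with C and T the mycelial growth (e.g. colony
    diameter in mm) in the control and treatment plates, respectively."""
    return (c - t) / c * 100.0

def vigour_index(germination_pct, root_cm, shoot_cm):
    """Seedling vigour index: germination percentage times total seedling length.
    Assumed form of the ISTA (1966)-based bioassay described in the text."""
    return germination_pct * (root_cm + shoot_cm)
```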
Seed treatment

Cowpea seeds (var. Kashi-kanchan), obtained from AICRP on Vegetable Crops, B.C.K.V., Kalyani, Nadia, West Bengal, India, were surface sterilized for 30 s in 0.1% (w/v) mercuric chloride (HgCl2) and rinsed in 70% (v/v) ethanol for 3 min before rinsing three times in sterile distilled water. The efficacy of disinfection was tested by placing samples of the treated seeds on PDA (Hi-media) and Nutrient Agar (Hi-media) plates and checking for any microbial growth (Table 1). Fluorescent pseudomonads were grown in Erlenmeyer flasks (250 ml) containing 100 ml of KMB broth (Hi-media) for 48 h on a rotary shaker at 28 ± 2°C. Cells were harvested by centrifugation at 10,000 rpm for 10 min at 4°C and washed in sterile water. The pellet was resuspended in a small amount of sterile distilled water and then diluted with an adequate amount of sterile distilled water to obtain a bacterial suspension concentration of 10^8 cfu ml−1 (OD595 = 0.3). For bacterization, seeds of cowpea were surface sterilized with 0.1% (w/v) mercuric chloride (HgCl2, Merck) for 30 s, rinsed in sterile distilled water, and dried overnight under a sterile air stream. 10 ml of bacterial inoculum (10^8 cfu ml−1) was placed in a Petri plate; to this, 100 mg of carboxymethylcellulose (CMC, Hi-media) was added as an adhesive agent. 1 g of seeds was soaked in the 10 ml of bacterial suspension for 12 h and dried overnight in a sterile Petri plate. For the fungicidal treatment, seeds were soaked in Carbendazim (0.1 g ml−1) for 30 min; seeds soaked in sterile distilled water under aseptic conditions served as the control.

Effect of different seed treatments on growth performance and disease incidence

The experiment was carried out under glasshouse conditions, with the daily temperature ranging from 28 to 30°C and 90% relative humidity (RH) during the study period. Planting trays with drain holes (39 × 28 × 11 cm), surface sterilized in 0.1% (w/v) mercuric chloride (HgCl2) and rinsed with sterile distilled water, were used to grow the plants. Each tray was filled with 2 kg of sterilized soil mixture (top soil:compost:sand = 3:2:1, v/v) amended with S. rolfsii inoculum (8 g kg−1 of soil). Infested soil was incubated for one week to allow the establishment of S. rolfsii in the soil. The soil moisture content was maintained at field capacity by daily watering. No supplementary fertilizer was added. The treatments were arranged in a randomized block design with three replicates, and the whole experiment was repeated twice. In each treatment, 10 planting trays with 40 seeds per tray were used. Data on seed germination were recorded on the third day after sowing. Seedling length (cm), root length (cm), numbers of branches, leaves and nodules, and fresh weight (g) were also recorded 25 days after sowing. The development of disease symptoms associated with Sclerotium rolfsii damping-off infection was observed and assessed on the basis of pre-emergence (death of seedlings before they reached the surface of the soil) and post-emergence damping-off (wilting appearance) until seedling establishment. Infected seedlings were collected and infection by S. rolfsii was confirmed following Koch's postulates. Disease development was expressed as the disease incidence percent (DI, %) according to the formula:

Disease incidence (%) = (Number of infected seedlings / Total number of seedlings assessed) × 100.
Plant treatments for analysis of defense-related enzymes

Another set of experiments was carried out under similar conditions to assess a potential induction of defense-related enzymes in cowpea plants. Seed treatment was carried out as mentioned above. Treated seeds were sown in plastic pots containing an autoclaved mixture of peat moss and soil. Treated seedlings (5 days old) were challenge-inoculated with a mycelial plug (0.2 g) at the collar region. Seedlings were planted at the rate of 5 transplants per pot, and 10 pots per treatment were used. Sampling for induction of enzymatic activity was carried out every day after inoculation (DAI) for 10 days, and the lesion length was recorded at different time intervals (days).

Spectrophotometric assays

Spectrophotometric assays of peroxidase [PO (EC 1.11.1.7)] and polyphenol oxidase [PPO (EC 1.14.18.1)] were done by modifying the methods of Malik and Singh (1980) and Hammerschmidt et al. (1982), respectively, and activities were expressed as changes in absorbance of fresh tissue per minute. The assay of phenylalanine ammonia-lyase [PAL (EC 4.3.1.24)] was done according to Dickerson et al. (1984), and enzyme activity was defined as µg cinnamic acid produced min−1 g−1 of tissue. The assay of phenol was done according to Malik and Singh (1980) and expressed as mg g−1 of fresh tissue. Each experiment was repeated three times.

Extraction and electrophoresis of different isoenzymes

Freshly harvested plant tissue was crushed in sodium phosphate buffer (pH 7.0) for peroxidase (PO) isoform detection. Electrophoresis of PO was done in 10% polyacrylamide gel according to the method of Kahler and Allard (1970).

Statistical analysis

The data collected during these investigations were subjected to appropriate statistical analysis using the SPSS statistical tool, version 10.0.

In vitro antagonism

The present study revealed an efficient inhibition of S. rolfsii growth by fluorescent pseudomonad pretreatment: two native isolates exhibited a strong antagonistic effect against S. rolfsii in in vitro assays (Table 2), illustrating antifungal activity for both isolates. The antagonistic effect of the BCPF-8 rhizobacterium was evidenced by inhibition of the pathogen's growth by 70.05% in the dual culture method. The data depicted in Table 2 indicate that the vigour index, based on germination percentage, root length and shoot length, was also increased by treatment with BCPF-8 (2215.70) and BCPF-7 (1938.25). These two effective rhizobacterial antagonists were selected for further characterization (biovar determination) and used as inducers for the development of systemic resistance in cowpea seedlings against damping off incited by S. rolfsii. On the basis of phenotypic criteria (Bossis et al., 2000), BCPF 8 and BCPF 7 were identified as P. fluorescens biovar V and III, respectively (Tables 2 and 3).
Effect of different seed treatments on growth performance and disease incidence

The effects of the different seed treatments on plant growth parameters (germination percentage, shoot length, root length, fresh weight and number of nodules per root) of cowpea seedlings in both infested and non-infested soil were recorded (Table 4). The highest germination rate was recorded for BCPF 8-treated seeds sown in non-infested soil (94.7%), compared with untreated seeds. Among the infested-soil treatments, BCPF 8-treated seeds showed the highest germination rate (86%). In infested soil, BCPF 8-treated seeds also showed the greatest shoot length (21.33 cm), root length (7.33 cm) and fresh weight (7.27 g). However, Carbendazim-treated seeds showed the highest number of nodules per root (5.67) among all infested-soil treatments, which was statistically at par with BCPF 8-treated seeds sown in infested soil (5.33).

In the present study, the efficacy of the different seed treatments in controlling damping-off disease of cowpea seedlings was also recorded and is presented in Table 5. PGPR bio-formulations and fungicidal seed treatments were prepared individually, and disease incidence (%) was assessed at different time intervals (15 and 25 days after treatment). BCPF 8-treated seeds showed the lowest post-emergence damping-off incidence (22.81%) at 25 days after seed inoculation. Our results also show that BCPF 8 was the best seed-treating bio-formulation, as it reduced post-emergence damping-off incidence in cowpea seedlings by up to 37.31% relative to untreated seeds sown in S. rolfsii-infested soil. Interestingly, it is noteworthy that the BCPF 8 bio-formulation, followed by the BCPF 7 bio-formulation, was better than the chemical fungicide Carbendazim in the control of damping-off in cowpea caused by the deleterious pathogen S. rolfsii.
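The paper does not state how the 37.31 figure relating BCPF 8 to the untreated control was derived; two common conventions, a difference in percentage points and a relative percent control over the untreated check, are sketched below with hypothetical incidence values (the untreated value of 60.12% is invented purely to illustrate the arithmetic):

```python
# Hedged sketch: both conventions below are assumptions, since the
# paper does not give the computation behind its 37.31 figure.

def reduction_points(di_control: float, di_treated: float) -> float:
    """Absolute reduction in disease incidence, in percentage points."""
    return di_control - di_treated

def percent_control(di_control: float, di_treated: float) -> float:
    """Relative control (%) over the untreated check."""
    return 100.0 * (di_control - di_treated) / di_control

# Hypothetical example: untreated DI = 60.12%, BCPF 8 DI = 22.81%.
print(round(reduction_points(60.12, 22.81), 2))  # 37.31
print(round(percent_control(60.12, 22.81), 2))   # 62.06
```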
Induction of defense mechanisms by different seed treatments

The induction of greater amounts of defense-related enzymes in plants treated with the PGPR bio-formulations is shown in Figure 1. Levels of peroxidase (PO) increased significantly within 5 days after inoculation, followed by a sudden fall in activity, in seedlings challenged with the pathogen alone, whereas seedlings challenged with the pathogen and treated with BCPF 8 expressed an early and prolonged peroxidase activity up to 10 days after inoculation. The seedlings challenged with the pathogen and treated with BCPF 8 expressed the highest increase in activity (3.04-fold) over the untreated control (Figure 2). Similarly, the seedlings challenged with the pathogen and treated with BCPF 8 had the highest PPO activity (0.475) at 5 days after treatment, with a slow decrease in activity to 0.368 at 10 days after inoculation. On the contrary, in seedlings challenged with the pathogen alone, PPO activity peaked (0.221) at 5 days after inoculation but dropped to 0.175 at 7 days after inoculation (Figure 3). Although PAL activity in seedlings challenged with the pathogen alone increased to 0.024 at 5 days after inoculation and fell quickly at 7 days after inoculation, the seedlings challenged with the pathogen and treated with BCPF 8 showed an early, 6.28-fold enhanced PAL activity at 3 days after inoculation (Figure 4). A peroxidase isoform, PO 2 (Rm = 0.38, marked with a white arrow), was noticed in seedlings challenged with the pathogen and pre-treated with BCPF 8 (Figure 5). This isoform may be associated with the induction of systemic resistance in cowpea seedlings elicited by BCPF 8 and challenged by S. rolfsii.

DISCUSSION

In this work we found that approximately 70% of S. rolfsii growth was inhibited by native Pseudomonas isolates in dual plate culture. Similarly, Tripathi and Johri (2002) observed in vitro inhibition of Colletotrichum dermatium, Rhizoctonia solani and Sclerotium rolfsii by fluorescent pseudomonads. The data presented in Table 2 indicate that the fluorescent pseudomonad biovar V isolate designated BCPF-8 showed the maximum vigour index, evidenced by an enhancement of the germination rate, root length and shoot length, which confirms the findings of Rao et al. (1999), who observed a positive effect of five isolates of fluorescent pseudomonads on the growth of lentil by means of the vigour index. To exploit the in vitro antagonism exhibited by the isolates BCPF8 and BCPF7, an attempt was made to develop an effective biocontrol system for the management of damping-off disease of cowpea under field conditions. Our results show that BCPF 8 is the most effective biocontrol agent against S. rolfsii, in terms of a significant enhancement of the germination rate, shoot length, root length, fresh weight of the plant and number of nodules per root in infested as well as non-infested soil (Table 4).

Table 4. Effect of seed treatments on seed germination percent after 3 days of sowing and seedling establishment of cowpea in S. rolfsii-infested and non-infested soil mix after 25 days of sowing.

Such enhancement of root nodulation by Pseudomonas sp. may be due to the production of plant growth-promoting substances (Shabayev et al., 1996). Similar findings were also reported by Yeole and Dube (1997), where seed bacterization with rhizospheric Pseudomonas isolates increased the germination rate, root length and shoot length of cotton, chilli, groundnut and soybean.
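The fold increases reported above (3.04-fold for PO, 6.28-fold for PAL) are ratios relative to the untreated control; a minimal sketch of this computation follows, under the assumption that fold change is the ratio of activities at the same time point (the paper does not give the computation, and the activity values below are hypothetical):

```python
# Hedged sketch: assumes fold change = treated activity / control
# activity at the same time point; the paper reports 3.04-fold (PO)
# and 6.28-fold (PAL) without detailing the calculation.

def fold_change(activity_treated: float, activity_control: float) -> float:
    """Fold increase of enzyme activity over the untreated control."""
    return activity_treated / activity_control

# Hypothetical activities (change in absorbance min^-1 g^-1 tissue):
po_treated, po_control = 1.52, 0.50
print(round(fold_change(po_treated, po_control), 2))  # 3.04
```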
In addition to the beneficial effect of Pseudomonas sp. on the development of plants in pathogen-infested soil, it also improved plant growth in the absence of the pathogen, which strongly supports the findings of Avis et al. (2008). Inoculation of cowpea seeds with BCPF8 induced a faster and stronger reduction of pre- and post-emergence damping-off disease in comparison with the other bio-formulations and fungicides tested in this work. Pseudomonas sp. has been broadly studied for its ability to reduce the development of various soil-borne plant pathogens (Carisse et al., 2003). Different modes of action of Pseudomonas sp. have been reported, including the production of different antimicrobial compounds (Tharne et al., 2000), competition (Ellis et al., 1999) and induction of plant defense mechanisms (Sangeetha et al., 2010; Tonelli et al., 2011). Recent investigations on the mechanisms of biological control by plant growth promoting rhizobacteria (PGPR), such as fluorescent pseudomonads, revealed that PGPR strains protect plants from pathogen attack by strengthening the epidermal and cortical walls with the deposition of newly formed barriers beyond infection sites, including callose, lignin and phenolics (M'Piga et al., 1997). PGPR can also stimulate the expression of defense-related genes (Chen et al., 2000) and the induction of enzymes responsible for phytoalexin synthesis (Maurhofer et al., 1994). Hyphae of the pathogen surrounded by phenolic substances exhibited considerable morphological changes, including cytoplasmic disorganization and loss of protoplasmic content. Benhamou et al. (2000) reported that an endophytic bacterium, Serratia plymuthica, induced the accumulation of phenolics in cucumber roots following infection by P. ultimum. In the present study, a higher accumulation of phenolics was recorded in cowpea seedlings inoculated with the pathogen and pre-treated with BCPF8 at 7 days after inoculation compared to the other treatments. This increase in phenol content might indicate a possible involvement of such compounds in the enhanced resistance of cowpea seedlings to the pathogen S. rolfsii conferred by PGPR, and might have contributed to the reduced infection by S. rolfsii in cowpea seedlings. Peroxidase has been implicated in the last enzymatic step of lignin biosynthesis, that is, the oxidation of hydroxycinnamyl alcohols into free radical intermediates, which are subsequently coupled into the lignin polymer (Gross, 1980). Furthermore, peroxidase is involved in the production or modulation of active oxygen species, which may play various roles, directly or indirectly, in reducing pathogen viability and spread (Lamb and Dixon, 1997). In this work, the early and prolonged higher activity of PO from 3 days after inoculation may be correlated with the lowest pathogenicity of the pathogen in cowpea seedlings inoculated with S. rolfsii and pre-treated with BCPF8. Similarly, higher PO activity was noticed in cucumber roots treated with Pseudomonas corrugata and challenged with Pythium aphanidermatum (Chen et al., 2000). Biochemical analysis of rice plants raised from seeds treated with P. fluorescens showed an early induction of PO (Nandakumar et al., 2001). Mishra (2006) reported an increase in the activity of PO in PGPR-treated tea cuttings grown in pathogen-infested soil. PPO catalyzes the last step in the biosynthesis of lignin and other oxidized phenols. PPO activity was increased in cowpea seedlings inoculated with the pathogen and pre-treated with BCPF 8 at 5 days after inoculation.
Similarly, the induction of defense responses by PGPR associated with the production of oxidative enzymes such as PPO was reported by Sangeetha et al. (2010). PO and PPO play a central role in triggering the hypersensitive reaction (HR), in cross-linking and lignification of the cell wall, and in transducing signals to adjacent non-challenged cells (Lamb and Dixon, 1997). PAL is an enzyme of the general phenylpropanoid metabolism and controls a key branch point in the biosynthetic pathways of flavonoid phytoalexins, which are antimicrobial compounds (Bowles et al., 1990). The induction of PAL in cowpea seedlings pre-treated with PGPR (BCPF 8) and inoculated with the pathogen S. rolfsii could indicate a possible involvement of phenylpropanoid metabolism in BCPF8-induced resistance to damping-off. Similarly, Meena et al. (2000) demonstrated that the increase in PAL activity correlated with a reduction in disease incidence when groundnut plants were sprayed with P. fluorescens. The rapid induction of PAL genes in incompatible plant-pathogen interactions might be due to the activation of a specific and appropriate signal transduction pathway.

Figure 2. Changes of peroxidase activity in different treatments at different hours after inoculation.
Figure 4. Changes of phenylalanine ammonia lyase activity in different treatments at different hours after inoculation.
Figure 5. Changes of peroxidase isomers in different treatments at different hours after inoculation.
Changes of phenolic compounds in different treatments at different hours after inoculation.
Table 1. Seed treatments used in the study.
Table 2. Determination of biovar and in vitro antagonistic activity of bacterial isolates.
Table 5. Effect of seed treatments on post-emergence damping-off of cowpea in S. rolfsii-infested and non-infested soil. DAS, days after seed sowing; values are means of three replications; values in parentheses are % values; in a column, means followed by a common letter are not significantly different (p = 0.05) by DMRT.
2018-12-06T10:59:53.665Z
2013-04-10T00:00:00.000
{ "year": 2013, "sha1": "c5c63e6e8ef7fac0cfed76c938e8cb26a490ef2d", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/2525E0623012.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c5c63e6e8ef7fac0cfed76c938e8cb26a490ef2d", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
146392259
pes2o/s2orc
v3-fos-license
Assessing some measures of online deliberation The empirical turn in deliberative democracy has fostered the development of different methodological procedures. Within this literature, studies focusing on the internet have gained increasing attention. The belief that the internet may help solve some of the deliberative deficits of democracies has propelled an interest in the potential benefits and problems of online discourse. This article seeks to discuss some of the methods that have been advocated for the study of online deliberation to point out three of their weaknesses: (01) the establishment of misleading distinctions; (02) the neglect of the implications of the deliberative system; and (03) the disregard of some specificities of the internet.

The empirical turn in deliberative democracy has brought a renewed vitality to this field of research. After a decade of fruitful philosophical development, the 21st century witnessed a growing interest in rethinking conceptual frameworks through empirical inquiry, thus pushing the frontiers of deliberative theory in new directions. Habermas (2005) has endorsed this move, and Kies (2010) attributes this validation to the lack of empirical evidence on the grounds of a highly abstract perspective.

It is therefore not a surprise that Habermas has very recently strongly appreciated and encouraged the efforts accomplished by researchers from around the world to operationalize and test the criteria and presuppositions of the deliberative democratic model in different contexts of discursive interaction (KIES, 2010, p. 34).

This turn has fostered the development of methods applicable to different types of discursive arenas aimed at tackling diverse problems (BLACK et al., 2009; DRYZEK, 2008). Most studies seek to assess either the deliberativeness of specific types of interaction (KELLY, 2008; STEINER et al., 2004; WESSLER, 2008) or the effects of these processes on citizens, decision making and society in the broad sense (DELLI CARPINI et al., 2004). There are investigations devoted to tracking preference change (DRYZEK and NIEMEYER, 2006), the exposure to opposite perspectives (LEV-ON and MANIN, 2009; MUTZ, 2006), and processes of social learning provoked by deliberation (KANRA, 2009). There is also significant interest in the design of participatory experiments 1 and the role of the media 2. Within this profuse literature, studies focusing on the internet have been gaining attention. The belief that the internet may help solve some of the deliberative deficits of democracies has fueled an interest in the potential benefits and problems of online discourse. The aim of this article is to present some of the methods that have been advocated for the study of online deliberation and to point out some of their weaknesses. An element of these weaknesses emerges from problems in the empirical literature on deliberative democracy in general, while other issues are specific to texts related to online phenomena.
This article begins with a very brief review of online deliberation, followed by the presentation of some analytical approaches utilized to study this topic. The following sections provide a discussion of three weaknesses of the previous approaches to online deliberation: (01) the establishment of misleading distinctions, (02) the neglect of the implications of the deliberative system, and (03) the disregard of some specificities of the internet. It must be clear that it is not my aim, in this article, to advocate deliberative democracy against its critics or to deal with the many relevant criticisms raised against this democratic perspective. There is an extensive literature covering this debate 3. My goal is to foster the advancement of a debate within the deliberative approach, contributing to the development of this literature on its own grounds.

Online deliberation and its measures

Online deliberation is one of the main areas of interest among the most innovative research on deliberative democracy (DAVIES, 2009). Following the excitement evident in studies from the early 1990s that anticipated the emergence of a new public sphere on the internet, and on a more critical perspective, several scholars have tried to understand online practices by examining them through the lens of deliberation 4. This deliberative approach has developed over the last two decades, becoming one of the most productive areas of political theory (DRYZEK, 2007) 5.

The most recent literature addressing the deliberative approach has highlighted the fact that deliberation should be viewed as a process resulting from the overlap of several arenas and discursive moments 6. These studies have also emphasized that the give-and-take of reasons that serves as the foundation of deliberation occurs through a variety of communicative formats, including the presentation of ideas in an emotional way. In addition, deliberation does not require that participants become altruistic beings without particular preferences (MANSBRIDGE et al., 2010; MENDONÇA and SANTOS, 2009). The deliberative process only requires a public clash of discourses that induces reflection in a noncoercive way and promotes a connection between particular experiences and more general principles (DRYZEK, 2006, p. 52).

Online forums may function as arenas that play a role in broader discursive processes, thus nurturing public deliberation (COLEMAN and MOSS, 2012). Far from compromising the benefits of face-to-face group meetings, computer mediated communication may prove especially useful for deliberative work (PRICE, 2009, p. 37). Raphaël Kies (2010) also argues that there is no original contradiction between the internet and deliberation, although some scholars claim that the former can only foster frivolous and empty interactions. Despite his criticism of several approaches to online deliberation, Arthur Lupia (2009) is another scholar who admits its potential: "Online deliberation […] is promising because of its ability to bring people together for the purpose of information exchange without the difficulties caused by physical distances between participants" (LUPIA, 2009, p. 60).
In the search for online possibilities regarding deliberation, deliberative democrats have conducted a wide range of investigations. Among the pioneering works in this field are Wilhelm's (2000) investigation about Usenet in the United States, Jensen's (2003) comparison of a Usenet group (dk.politik) and a government-sponsored forum (Nordpol.dk) in Denmark, and Dahlberg's (2001) studies on the renowned experience of Minnesota E-Democracy. The research conducted by Graham and Witschge (2003) is also significant in these early stages of online deliberation research. The investigators focused on a British governmental website (UK Online), which, at the time, had around 20,000 posts.

An increase in the number of studies about online deliberation has led to an emergence of analyses with varying focuses. There are studies about: the design of forums (DAVIES and CHANDLER, 2012; SAEBØ et al., 2009; WRIGHT and STREET, 2007); the deliberativeness of online arenas (HAMLETT, 2002; JANSSEN and KIES, 2005; KIES, 2010; SAMPAIO et al., 2011; STROMER-GALLEY, 2007; WALES et al., 2010); the comparison between online and conventional media spheres (GERHARDS and SCHÄFER, 2010); the role of the internet in promoting contact between opposing perspectives (LEV-ON and MANIN, 2009; MUTZ, 2006; WOJCIESZAK and MUTZ, 2009); the use of online consultations (COLEMAN and SHANE, 2012; DAVIES and CHANDLER, 2012; FISHKIN, 2009; KIES, 2010; SHANE, 2009); and the potential impact resulting from these processes (FRESCHI and METE, 2009). These investigations have offered rich methodological approaches, which vary greatly depending on the type of research problem being addressed. In this article, it would be impossible to provide fair coverage of each of these routes. I will therefore focus on some of the most influential analytical frameworks. However, throughout the discussion I will make quick references to specific aspects of other methodological proposals that will not be featured here.

The first approach I wish to focus on is the Discourse Quality Index (DQI), proposed by Steiner et al. (2004) for the study of parliamentary deliberations. Following its establishment, this technique was developed further, and has been applied to the comprehension of other discursive spheres (BÄCHTIGER et al., 2009; PEDRINI, 2012). Praised by Habermas (2005) as 'splendid', and increasingly used in empirical studies, the DQI has become the most renowned method for the microanalysis of deliberation, and is one of the most influential approaches for scholars of online deliberation (KIES, 2010). Such influence should not be seen as a misuse of the DQI, because it has been advocated by its proponents: "the DQI can be applied easily and reliably to a wide range of deliberative contexts" (STEENBERGEN et al., 2003, p. 22). The first version of the DQI (STEENBERGEN et al., 2003, pp. 27-30) is based on six Habermasian principles (Table 01. Categories for assessing deliberation according to the DQI).

After the DQI's initial formulation, the approach has been expanded upon and updated. Bächtiger et al. (2009) put forth three theoretical reformulations: (01) the consideration of alternative forms of communication (labeled Type II Deliberation) besides only contemplating rational discourse; (02) the establishment of standards for considering an interaction as deliberative; and (03) the adoption of a sequential approach that does not anticipate encountering all of the features of deliberation throughout the whole process. Such reformulations affect the analytical matrix advanced by these scholars. Equality is measured "by counting the frequency of participation as well as by counting its volume (measured by the number of words)" (BÄCHTIGER et al., 2009, p. 05). Five levels of justification are considered, with the inclusion of one additional category: sophisticated justification (in depth), which means that "a problem is examined in a quasi-scientific way from various viewpoints" (BÄCHTIGER et al., 2009, p. 05). The variable Respect and Agreement "measures whether speakers degrade (0), treat neutrally (01), value (02), or agree (03) with positions and counterarguments" (BÄCHTIGER et al., 2009, p. 06). A variable identified as Interactivity assesses mutual references between arguments. Concerning Constructive Politics, Bächtiger et al. (2009) establish four categories; in addition to the three factors already suggested in the original version of the DQI, the authors believe consensus appeals should also be considered. To evaluate Type II Deliberation, the scholars consider the possibility of deliberative negotiations, arguing that the use of threats and promises "allows to empirically distinguish between 'deliberative' and 'non-deliberative' forms of negotiation" (BÄCHTIGER et al., 2009, p. 08). Lastly, this revised version of the DQI measures the use of story-telling as a source of justification.

The idea of looking at the sources of justification was expanded on by Jennifer Stromer-Galley (2007), who developed one of the most influential frameworks for scholars dedicated to the comprehension of online deliberation. Her micro-analytic approach is based on six elements: (01) reasoned opinion expression, (02) references to external sources when articulating opinions, (03) expressions of disagreement and hence exposure to diverse perspectives, (04) equal levels of participation during the deliberation, (05) coherence with regard to the structure and topic of deliberation, and (06) engagement among participants with each other (STROMER-GALLEY, 2007, p. 04). Stromer-Galley (2007) develops a complex coding scheme that begins with dividing units of discourse into four categories that specify the type and aim of the unit: problem (focused on the issue), metatalk (talk about talk), process (talk about the process) or social (talk that fosters or hinders social bonds). The next step involves tracing thoughts within the segmented units. "The 'thought' is the unit of analysis for which the deliberations are coded" (STROMER-GALLEY, 2007, p. 22). These thoughts should be understood within turns that may Start a new topic, Respond on topic, Respond to moderator or Continue self. Thoughts that express the problem focused on the arena are coded as expressing an Opinion, an Agreement, a Disagreement, a Fact or a Question. Thoughts representing metatalk are divided into manifestations of Conflict, Consensus, Clarification of one's own position or Clarification of someone else's position. Thoughts regarding process can point to Technical Problems associated with the process, which may include Technical Benefits, Deliberation Process, Deliberation Problems or Deliberation Positive. Lastly, thoughts coded as social can be designated as Salutations, Apologies, Praise or Chitchat. Problem and Metatalk thoughts that are on topic are further coded according to their valence: For; For-but; Against; Against-but or Unsure. In addition, when these thoughts are expanded on, the source of such elaboration is coded as Personal Experience, Briefing Documents, Mass Media or Other Participants. In contrast to the DQI, "the elaboration measure did not categorize the types of reasons offered, the quality of the reasons, nor the accuracy or factual nature of the reasons" (STROMER-GALLEY, 2007, p. 10). Equality among participants "was measured by counting the frequency of participation and by volume-measured by number of words" (STROMER-GALLEY, 2007, p. 11). The measurement of engagement included not only the levels of responsivity, but also the formulation of non-rhetorical questions. Ultimately, the model fostered by Stromer-Galley (2007) has several similarities with the DQI, and thus advances a matrix for a micro-content analysis.

A third proposal focusing on this type of micro analysis was recently presented by Raphäel Kies (2010), who gathered elements from different models, including those used by the DQI and by Stromer-Galley (2007). His indicators are presented in Table 02, below.

Table 02. Indicators for assessing deliberation according to Kies (2010, pp. 56-57):
- Inclusion: evaluation of the ease of access to the online forum, on the basis of connectivity, ICT skills and discursive rules.
- Discursive equality: assessment of discursive concentration and the level of control of the debate.
- Reciprocity: measurement of the proportion of posts that are within a thread and the proportion that start a new thread, in addition to the assessment of the extent to which posts take into consideration opinions previously presented.
- Justification: evaluation of whether the opinions are justified or not and how complex justifications are. The depth of justifications should also be observed, which is measured by coding the use of internal (based on personal viewpoints) or external (based on facts) justifications.
- Reflexivity: content analysis points to apparent cases of reflexivity; surveys and interviews help demonstrate more internal processes.
- Empathy: measurement of cases of disrespect, plus surveys and interviews that ask users about levels of respect.
- Sincerity: assessment of apparent cases of insincerity; surveys and interviews indicate the participants' perceptions of the intensity of each other's sincerity.
- Plurality: evaluation of sociodemographic profiles of participants and their political involvements.
- External impact: signs of extension of the discussion to an external agenda; participation of political personalities in the forum; users participating in other discussion spaces; concrete outcomes.

Kies's (2010) model presents some important differences in relation to the two frameworks that were previously mentioned. One important distinction is the consideration of elements that point to the external impact of the forum. The forum is not investigated as a contained environment. A second difference, which is related to the first, addresses the use of surveys and interviews in addition to content analysis. These methods help to promote a more complex picture of processes.
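Several of the indicators above are, in practice, simple counts over posts. For readers who wish to see what such measures look like computationally, the following is a minimal illustrative sketch (in Python); the post records, field names and the two chosen indicators are hypothetical simplifications, not the actual coding instruments of Stromer-Galley (2007) or Kies (2010):

```python
# Illustrative sketch only: participation equality by frequency and
# word volume, and reciprocity as the share of posts replying within
# an existing thread. All data below are hypothetical.

from collections import Counter

posts = [
    {"author": "ana",  "words": 120, "replies_to": None},
    {"author": "bea",  "words": 45,  "replies_to": 1},
    {"author": "ana",  "words": 80,  "replies_to": 2},
    {"author": "caio", "words": 200, "replies_to": None},
]

# Equality: frequency and volume of participation per author.
freq = Counter(p["author"] for p in posts)
volume = Counter()
for p in posts:
    volume[p["author"]] += p["words"]
print(freq, volume)

# Reciprocity: proportion of posts that reply within an existing thread.
replies = sum(1 for p in posts if p["replies_to"] is not None)
print(f"Reciprocity (reply share): {replies / len(posts):.2f}")  # 0.50
```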
Surveys are also at the heart of the methodological procedures adopted by some studies seeking to investigate the exposure of internet users to other perspectives. Wojcieszak and Mutz (2009) used a sample of 1,028 respondents from a large national survey in the United States to investigate if, how and when Americans discuss politics online. The main goal of their survey was to observe the extent to which political talk arising in various types of online groups serves to expose participants to like-minded views as opposed to challenging them via exposure to disagreement (WOJCIESZAK and MUTZ, 2009, p. 43). Participants who had been engaged in online political discussions were asked whether they usually agreed or disagreed with the other participants' views. This research was developed utilizing some aspects initially addressed by Diana Mutz (2006). She attempted to study real world informal conversations through "several representative national surveys that included information on Americans' networks of political discussion" (MUTZ, 2006, p. 21). The conclusions of this study suggest that "cross-cutting exposure is negatively related to participation" (MUTZ, 2006, p. 112).

Although a discussion of the results of these investigations would be valuable, this article will focus on some of the weaknesses of the methods applied. In doing so, my analysis obviously points to potential shortcomings in the findings that were reached. Each of the following sections focuses on presenting one such weakness.

The establishment of misleading distinctions

Micro approaches to online deliberation seem fascinated by detailed coding schemes that often lead to classifications which do not deepen our knowledge of the topic. Excessive quantification directs investigations to fallacies that lack theoretical grounds. Within the obsessive exploration of exhaustive analytical matrices, the purpose of many distinctions is not only unclear, but also misleading. This often puts the broader comprehension of the process in jeopardy. Dahlberg (2004) has made a similar argument, when he claims that:

The fundamental problem is that operationalisation requires researchers to focus upon those aspects of the public sphere for which narrowly defined and measurable indicators can be found, thus neglecting other aspects less amenable to quantification. The result is serious loss of meaning (DAHLBERG, 2004, p. 31).

My argument is that, besides the problem of excessive quantification (with its focus on 'measurable indicators'), there is the problem of establishing distinctions that may prove misleading. Many scholars are devoted to demanding classifications that may hinder the interpretation of deliberation as a political process.

This problem becomes apparent, for instance, within the levels of typology regarding justification that are suggested by the DQI. The categorization of inferior, qualified, sophisticated and in-depth justification does not reveal much about deliberative processes, for several reasons. First of all, judging the completeness of an inference is not as simple as the proponents of the DQI seem to imply, and may vary across different cultures. Secondly, the number of justifications presented does not determine the strength of one's argument. Neither does using quasi-scientific examinations to interpret problems. Complex justifications may even hinder deliberation as such, because they may compromise the general comprehensibility of discourses or embarrass other participants. As observed by Dahlberg (2004), "the most prolific posters and positions do not necessarily command the most attention" (DAHLBERG, 2004, p. 35). Categorizing levels of justification may nurture some critiques of deliberation that (wrongly) point to the elitist nature of the theory. Thirdly, the coding of the levels of justification neglects the basis of deliberation. The strength of this process should not depend on individual opinions, but on the broader process within which these remarks are inserted. The DQI individualizes processes and transforms deliberation into nothing more than an exchange of utterances. The philosophy of the conscience, strongly contested by Habermas (1987), returns through the backdoor. Rich deliberative processes can be demolished by coders simply because they view each utterance as being unsophisticated. On the contrary, a weak process can be praised for featuring isolated actors and opinions.

Several other criteria proposed by the DQI suffer from similar problems. The criterion of Mutual Respect, for instance, establishes two misleading hierarchies. The first distinguishes implicit respect from explicit respect, using ordinal variables.
However, the proponents of the model never explain why explicit demonstrations of respect are preferable to implicit manifestations. The second concerns the respect given to counterarguments. Once again, ordinal variables are used to grade different types of behavior asymmetrically. Applying a value to someone's argument is considered better for deliberation than treating it neutrally. Agreeing with a counterargument is even better, according to the DQI. An approach such as this neglects the agonistic dimension of deliberation. Deliberation requires taking other positions into consideration, but not necessarily agreeing with them. A discussion consisting of numerous agreements may be significantly poorer than one in which arguments are treated neutrally.

The criterion concerning the common good is problematic in two further ways. In the first case, it presumes that particular interests and the common good can be easily distinguished and are opposed to each other. The DQI neglects the fact that these dimensions may often go hand-in-hand or be intertwined. In addition, the criterion creates a distinction between two types of common good which seems unwarranted. Why should the notion of the common good be restricted to utilitarian terms and the Rawlsian principle of difference? And why is it important to distinguish types of common good? What does this categorization explain about deliberation?

Lastly, the category Constructive Politics values mediating proposals more than the elaboration of alternative proposals. Consensus appeals receive an even higher grade than mediating proposals. Once again, this type of hierarchy is based on a questionable restrictive conception of deliberation that assumes the middle way is always the best route. Why are alternative proposals not more deliberative than mediating arguments? What type of consensus is implicit in this coding scheme? Are workable agreements (DRYZEK, 2000; ERIKSEN, 2000; JAMES, 2004) included in the coding? Are the authors talking about the often criticized idea of substantive consensus or about meta-consensus (DRYZEK and NIEMEYER, 2006)? Is the non-radical middle way always better in deliberative terms? With this type of hierarchy, the DQI, once again, seems to feed the critiques of deliberative theory with misguided and unclear assumptions.

The revised version of the DQI also creates a new problem, which was not present in the first version of the method. Besides the internal hierarchies within categories, the revised version develops a hierarchy regarding types of deliberation. According to Bächtiger et al. (2009) there would be a more demanding type of process (Type I Deliberation) and a more informal one (Type II Deliberation). Instead of simply differentiating between forms of providing reasons, however, the authors set a very clear normative distinction. Type II Deliberation would "involve a shift away from the idea of purely rational discourse toward a conception of deliberation that incorporates alternative forms of communication (such as story-telling) and embraces self-interested behavior such as bargaining" (BÄCHTIGER et al., 2009, p. 03). Story-telling is not accepted as another way to provide reasons, but is considered a totally different form of communication. It would be simply a more realistic perspective when trying to comprehend real world practices, which are usually below the standards of Type I Deliberation. Throughout their article, the authors explicitly establish different standards for these 'types' of deliberation.

If the DQI establishes some misleading distinctions, it should not be seen as the only method for doing so. Stromer-Galley's (2007) proposal also suffers from this type of problem. To begin with, she sets up a very fragmented division of the units of discourse that does not seem very helpful at a more aggregated level. The distinction of units (which are coded as problem, metatalk, process or social) breaks up the discursive process into unnecessary fragments. These fragments can also be misleading. For instance, the establishment or hindrance of social bonds happens in many ways and not exclusively through what she outlines as 'social'.

However, an even more problematic issue unfolds in her most celebrated criterion, sourcing. Stromer-Galley (2007) advocates that the source of posts should be coded, proposing four main categories: (01) Personal Experience, (02) Briefing Documents, (03) Mass Media or (04) Other Participants. Such distinctions present at least three problems. Firstly, most participants in deliberative processes do not make explicit references to the sources of their opinions. Although some online arenas contain a great number of posts with links to other documents, this should not be seen as a valid rule for any sort of online experiment. Secondly, the categorization proposed by Stromer-Galley (2007) neglects other possible sources participants may frequent, as well as the intertwinement of the sources she discusses. The mass media, for instance, should not be reduced to a channel of information, but sits at the heart of contemporary personal experiences (SILVERSTONE, 2002). Thirdly, and more importantly, there is a problematic assumption that pervades her analysis. It implies that personal experiences are somehow poorer than other types of sources. She expected to find more references to the briefing documents, and seemed frustrated by the predominant use of personal experiences.

This type of assumption also manifests in another renowned procedure for the study of online deliberation. Jensen (2003) distinguishes between internal and external justifications, with an implicit preference for the latter. Internal justifications, based on personal viewpoints, are somehow considered to be a more superficial and less demanding way to present one's positions. The ability to refer to other sources is valued as an indicator of the qualifications of participants. However, such a view shows how the methods applied may underestimate the capacity of story-telling to present reasons in a publicly comprehensible way.
Personal experiences should not be seen as a less informed way to support one's position. This type of hierarchy devalues personal experiences and neglects the essential grounds of deliberative theory, which anticipates varying contributions within a discursive process.

Neglected implications of the concept of a deliberative system

Despite the broad theoretical acceptance of the concept of a deliberative system 7, most empirical studies addressing online deliberation still neglect its implications. The notion of a deliberative system advances the understanding of deliberation as a broader process, spread throughout time and space. From this perspective, deliberation may not involve a direct give-and-take of reasons, but may occur through broader discursive clashes. Therefore, to comprehend deliberation, attention must be given to the connections and relationships that exist among several discursive arenas.

Nevertheless, the great majority of methodological procedures still focus either on one arena or, even more problematically, on particular individuals. This type of focus is clear, for instance, in the way several scholars highlight the role of sincerity in understanding deliberation. Proponents of the DQI and Raphäel Kies (2010) note that sincerity is a key component of deliberation. Therefore, they see the inability to measure the sincerity of social actors as a shortcoming in their studies, without realizing that focusing on individuals fosters a restricted view of deliberation. Even Lincoln Dahlberg (2004), who advocates a broader qualitative approach, states that "we must not abandon attempts to understand sincerity due to the difficulty of the task" (DAHLBERG, 2004, p. 34). Based on these individualistic premises, such views do not understand deliberation as a public clash of discourses, but as a direct form of interaction.

The focus on individuals is also clear in some works grounded on the use of surveys as a method for understanding online deliberation. Questionnaires usually ask individuals if they discuss politics on the web and, if they do, who they talk to. There is a specific concern about the exposure of individuals to diverse opinions, and a fear that subjects who talk to like-minded people may become narrow-minded and anti-democratic. Online deliberation is assumed to happen only when individuals with different opinions communicate with each other.

These types of approaches are in danger of mis-measuring individuals, as John Dryzek (1990) has convincingly discussed. Surveys often assume that beliefs and attitudes are pre-established givens and treat interviewees as research objects, instead of as active political agents who interact with other agents. In addition, surveys frequently adopt an individualized and competitive approach that ignores the criticisms against the philosophy of conscience. Lastly, surveys tend to neglect key developments in deliberative theory that propose a broader understanding of discursive clashes by viewing these debates through the lens of deliberative systems. By this, I do not claim that surveys are useless for deliberative research.
Used within mixed method approaches, they can shed light on important topics for deliberative scholars. However, I would recommend extreme care in their use, especially because of the danger of fostering an individualized notion of deliberation.

Another problematic element in these approaches is the assumption that only contact with opposing perspectives would promote online deliberation (LEV-ON and MANIN, 2009; MUTZ, 2006). Conversations among like-minded individuals are often seen as fostering a form of mobilization, which could hinder deliberation (MUTZ, 2006). Such a view ignores the relevance of conversations among like-minded individuals to increase the chance that some discourses may be expressed publicly, as argued by Mansbridge (1999) and Neblo (2005). If interpreted through systemic lenses, discussions within a group can be essential to deliberation. Different types of discussions, in diverse arenas, at varying moments, offer distinct contributions to deliberative processes.

Another piece of evidence that suggests neglected implications regarding the concept of a deliberative system emerges in the way coding schemes are applied. Mostly, online arenas and initiatives are scrutinized in themselves. Along with the DQI, the frameworks of Stromer-Galley (2007) and Kies (2010) tend to assess interactions within a certain space. Reciprocity is usually restricted to a process of direct interlocution, and reason-giving is conceived of as something internal to the specific arena under analysis. The attempt to comprehend these internal relationships often ignores the broader nature of discursive flows. Online deliberation is constrained to a reproduction of face-to-face conversations. The role that information provided by online initiatives plays in a deliberative system is disregarded, or even criticized as not fully dialogical. The connections (and disconnections) of arenas, and the discursive routes built online, are overlooked. Deliberation is viewed as something to be observed within an initiative or arena, and not across initiatives and arenas.

Specific points in the frameworks of both Stromer-Galley (2007) and Kies (2010) could be thought of as exceptions in this regard. The former suggests the measurement of sources cited by actors, while the latter considers the external impact of arenas in his analyses. These points indicate the relevance of the 'external world' to processes that happen within a given online arena. However, both ideas have limitations. Stromer-Galley's (2007) sourcing, as was already noted, can only assess what is explicitly mentioned, and thus misses the idea of uninhibited discursive flows that cannot be properly identified. Kies's (2010) external impact reduces the many possible connections among arenas to one type: influence on the elaboration of political decisions. As a result, neither of the two 'exceptions' is properly equipped to capture the broader idea of deliberative systems. Each idea can grasp some (eventual) connections, but they are not effective when dealing with the idea of structural connections at the grounds of their frameworks. This is one of the central challenges for current methodological measurements of online deliberation. Understanding the connections, routes and flows among discourses on the web should no longer be thought of as something that can be ignored. If online deliberation is to be understood, these complex processes should be faced properly. Discussions occurring within an online group or forum represent a small fraction of a much more complicated process that pervades online and offline arenas. The concept of deliberative systems has become essential to obtaining a complete study of deliberation.

The disregard of some specificities of the internet

A third problem with some of the most influential empirical attempts to investigate deliberation on the web is related to a disregard of the nature of online interactions. Some scholars seem so concerned with their attempt to translate the conceptual dimensions of deliberation into empirical categories that they end up missing key aspects of the web. This type of disregard seems evident in the lack of attention to the different discursive architectures of online arenas. Usually, methodological procedures are conceptualized in a generic way that frequently fails to grasp the particularities of distinct online experiences. The discussion of politics on a social network community site, a newspaper website, a blog or Facebook generates completely different processes. They should not simply be gathered under the umbrella of Type II Deliberation or coded as if they were disembodied discourses. The logics of online discussion vary significantly, and methodological procedures have not been able to capture these variations. This is why, according to Dahlberg (2004), several studies of online deliberation make flawed generalizations, not supported by their data.

In addition, the range of interactions that count as discourse on the web should be amplified if online experiences are to be understood. The role of videos, songs, cartoons, links, images and comments must be conceived of in their specificities and through their intertwinements. It is problematic, for instance, to neglect the centrality of images in Facebook discussions or the role of videos used to respond to other videos on Youtube. However, online deliberation tends to be taken as an asynchronous variation of face-to-face verbal communication. Studies are inclined to focus on forums and communities, measuring the arguments verbally expressed by their members. It is definitely easier to study these interactions, but this procedure may pass over the whole experience of online discussion. One exception here is the recent work of Davies and Chandler (2012), who emphasize the need to comprehend the variety of communicative elements in online interactions and explicitly draw attention to the different modalities discourses may assume.

In this sense, the nature of social ties, the forms of expression, the routes followed by discourses, the regimes of visibility and even the boundaries between public and private are singular in online practices. This is not to say that the internet creates an entirely different world. Nevertheless, there are certain specificities that should be taken into account if web deliberation is to be fully comprehended.

One of these specificities is deeply related to the aspects developed in the previous section. I argue that if deliberation, as such, has much to gain from the idea of a deliberative system, then online experiences cannot be studied without it. The richness of online deliberation lies in the countless dynamic connections that engender new forms of discussion. Either explicitly promoted through linkage, or randomly encouraged through individual practices, the network of networks should not be imagined as a cluster of enclosed arenas. Although it may sound obvious, it is important to emphasize that the idea of a web is essential to the study of this network. However, as many studies focus on the micro-analysis of individual posts within a distinctive arena, the undisciplined discursive flows that surround the specific post are frequently neglected.
An additional specificity is related to the type of engagement that is expected from online deliberators. As opposed to focusing on the discursive process engendered by certain practices and initiatives, studies focus on the energy spent by each participant. These studies often express a feeling of frustration because of a lack of engagement of participants. It is frequently suggested, for instance, that the high levels of one-timers show the inability of online experiences to foster deliberation. Analogously, some scholars seem to expect that users would behave in social networks, online groups and other web arenas in exactly the same way as if they were in conventional meetings. The point I am trying to establish is that most studies of online deliberation seem to lack a sociological understanding of the way in which individuals behave online. Subjects are overburdened with certain expectations that emerge from other interactive structures, a practice which ultimately ignores the dynamics of online experience. In the quest for reciprocal and respectful arguments on the web, many studies simply borrow a pre-established idea of debate. This results in a process that fails to seek out new definitions for public discussion that could better accommodate the idiosyncrasies of the internet.

Concluding remarks

This article has sought to discuss three weaknesses found in prominent methods utilized for the study of online deliberation: (01) the establishment of misleading distinctions; (02) the neglected implications of the concept of a deliberative system; and (03) the disregard of some specificities of the internet. I briefly pointed out that some of the procedures often used for comprehending web discussions have been unable to grasp the nature of online interactions. The focus placed on micro distinctions has frequently hindered an understanding of the broader picture in which they are inserted. This does not mean micro-content analysis is, in itself, wrong or misleading. It has been responsible for interesting developments in the areas of both deliberative democracy and internet studies. There are, of course, fruitful findings that help to explain the possibilities of web discussion, thus supporting the work of those responsible for designing online consultations and web forums. Methodological insights also exist that point to new research trends, such as Graham and Witschge's (2003) proposal of re-building argumentative maps or Gerhards and Schäfer's (2010) attempt to study virtual debates through search engines. Therefore, I do not argue that the empirical literature on online deliberation is unproductive.

In my own empirical work on online deliberation, I have attempted to operationalize six criteria (inclusiveness, reason-giving, reciprocity, respect, orientation toward the common good and connectivity with other discursive arenas) in ways that combine quantitative and qualitative analyses 8. It would be beyond the scope of this article to explain how each of these categories was operationalized, but it is important to emphasize how some conceptual moves may lead the analysis in fruitful directions. When discussing reason-giving, for instance, I suggest restricting the quantitative measurement to a variable that simply codes the existence (or inexistence) of justifications, further developing the investigation through a Batesonian-Goffmanian frame analysis that conceives of frames not as individual strategies but as broader cultural and interactive constructions. This analysis takes into consideration not only words, but also images, memes and links mobilized by posts and comments. Another criterion that deserves attention is reciprocity. My coauthors and I have attempted to distinguish a direct type of reciprocity, usually measured by empirical studies, from a discursive form of reciprocity accessible through frame analysis and more in tune with the systemic approach to deliberation (MENDONÇA, FREITAS and OLIVEIRA, 2014). In addition, my investigations have benefited from the concept of affordances, frequently used in technology studies, which has paved interesting routes for context-sensitive analyses. I am not, therefore, skeptical about the possibility of empirically assessing online deliberation and, as a matter of fact, measuring some of its dimensions. I am against excessive micro-quantification, focused on individuals and on arenas (considered as self-enclosed) and not sensitive to the contexts of online interaction. My argument is simply that key weaknesses permeate most of the empirical studies.
Such studies would greatly benefit from a more complex view, which does not mean the establishment of numerous detailed categories to capture the minutiae of individual discursive constructions. The core idea of deliberation and the nature of the online experience must be kept in mind. By doing so, the concept of a deliberative system can contribute a great deal because of its emphasis on the reticular character of human interaction. A deliberative system helps to create an understanding of the complexities and specificities of web deliberation, thus generating new routes for empirical studies.

Revised by Cabo Verde
Submitted in December 2014
Accepted in July 2015
2019-05-07T14:21:49.987Z
2015-09-01T00:00:00.000
{ "year": 2015, "sha1": "da4d3857ca5c2fab117f3e07fb8a94fe5959afe3", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/bpsr/v9n3/1981-3821-bpsr-9-3-0088.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "da4d3857ca5c2fab117f3e07fb8a94fe5959afe3", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Sociology" ] }
221748022
pes2o/s2orc
v3-fos-license
Increased levels of reactive oxygen species in platelets and platelet-derived microparticles and the risk of respiratory failure in HIV/AIDS patients Respiratory failure (RF) is the main cause of hospital admission in HIV/AIDS patients. This study assessed comorbidities and laboratory parameters in HIV/AIDS inpatients with RF (N = 58) in relation to those without RF (N = 36). Tuberculosis showed a huge relative risk, and platelet counts were slightly higher in HIV/AIDS inpatients with RF. A flow cytometry assay for reactive oxygen species (ROS) showed lower levels in platelets of these patients in relation to healthy subjects. However, when stimulated with adrenaline, ROS levels increased in platelets and platelet-derived microparticles of HIV/AIDS inpatients, which may increase the risk of RF during HIV and tuberculosis (HIV-TB) coinfection.

Respiratory diseases of infectious origin are the most commonly occurring and principal causes of death in HIV/AIDS patients. (1,2) Lethality rates are notoriously high in patients co-infected with HIV and tuberculosis (HIV-TB), since the virus weakens the host's immune response to Mycobacterium tuberculosis, so the chances of developing a TB infection are greatly increased and its progression is more dramatic. (2,3) The respiratory conditions in these patients can be aggravated by the platelet activation process. (4) In HIV-1 infections, platelets have been shown to circulate in an activated state, and the degree of activation has been correlated with the severity of disease. (5) However, this persistent state of activation may cause platelets in these patients to become refractory to further stimulation. One study assessed ex vivo platelet function in HIV-infected subjects and showed hyper- and hypo-reactivity in platelets depending on the platelet agonist tested: while platelet aggregation was increased in response to adrenaline, collagen and thrombin-receptor agonist peptides induced significantly less aggregation. (6) Another study showed a depressed release of the chemokine RANTES, which indicates a deficit in platelet function in HIV patients. (7)

Despite increased levels of platelet activators and exacerbation of platelet activation, it is known that HIV infection is often associated with a deficit in platelet function or a state of "platelet exhaustion". (7) This ambiguity has been shown to have a relationship with the lungs: on the one hand, platelets help maintain the pulmonary vascular barrier and defend against pulmonary hemorrhage, while on the other hand they contribute to pathologic syndromes of pulmonary inflammation and thrombosis. (4) Recently, we observed a relationship between increased platelet distribution width (PDW) and likelihood of survival in HIV/AIDS inpatients. Opportunistic infections were the major events in the mortality of these patients, which evidences the role of antimicrobial host defense under severe immunosuppression (Gama WM et al., unpublished observations). Data on platelet involvement in HIV infection associated with respiratory comorbidities are lacking and require further investigation.

Here, we evaluated the hematological and biochemical data of ninety-four patients hospitalised with HIV/AIDS, who were admitted to the Fundação de Medicina Tropical - Dr Heitor Vieira Dourado (FMT-HVD) in Amazonas State, Brazil. It was a prospective study, and sampling was done for convenience.
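The group comparisons reported below rest on 2×2 contingency analyses (Chi-square tests, with crude and adjusted odds ratios from logistic regression). As a minimal sketch of this type of computation, assuming hypothetical counts (the study's own estimates were obtained with SPSS, and the adjusted ORs from logistic regression are not reproduced here):

```python
# Hedged sketch of a 2x2 analysis: Chi-square test and crude odds
# ratio with a Wald 95% confidence interval. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

#                 RF    no RF
table = np.array([[40,   5],    # comorbidity present (hypothetical)
                  [18,  31]])   # comorbidity absent  (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Crude odds ratio and Wald 95% CI on the log-odds scale.
a, b, c, d = table.ravel()
or_ = (a * d) / (b * c)
se = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```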
Patient engagement in the study was carried out with the patient or his/her companion, according to ethical procedures (CAAE: 57330116.6.0000.0005), and information regarding comorbidities was obtained from the electronic medical database at the FMT-HVD after consent was obtained. Patients were classified into two groups: those with RF (n = 58) and those with no RF (n = 36) (Table).

RF was classified as a respiratory syndrome when patients reported dyspnea (shortness of breath or difficulty in breathing), atypical chest performance with abnormal noises, long-term or productive cough, vesicular breathing sounds, abnormal respiratory murmurs, pleural effusion, gasping and wheezing. Pulmonary coinfections were tuberculosis, pneumocystosis and pulmonary histoplasmosis. Participants considered as having tuberculosis were those with positive Xpert MTB/RIF test results and positive sputum smears for M. tuberculosis. During hospitalisation, all HIV-TB coinfected inpatients underwent the 4-drug regimen anti-TB treatment (isoniazid, rifamycin, pyrazinamide and ethambutol). Other comorbidities were defined as signs and/or symptoms of neurological and digestive origin, of infectious and non-infectious origin, with or without chronicity. Neurological syndromes were classified as disorientation, seizure, paralysis, movement deficit and mental disturbances. Other comorbidities, such as chronic lymphocytic leukemia, Hodgkin's disease, aplastic anemia, and neurological disorders (e.g., multiple sclerosis and myasthenia gravis), were monitored.

The probability of TB or of neurological, circulatory, and digestive syndromes as a comorbidity of HIV inpatients with RF was assessed using the chi-square test (Table). TB showed a twenty-three-fold higher risk among patients with RF and represented the major cause of hospitalisation. Logistic regression predicted TB as the major cause of respiratory syndrome [crude odds ratio (OR) = 0.02, 95% confidence interval (CI) = 0.0-0.14; adjusted OR = 0.01, 95% CI = 0.0-0.08; P (likelihood-ratio test) < 0.001], as observed in other studies. (8,9) Patients with vomiting were 58% less likely to have RF [crude OR = 0.42 (95% CI = 0.17-1.02); adjusted OR = 0.17 (95% CI = 0.04-0.7); P (likelihood-ratio test) = 0.007].
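To make the association statistics above concrete, the following Python sketch computes a crude odds ratio and its 95% confidence interval (Woolf's log-OR method) from a 2x2 table of tuberculosis status against respiratory failure. The counts are hypothetical, chosen only to illustrate the calculation; they are not the study's data.

import math

# Hypothetical 2x2 table (illustrative counts only, not the study's data):
#                 RF present   RF absent
# TB present          a            b
# TB absent           c            d
a, b, c, d = 40, 3, 18, 33

odds_ratio = (a * d) / (b * c)

# 95% CI via the normal approximation on the log odds ratio (Woolf method)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = {ci_low:.2f}-{ci_high:.2f}")

With these illustrative counts, the script reports an odds ratio of about 24, in the same range as the twenty-three-fold risk described above.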
In addition, hematological and biochemical parameters were collected from the electronic medical record closest to the interview day. Despite showing statistical differences, bilirubin levels were within the normal range (Table). On the other hand, levels of hepatic and cardiac serum enzymes were higher in both groups, indicating chronicity of HIV infection. Of all the evaluated markers, the t-test showed that mean platelet counts were slightly higher in patients with RF (Table). This slight increase in platelet counts of patients with RF may be due to the fact that reactive thrombocytosis is linked to pulmonary tuberculosis, as observed in other studies. (8,9,10,11)

The clinical situation in HIV patients with respiratory diseases can be further aggravated by the platelet activation process, which is induced by the exacerbation of the constant inflammatory state in these patients. Platelets are best known as primary mediators of hemostasis and may be targets for reactive oxygen species (ROS) during cell activation. (12) ROS are known to modulate the coagulation and fibrinolysis pathways in platelet activation, and promote coagulation initiation and activation of other coagulation factors. (12) An imbalance between the production and detoxification of ROS may drastically affect platelet physiology and even lead, as a final event, to a change in the number of cells. Dihydroethidium (DHE) is a fluorochrome that allows the characterisation of redox responses in platelet activation by physiological and pathological stimuli. (13)

For ROS analysis in platelets and microparticles, a convenience sample (N = 21) taken within 72 h of admission from the patients hospitalised with HIV was investigated. Fifty-two percent of these inpatients (n = 11) had RF; 42.9% were HIV-TB coinfected; 95.2% reported effective use of HAART; 9.5% died; 52.4% had a neurological comorbidity, 19.0% a cardiovascular disorder and 38.1% a digestive comorbidity. The average age of this small group of HIV inpatients was 38.0 ± 10.3 years, and ROS levels were compared with age-paired samples collected from twenty-three healthy volunteers. Blood collection was performed after signing of the informed consent form.

Platelet-rich plasma (PRP) was obtained with two cycles of centrifugation at 500 x g for 20 min at 25 °C in PSG buffer (5 mM PIPES, 145 mM NaCl, 4 mM KCl, 50 μM Na2HPO4, 1 mM MgCl2·6H2O, 5.5 mM glucose, pH 6.8, with 300 nM prostaglandin E1), according to previous studies. (14) The platelets were re-suspended in this buffer and labeled with anti-CD41 and DHE for platelet marking and visualisation using a FACSCanto™ II cytometer (BD Biosciences, USA). FlowJo software was used to select the platelet population by forward versus side scatter (FSC vs SSC; Figure). The total number of double-labeled (CD41+DHE+) platelets was calculated from the acquisition of 10,000 events (Figure A) and did not differ from that of healthy people (Figure B).

PRPs were stimulated with 200 µg/mL of adrenaline (Hipolabor, 1 mg/mL) for 30 min and then labeled, fixed and acquired on the cytometer. Before stimulation, HIV platelets showed lower ROS levels than control platelets. However, after adrenaline stimulation, the level of ROS in HIV platelets, measured by mean fluorescence intensity (MFI), increased when compared to control platelets (Figure C). The low ROS level before stimulation may suggest platelet exhaustion in HIV infection, as observed by Holme and colleagues with time-dependent platelet aggregation. (7) Meanwhile, Satchell and colleagues showed differences in the reactivity of platelets to agonists in HIV-infected patients, possibly mediated through effects at both receptor and post-receptor levels. (6) There are few studies on ROS in human platelets, (13) and analysis of ROS generation in HIV patients is new. In this study, platelets of HIV patients stimulated by adrenaline were more reactive, as shown by the increase in MFI. Therefore, our research underlines the need for further prospective studies regarding platelet function in HIV patients.

It has been suggested that platelet activation generates platelet-derived microparticles (P-MP) in HIV infection. (15,16,17) We selected the P-MPs for their low lateral and frontal dispersion characteristics and for the expression of CD41+DHE+ below the platelet population, according to other authors (Figure D). (17) The mean number of P-MPs generated in relation to the number of CD41+DHE+ platelets was 19.15 ± 157.5. HIV/AIDS patients showed a greater number of P-MPs (Figure E) and responded to stimulus by adrenaline by increasing ROS levels due
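As a rough illustration of the gating-and-MFI comparison described in this section, the Python sketch below generates synthetic scatter and fluorescence values, gates a platelet-like population on FSC/SSC thresholds, and compares mean fluorescence intensity before and after a simulated stimulus. All distributions and thresholds here are invented for illustration and do not correspond to the study's cytometer settings or data.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic events: forward scatter, side scatter and DHE fluorescence (a.u.)
n_events = 10_000
fsc = rng.normal(200, 60, n_events)
ssc = rng.normal(150, 50, n_events)
dhe_basal = rng.lognormal(mean=3.0, sigma=0.4, size=n_events)
dhe_stimulated = rng.lognormal(mean=3.6, sigma=0.4, size=n_events)

# Gate a platelet-like population by FSC/SSC windows (illustrative values)
gate = (fsc > 100) & (fsc < 300) & (ssc > 75) & (ssc < 225)

mfi_basal = dhe_basal[gate].mean()
mfi_stimulated = dhe_stimulated[gate].mean()

print(f"gated events: {gate.sum()}")
print(f"MFI basal: {mfi_basal:.1f}, MFI after stimulation: {mfi_stimulated:.1f}")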
Effects of modulator therapies on endocrine complications in adults with cystic fibrosis: a narrative review

Cystic fibrosis is a monogenic disorder caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) protein, which transports chloride ions in secretory organs. Modulator therapies are small molecules that correct CFTR dysfunction and can lead to a wide range of benefits for both pulmonary and extrapulmonary complications of cystic fibrosis. With advancements in airway, antimicrobial and nutritional therapies and now the introduction of modulator therapies, most people living with cystic fibrosis in Australia are now adults. For adults with cystic fibrosis, endocrine manifestations such as cystic fibrosis-related diabetes, metabolic bone disease, and reproductive health are becoming increasingly important, and emerging evidence on the endocrine effects of CFTR modulator therapies is promising and is shifting paradigms in our understanding and management of these conditions. The management of cystic fibrosis-related diabetes will likely need to pivot for high responders to modulator therapy, with dietary adaptations and potential use of medications traditionally reserved for adults with type 2 diabetes, but evidence to support changing clinical care needs is currently lacking. Increased attention to diabetes-related complications screening will also be required. Increased exercise capacity due to improved lung function, nutrition and potentially a direct modulator effect may have a positive impact on cystic fibrosis-related bone disease, but supporting evidence to date is limited. Fertility can improve in women with cystic fibrosis taking modulator therapy. This has important implications for pregnancy and lactation, but evidence is lacking to guide pre-conception and antenatal management. Provision of multidisciplinary clinical care remains ever-important to ensure the emergence of endocrine and metabolic complications is optimised in adults with cystic fibrosis.

Summary

• Cystic fibrosis is a monogenic disorder caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) protein, which transports chloride ions in secretory organs.
• Modulator therapies are small molecules that correct CFTR dysfunction and can lead to a wide range of benefits for both pulmonary and extrapulmonary complications of cystic fibrosis.
• With advancements in airway, antimicrobial and nutritional therapies and now the introduction of modulator therapies, most people living with cystic fibrosis in Australia are now adults. For adults with cystic fibrosis, endocrine manifestations such as cystic fibrosis-related diabetes, metabolic bone disease, and reproductive health are becoming increasingly important, and emerging evidence on the endocrine effects of CFTR modulator therapies is promising and is shifting paradigms in our understanding and management of these conditions.
• The management of cystic fibrosis-related diabetes will likely need to pivot for high responders to modulator therapy, with dietary adaptations and potential use of medications traditionally reserved for adults with type 2 diabetes, but evidence to support changing clinical care needs is currently lacking. Increased attention to diabetes-related complications screening will also be required.
• Increased exercise capacity due to improved lung function, nutrition and potentially a direct modulator effect may have a positive impact on cystic fibrosis-related bone disease, but supporting evidence to date is limited.
• Fertility can improve in women with cystic fibrosis taking modulator therapy. This has important implications for pregnancy and lactation, but evidence is lacking to guide pre-conception and antenatal management.
• Provision of multidisciplinary clinical care remains ever-important to ensure the emergence of endocrine and metabolic complications is optimised in adults with cystic fibrosis.

Box: Classification of cystic fibrosis transmembrane conductance regulator (CFTR) gene mutations in cystic fibrosis

The diversity of CFTR genotypes and phenotypes means that not all individuals living with the condition benefit. Ivacaftor was the first drug available, but despite promising in vitro results, it is clinically effective in a very small proportion (< 6%) of people with cystic fibrosis who have the G551D or other class III (gating) mutations. 8,9 Lumacaftor was discovered next and used in combination with ivacaftor, but bronchospasm, particularly in individuals with severe airflow obstruction, limited its uptake. 10 This created opportunity for tezacaftor-ivacaftor, which offered advantages over lumacaftor-ivacaftor, including fewer drug-drug interactions, a better side effect profile, and benefits in those homozygous for the Phe508del mutation. 11 However, the clinical benefits of tezacaftor-ivacaftor were modest, and high-throughput drug discovery continued, eventuating in the most successful synergistic combination yet with elexacaftor-tezacaftor-ivacaftor (ETI). With ETI, remarkable clinical benefits have been demonstrated for the first time in people both homozygous and heterozygous for the Phe508del mutation. 12,13 Real-world experiences have further substantiated their benefits, tolerability and safety. 14 With ETI approved on the Australian Pharmaceutical Benefits Scheme for all adults since 1 April 2022, those with diminishing pulmonary symptoms have pivoted their attention to extrapulmonary complications. This narrative review aims to summarise modulator effects on the endocrine system, with a focus on cystic fibrosis-related diabetes, metabolic bone disease, and fertility.

We conducted an Ovid MEDLINE and grey literature electronic search on 14 March 2023 to capture studies evaluating effects of currently available modulator therapies on cystic fibrosis-related diabetes, metabolic bone disease, and reproductive health. All study designs that included human in vitro studies were selected. Animal studies were excluded due to inherent challenges and limitations of currently available models. The search strategy is available in the Supporting Information, section 1.

Epidemiology

The term "cystic fibrosis-related diabetes" encompasses a complex spectrum of glucose abnormalities that can be transient or progressive, with diagnosis based on specific glucose criteria being met on screening tests. 15,16 Epidemiological studies report that the prevalence of cystic fibrosis-related diabetes increases with age, with an estimated 30% of adults affected. 17 Both incidence and prevalence are directly proportional to diabetes screening, which may vary across centres. The presence of cystic fibrosis-related diabetes has been associated with higher morbidity and mortality, 18 reduced quality of life, and increased utilisation of health care resources compared with people without diabetes.
19 However, with the introduction of aggressive screening and improved clinical management, adverse effects can be minimised. 20

Risk factors

Risk factors for cystic fibrosis-related diabetes include increasing age, female sex, pancreatic insufficiency, cystic fibrosis-related liver disease, and systemic corticosteroid use. 21 Individuals with class I and II CFTR gene mutations also appear to be at increased risk of cystic fibrosis-related diabetes. 21 Family history of type 2 diabetes may also increase susceptibility. 22

Clinical associations

Adverse clinical outcomes directly linked to the presence of cystic fibrosis-related diabetes include more severe pulmonary disease, increased frequency and severity of pulmonary exacerbations, and presence of multiresistant microorganisms such as Pseudomonas aeruginosa. 23 Before modulators, people with cystic fibrosis-related diabetes were also at a higher risk of compromised nutritional status due to catabolism and ensuing weight loss.

Pathogenesis

The endocrine pancreas contains β islet cells, which are responsible for the synthesis and storage of insulin. The exact mechanisms of their dysfunction and loss resulting in insulin deficiency and onset of cystic fibrosis-related diabetes remain unclear. In the exocrine pancreas, dysfunctional CFTR can lead to ductal obstruction and autodigestion of pancreatic ductal and acinar cells. The "bystander hypothesis" postulates collateral damage to β cells and perhaps explains why exocrine pancreatic insufficiency often coexists clinically with cystic fibrosis-related diabetes. The "inflammation hypothesis" suggests that initial structural damage to β cells is accompanied by an exaggerated inflammatory response adding further insult, 24 which may explain the insulin resistance seen in cystic fibrosis. Disruption of the incretin axis, 25 which regulates release of insulin and glucagon from the pancreatic islet cells, lipid metabolism, gut motility, appetite, body weight, and immune function, has also been postulated.

Clinical findings

Clinically, weight loss and declining lung function may be the earliest indicators, preceding the onset of the glucose abnormalities required to meet the current diagnostic criteria for cystic fibrosis-related diabetes. 26 Post-prandial hyperglycaemia is often the first glycaemic manifestation, with fasting hyperglycaemia a later occurrence. Presence of fasting hyperglycaemia has been linked to increased risk and incidence of microvascular complications. 27 More recently, macrovascular events have also been reported in adults with cystic fibrosis-related diabetes. 28

Screening and diagnosis

Guidelines recommend screening for cystic fibrosis-related diabetes from the age of ten years with an annual oral glucose tolerance test (OGTT) 16,29 (Box 3). The OGTT remains the most sensitive test for the diagnosis of cystic fibrosis-related diabetes (Box 4), and a distinct category called "indeterminate glycaemia" (INDET) is used to recognise the mid-OGTT glucose elevations that can be seen in people with cystic fibrosis. 32

Effects of modulator therapies on cystic fibrosis-related diabetes

Ivacaftor monotherapy. In people with cystic fibrosis with normal and impaired glucose tolerance, ivacaftor may reduce progression to cystic fibrosis-related diabetes. 33 This effect may be through improved first phase insulin secretion 34-36 and overall endogenous insulin production independent of incretin effects.
Importantly, CFTR genotype appears key to ivacaftor success, 34 with patients homozygous for the Phe508del mutation receiving minimal benefit.

Combination modulator therapies. In individuals with impaired glucose tolerance, the use of lumacaftor-ivacaftor has been linked to positive glycaemic outcomes after one year of treatment in people who also demonstrated improved pulmonary and nutritional outcomes. 42 Early prospective studies evaluating the effects of ETI on continuous glucose monitoring indices, 46 glucose tolerance, 47,48 and weight 49 have been reported.

Box 4. Diagnostic categories of glycaemic abnormality in cystic fibrosis on OGTT (fasting and two-hour plasma glucose, mmol/L):
• Normal glucose tolerance: recommendation to perform annual OGTT. 29
• Indeterminate glycaemia (INDET): fasting < 5.6; two-hour < 7.7; mid-OGTT plasma glucose > 11.1 mmol/L. 32
• Impaired glucose tolerance (IGT): fasting 5.6-6.9; two-hour 7.8-11.0; can be impaired fasting and/or two-hour plasma glucose level. 15
• Cystic fibrosis-related diabetes without fasting hyperglycaemia: fasting < 6.9; two-hour ≥ 11.1; diagnosis can only be made in the non-pregnant state. 15
• Cystic fibrosis-related diabetes with fasting hyperglycaemia: fasting ≥ 7.0; two-hour not applicable; association with microvascular complications. 27
• Gestational diabetes (GDM) in cystic fibrosis: fasting ≥ 5.1; two-hour ≥ 8.5; a one-hour plasma glucose > 10.0 mmol/L also meets diagnostic criteria for GDM; repeat OGTT post partum is recommended to evaluate for cystic fibrosis-related diabetes. 15
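To make the Box 4 cut-points concrete, here is a minimal Python sketch that classifies a single non-pregnant OGTT result against the fasting and two-hour thresholds listed above. It is a simplified reading of the box for illustration only, not a diagnostic tool; the gestational pathway and the clinical caveats are omitted.

def classify_ogtt(fasting, two_hour, mid_ogtt=None):
    """Classify a non-pregnant OGTT result (plasma glucose, mmol/L)
    against the Box 4 cut-points. Simplified for illustration only."""
    if fasting >= 7.0:
        return "Cystic fibrosis-related diabetes with fasting hyperglycaemia"
    if two_hour >= 11.1:
        return "Cystic fibrosis-related diabetes without fasting hyperglycaemia"
    if 5.6 <= fasting <= 6.9 or 7.8 <= two_hour <= 11.0:
        return "Impaired glucose tolerance (IGT)"
    if mid_ogtt is not None and mid_ogtt > 11.1:
        return "Indeterminate glycaemia (INDET)"
    return "Normal glucose tolerance"

print(classify_ogtt(5.0, 7.2, mid_ogtt=11.5))  # Indeterminate glycaemia (INDET)
print(classify_ogtt(6.2, 9.4))                 # Impaired glucose tolerance (IGT)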
These limited findings provide interesting insights into the aetiopathogenesis of cystic fibrosis-related diabetes, potential recovery of β cell function, genotype-modulator interactions, and associations with pulmonary outcomes. The data suggest that initiation of modulator therapies may not be associated with immediate glycaemic benefits and that the lag time might extend beyond the time frame it would take to reach steady state. This indicates that modulator therapies do not briskly switch on β cells, and likely other mechanisms are involved. There is some evidence that CFTR modulators may reduce the risk of pancreatitis in cystic fibrosis and may partially restore exocrine pancreatic function, 51-53 with favourable pancreatic exocrine-endocrine paracrine interactions 54 improving residual β cell function. Suppressed glucagon may also play a role in reducing hyperglycaemia. 40,55 The effectiveness of modulator therapies in treating hyperglycaemia in individuals with confirmed cystic fibrosis-related diabetes may depend on the available β cells that can be rescued and on genotype-modulator interactions. But what is more exciting is the preventive or disease-modifying potential of modulator therapies, with the possibility that early initiation in childhood may preserve both endocrine and exocrine pancreatic function. The correlation between improved pulmonary function and glycaemia is noteworthy and points to possible downgrading of the inflammation cascade and reduction in insulin resistance, via mechanisms reducing islet cell and/or systemic inflammation.

Management in the post-modulator era. Management of cystic fibrosis-related diabetes will need to pivot for high responders to modulator therapies, with increasing focus on normalising glycaemia, screening for diabetes-related complications and optimisation of metabolic health parameters, including cardiovascular risk factors. 56 Overweight- and obesity-related complications, such as fatty liver, may also become more prevalent. 56 These metabolic complications and risks also need to be considered in adults with mild cystic fibrosis phenotypes not receiving modulator therapies. For now, insulin remains the mainstay of treatment due to the lack of robust evidence supporting the use of oral antihyperglycaemic agents and non-insulin injectables. 57 Incretin-based therapies may be appealing for adults with excess weight gain on ETI, but given the lack of safety and efficacy data, initiation should occur within a research context. In addition, improvements in lung function may be associated with reduced energy needs, and this will require adaptive dietary changes. Interactions with obesogenic environmental factors and genetic susceptibility to type 2 diabetes will also become relevant. Continuous glucose monitoring will be of great value to guide dietary and pharmacotherapy decisions. 58 Use of glycated haemoglobin as a screening tool, particularly in adults with stable lung disease on modulator therapy, will also need to be explored.

Epidemiology

The term "cystic fibrosis-related bone disease" is an umbrella term encompassing both low bone mineral density and/or fragility fractures in people with cystic fibrosis. Fracture rates in adult men and women with cystic fibrosis may be double those of the general population, 59 with fragility fractures affecting an estimated half of all adults with end-stage lung disease. 60 Rib and vertebral compression fractures can significantly compromise lung function, and severe osteoporosis can be a barrier to lung transplantation.

Risk factors

Multiple risk factors for cystic fibrosis-related bone disease exist. These include exposure to exogenous glucocorticoids, chronic infection and inflammation, malnutrition, vitamin D deficiency, decreased physical activity, hypogonadism, and presence of diabetes. Currently, dual-energy x-ray absorptiometry (DXA) is recommended for the assessment and monitoring of cystic fibrosis-related bone disease. 61

Pathogenesis

CFTR may be expressed on osteoblasts, osteocytes and chondrocytes and their progenitors, implying that dysfunction may have direct effects on bone. 62 In vitro experiments have demonstrated that CFTR dysfunction can increase osteoclast activity, 63 survival and differentiation 64 and can disrupt osteoblast function, 65 resulting in uncoupling of bone remodelling.

Management

The current management of cystic fibrosis-related bone disease involves optimisation of micronutrient status, including fat-soluble vitamin D and K supplementation, ensuring adequate calcium intake, avoidance or minimisation of glucocorticoid therapy, evaluation for modifiable secondary causes, and administration of bone preservation therapies. Bisphosphonates are effective in improving bone mineral density at both the hip and femur, but they have not been shown to significantly reduce fracture rates. 66 Intravenous administration may be preferred to minimise the risk of aspiration. Denosumab use has been limited due to risks of hypocalcaemia, infection, and the younger age of this population. Reduced sex steroids in both men 67 and women 68 have been described in cystic fibrosis; however, the benefits of hormonal replacement therapy on bone health have not been specifically evaluated. The evidence for anabolic therapy such as teriparatide and romosozumab is limited, 69,70 and these agents are often reserved for individuals with a complex fracture history who fail to respond to initial antiresorptive therapies.

Effects of modulators on cystic fibrosis-related bone disease

The clinical evaluation of CFTR modulators on bone health is sparse. Ivacaftor has been demonstrated to improve bone mineral density 63 and cortical bone area, thickness and porosity. 71
A pilot study demonstrated that weight gain improved bone mineral density and exercise capacity in nine individuals with cystic fibrosis (mean age, 18.6 years; standard deviation, 4.7 years) who were administered ETI, compared with cystic fibrosis controls. 71 In addition to potential direct effects of restoring CFTR function on bone health, indirect benefits may include improved absorption of vitamin D from the gastrointestinal tract, 72 increased exercise capacity, improved nutritional state, and reduced infection and inflammation. 73 Together, these early findings are encouraging and, given the tolerability and safety of modulator therapies, clinical improvements in cystic fibrosis-related bone disease should be anticipated. However, optimal clinical management for adults receiving modulator therapy with declining bone mineral density and/or osteoporotic fracture remains unclear, and research is needed to clarify whether adjunct use with currently available bone preservation therapies is safe and efficacious.

Reproductive health

With increasing life expectancy in the post-modulator era, adults with cystic fibrosis may desire fertility. Subfertility and infertility are common in people living with cystic fibrosis and can affect both men and women.

Male infertility

Heterozygosity and homozygosity for the Phe508del mutation correlate strongly with male infertility. 74 Congenital bilateral absence of the vas deferens leads to azoospermia, often requiring surgical sperm retrieval with intracytoplasmic sperm injection. Male hypogonadism due to chronic illness and growth and pubertal failure may also affect spermatogenesis. In contrast to congenital bilateral absence of the vas deferens, whether modifiable factors leading to male hypogonadotropic hypogonadism may be responsive to modulator therapies remains to be seen.

Female infertility

Up to 40% of adult women with cystic fibrosis may experience subfertility, which is higher than in the general population. 75 Menarche in females with cystic fibrosis can be delayed, 76 and secondary amenorrhoea due to chronic illness, malnutrition and low bodyweight is not uncommon. Abnormalities in cervical mucus due to CFTR dysfunction may also contribute to subfertility, with CFTR expressed in the cervical and endometrial epithelium and fallopian tubes. 77 Assisted reproductive technologies, such as intrauterine insemination, are used to overcome hostile cervical mucus, and gonadotrophic ovulation induction is offered to women with anovulation. With in vitro fertilisation, pre-implantation genetic testing of partner CFTR mutation carrier status can also be facilitated.

Pregnancy in women with cystic fibrosis

The rates of spontaneous miscarriage in women with cystic fibrosis appear comparable to the general population. 78 There is evidence that pregnancy overall does not affect survival in cystic fibrosis; 79 however, higher rates of Caesarean delivery, diabetes, preterm birth, congenital anomalies, 80,81 and pulmonary decline in the mother have been reported. 82

Effects of modulators on fertility

The effects of CFTR modulators on female fertility and on fetal and maternal outcomes during pregnancy and lactation have not been studied in clinical trial settings, as pregnancy was often an exclusion criterion. 83 The American Cystic Fibrosis Foundation Patient Registry report found pregnancy rates doubled in women of reproductive age in the first year after ETI initiation. 84
This also highlights the need for appropriate contraceptive counselling to avoid unplanned pregnancies, which may be relatively high in this population. 90

Effects of modulators on pregnancy and lactation

CFTR modulator therapies cross the placenta, and continued use in pregnancy needs to be balanced against known risks to the mother, especially pulmonary decline, and potentially unknown risks to the fetus. Modulator therapies are also secreted in breastmilk and, despite ivacaftor now being approved for children above six months of age, safety in lactation remains unclear. 91 Currently, case reports 91 and surveys 92 form the primary data source to elicit potential effects of modulator therapies on mothers with cystic fibrosis and their babies. A recent report of bilateral congenital cataracts in three separate neonates exposed to modulator therapies in utero and while breastfeeding 93 highlights the need for vigilance. The prospective MAYFLOWERS trial (ClinicalTrials.gov identifier NCT04828382), which evaluates maternal pulmonary function antenatally together with obstetric and neonatal outcomes in the era of modulators, will hopefully fill this evidence gap.

Future directions

Health care providers involved in caring for adults with cystic fibrosis need to distinguish between those who have had a marked clinical response to modulator therapies and non-responders. This will enable appropriate tailoring of clinical management. People with cystic fibrosis are a heterogenous population, and efforts are underway exploring metabolomic analysis and transcriptomics to help us better understand and predict outcomes 94 and move closer to precision medicine. There is also interest in developing models whereby characterisation and classification of CFTR variants will be according to their response to modulator therapies, a process called "theratyping". 95 Post-approval, real-world studies such as PROMISE (ClinicalTrials.gov identifier NCT04038047) will be integral in informing the cystic fibrosis community about the broader effects of modulator therapies. Currently, most funding for cystic fibrosis research in Australia remains discovery-based and targeted at gene therapy, CFTR modulation, and development of anti-infective agents. Research into the emerging endocrine and metabolic complications in cystic fibrosis is also needed to evaluate whether pharmacotherapies adopted for use in this population are safe and effective, inform evidence-based practice, advocate for equitable access to diabetes technology, and develop integrated models of care that can support clinicians outside dedicated cystic fibrosis centres. Although recent advances in therapeutics have resulted in greater optimism for many adults living with cystic fibrosis, modulator therapies are not a cure, and provision of ongoing complex care in a multidisciplinary setting is still needed.
Box 3. Available screening tests and their applicability for diagnosing cystic fibrosis-related diabetes: 31
• Oral glucose tolerance test (OGTT): ingestion of 1.75 g/kg (up to 75 g) of glucose in the fasting state, followed by evaluation of plasma glucose at at least the 0-minute, 60-minute and 120-minute time points. Used for diagnosis: yes; it is the most validated test, with longitudinal data linking results to outcomes.
• Continuous glucose monitoring (CGM): evidence is lacking for cystic fibrosis-specific CGM cut-points. 31
Surface-Enhanced Oxidation and Determination of Isothipendyl Hydrochloride at an Electrochemical Sensing Film Constructed by Multiwalled Carbon Nanotubes

The electrochemical behavior of isothipendyl hydrochloride (IPH) was investigated at bare and multiwalled-carbon-nanotube modified glassy carbon electrodes (MWCNT-GCE). IPH (55 μM) showed two oxidation peaks in Britton-Robinson (BR) buffer of pH 7.0. The oxidation process of IPH was observed to be irreversible over the pH range of 2.5-9.0. The influence of pH, scan rate, and concentration of the drug on the anodic peak was studied. A differential pulse voltammetric method with good precision and accuracy was developed for the determination of IPH in pure form and in biological fluids. The peak current was found to be linearly dependent on the concentration of IPH in the range of 1.25-55 μM. The values of the limit of detection and limit of quantification were found to be 0.284 and 0.949 μM, respectively.

Introduction

Since their discovery by Iijima [1], carbon nanotubes (CNTs), including single-walled carbon nanotubes (SWCNTs) and multiwalled carbon nanotubes (MWCNTs), have attracted much attention due to their unique structure and extraordinary properties [2]. CNTs possess subtle electronic properties, huge surface area, efficient catalytic activity, strong adsorption ability, high chemical and thermal stability, high elasticity, high tensile strength, and in some instances metallic conductivity [3,4]. The modification of electrode surfaces with MWCNTs for use in analytical sensing is well documented and has demonstrated the ability to promote the electron-transfer reactions of electroactive biomolecules [5-7]. These excellent properties suggest that the CNT is a fascinating electrode material, and it is now widely used in electrochemistry and electroanalytical chemistry [8-10].

Isothipendyl hydrochloride (Figure 1) is a phenothiazine-related drug with a broad range of clinical applications as an antipruritic for local and generalized allergic reactions and for radiation sickness [11]. It reduces vascular permeability and significantly reduces secretory activities. Large doses may cause drowsiness, nausea, and vomiting.

Several methods have been reported for the determination of IPH in forensic samples, pharmaceutical formulations and body fluids [12-15]. Because of its pharmacological importance and the lack of reports on its electrochemical behavior and analysis by voltammetry, we investigated the electrochemical behavior of IPH at a bare glassy carbon electrode (GCE) and at a multiwalled-carbon-nanotube modified glassy carbon electrode (MWCNT-GCE) in detail. Further, we developed a differential pulse voltammetric method for the determination of IPH in pure and biological samples.

A three-electrode cell, with the glassy carbon electrode as the working electrode, a platinum wire as the counter electrode, and Ag/AgCl (3 M KCl) as the reference electrode, was employed. For reproducible results and improved sensitivity and resolution of voltammetric peaks, the working electrode was polished with 0.05 micron alumina powder on a polishing cloth and then thoroughly rinsed with milli-pore water. All reported potentials are against Ag/AgCl (3 M KCl).

Reagents. MWCNTs were obtained from Sigma-Aldrich (>99%, 10-20 nm in diameter). Pure IPH was obtained from German Remedies Ltd. A stock solution of IPH (2.5 mM) was prepared in milli-pore water and stored in a refrigerator at 4 °C.
In the present study, BR buffer (pH 2.5-10.6) was used as the supporting electrolyte. All solutions were prepared in milli-pore water, and all other chemicals used were of analytical reagent grade.

Preparation of MWCNT-Modified GCE. MWCNTs were refluxed in concentrated nitric acid for about 5 h, filtered, washed with milli-pore water till the filtrate became neutral, and finally dried [16]. The MWCNT suspension was prepared by dispersing 2 mg of MWCNTs in 10 mL acetonitrile using ultrasonic agitation to obtain a relatively stable suspension. Before modification, the GCE was carefully polished with 1.0, 0.3, and 0.05 μm α-alumina on a smooth polishing cloth and then washed in methanol and water. The cleaned GCE was coated by casting 20 μL of the black suspension of MWCNTs and dried in air. After modification, the electrode was rinsed with water for about 5 min to remove any loosely adsorbed nanotubes.

Working Procedure. The MWCNT-GCE was first activated in BR buffer of pH 7.0 by cyclic voltammetric sweeps between 0 and 1.4 V till stable cyclic voltammograms were obtained. The modified electrode was then transferred into 10 mL BR buffer (pH 7.0) containing IPH (55 μM), and an accumulation time of 240 s was given. After this accumulation time, the electrode was used to record the cyclic voltammogram or differential pulse voltammogram. Working solutions were prepared by diluting the stock solution as required with BR buffer (0.04 M) of the required pH. For DPV, the following parameters were maintained: sweep rate 20 mV s−1, pulse amplitude 50 mV, pulse width 30 ms, and pulse period 500 ms. For analytical applications, oxidation peak a1 was selected. All electrochemical experiments were carried out at 25 °C. After every measurement, a new MWCNT-GCE was prepared.

Determination of IPH in Human Urine and Plasma Samples. Spiked urine samples were obtained by treating 0.9 mL aliquots of urine with 100 μL of IPH standard solution (2.5 mM) to obtain 250 μM IPH. A suitable aliquot of spiked urine was diluted with BR buffer, without any pretreatment, to prepare the appropriate sample solution, and the differential pulse voltammogram was recorded under optimized conditions. Spiked serum samples were prepared by following the procedure reported earlier [17]. Serum samples, obtained from healthy individuals (after having obtained their written consent), were stored frozen until assay. For the determination of IPH in plasma, 500 μL of IPH (2.5 mM) was added to 500 μL of untreated plasma. The mixture was vortexed for 30 s. In order to precipitate the plasma proteins, the plasma samples were treated with 250 μL perchloric acid (15%). After that, the mixture was vortexed for a further 30 s and then centrifuged at 5000 rpm for 5 min. An appropriate volume of supernatant liquor was transferred to the voltammetric cell containing BR buffer of pH 7.0, and voltammograms were recorded. The voltammogram of a sample without IPH did not show any signal that could interfere with the direct determination. The content of the drug in plasma was determined by referring to the calibration graph or regression equation.
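As a quick check on the spiking arithmetic described above, the dilution relation C1·V1 = C2·V2 reproduces the stated 250 μM spiked-urine concentration. A one-line Python calculation:

# Spiked urine: 100 uL of 2.5 mM IPH stock added to 0.9 mL urine (1.0 mL total)
stock_mM, v_stock_mL, v_total_mL = 2.5, 0.100, 1.000
spiked_uM = stock_mM * 1000 * v_stock_mL / v_total_mL  # mM -> uM conversion
print(f"spiked concentration = {spiked_uM:.0f} uM")  # 250 uM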
Cyclic Voltammogram of IPH at MWCNT-GCE. The cyclic voltammograms of 55 μM IPH at bare GCE and at MWCNT-GCE in BR buffer of pH 7.0, along with that of the blank, are shown in Figure 2. IPH showed two oxidation peaks, at 0.721 V (a1) and 0.958 V (a2), at bare GCE (Figure 2). No reduction peak was observed in the reverse scan, suggesting that the electrochemical oxidation of IPH is an irreversible process. At MWCNT-GCE, these oxidation peaks appeared at 0.696 V and 0.912 V, respectively, with a considerable enhancement in the peak current. Thus, negative shifts in peak potential of 25 mV and 46 mV were observed for peaks a1 and a2, respectively, suggesting that MWCNT exhibits a catalytic effect towards the electrooxidation of IPH [8,18]. Successive cyclic voltammograms were recorded to check the adsorption of the oxidation product of IPH on MWCNT-GCE. The oxidation peak currents of IPH decreased during successive scans and finally remained unchanged. This was attributed to the adsorption of the oxidation product of IPH on the modified electrode surface.

The oxidation peak current increased with the volume of MWCNT suspension cast on the electrode, as the surface concentration of IPH at MWCNT-GCE increased. With further increase in the volume, the oxidation peak current remained almost constant. Considering the peak current as well as the time needed for evaporation of acetonitrile, 20 μL of MWCNT suspension was used to modify the GCE surface.

Effect of Accumulation Time. Since the oxidation current is strongly dependent on the accumulation time, we examined the influence of accumulation time on the oxidation peak currents of IPH at the MWCNT-GCE. Accumulation of the drug on the electrode surface was done under open circuit potential for different time intervals, and cyclic voltammograms were then recorded at a scan rate of 50 mV s−1. On increasing the accumulation time from 0 to 240 s, the oxidation peak currents increased remarkably (figure not shown). However, the oxidation peak currents decreased slightly with a further increase in the accumulation time, suggesting that the amount of IPH tends to a limiting value at the MWCNT film. Considering the sensitivity and working efficiency, an accumulation time of 240 s was maintained throughout.

Effect of pH. The electrochemical behavior of IPH in BR buffer of different pH values was studied. At pH 3.5, the voltammogram of IPH was almost identical to that of promethazine (PMZ), owing to the close similarity in structure [19]; IPH differs from PMZ only in one benzene ring, which is replaced by a pyridine ring in IPH. Two oxidation waves were seen on the initial scan, and no reduction peak was observed. As in the case of another phenothiazine derivative, ethopropazine, the peak potential of a1 (of IPH) was pH dependent, indicating the involvement of a proton in the oxidation process [20]. With an increase in pH from 2.5 to 7.0, the oxidation peak currents of a1 and a2 gradually increased at MWCNT-GCE (Figure 3). On further increasing the pH to 9.0, the oxidation peak current of a1 gradually decreased, with broader and ill-defined peaks. Apparently, the oxidation signals of IPH were most sensitive in the buffer of pH 7.0. We also investigated the effect of pH on the oxidation peak potential. With an increase in pH from 2.5 to 7.0, the oxidation peak potential gradually shifted to more negative values, suggesting that protons are involved in the oxidation of IPH. The plot of peak potential of a1 versus pH showed linear segments intersecting at about pH 8.0. This intersection point was found to be close to the pKa value of IPH (8.6) [21] and could be attributed to changes in the protonation of acid-base groups of the molecule. The slope was found to be 29.6 mV/pH, which is close to the reported value for PMZ [22,23] and to the theoretical value for a two-electron, one-proton transfer reaction. Thus, it can be concluded that the electrode reaction mechanisms of IPH and PMZ are identical, at least over the pH range of 2.5-7.0.
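The theoretical slope quoted above follows from the Nernstian relation dEp/dpH = (2.303RT/F)(m/n) at 25 °C, where m is the number of protons and n the number of electrons transferred. A short Python check, assuming m = 1 and n = 2 as the text concludes, reproduces the ~29.6 mV/pH value:

# Theoretical peak-potential shift with pH at 25 degrees C
R, T, F = 8.314, 298.15, 96485.0  # J/(mol K), K, C/mol
m_protons, n_electrons = 1, 2

slope_mV_per_pH = 2.303 * R * T / F * (m_protons / n_electrons) * 1000
print(f"theoretical slope = {slope_mV_per_pH:.1f} mV/pH")  # ~29.6 mV/pH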
Effect of Scan Rate. Useful information on the electrochemical mechanism can be acquired from the relationship between peak current and scan rate. Therefore, the electrochemical behavior of IPH in BR buffer of pH 7.0 at different scan rates was studied, and the results are shown in Figure 4. The peak current was observed to be proportional to the scan rate, indicating that the electrode process is adsorption controlled [24]. A linear relationship was observed between log Ipa and log υ, as per equation (1):

log Ipa (μA) = 0.8255 log υ − 3.5906. (1)

The slope of 0.83 (obtained from the plot of log Ipa versus log υ) is close to the theoretical value of 1.0 for an adsorption-controlled process [25,26]. The Epa of the oxidation peak was also noticed to be dependent on the scan rate; the peak potential shifted to more positive values with increasing scan rate. A linear relationship was observed between Epa and scan rate (correlation coefficient 0.9883), indicating the irreversibility of the oxidation process.

Analytical Applications

Well-resolved differential pulse voltammetric curves were obtained in BR buffer of pH 7.0. Under the optimized conditions (sweep rate 20 mV s−1, pulse amplitude 50 mV, pulse width 30 ms, pulse period 500 ms), a linear relation between the peak current and the concentration of the drug was observed in the range of 1.25-55 μM IPH (Figure 6). Beyond an IPH concentration of 55 μM, the linearity was lost. The differential pulse voltammograms of different concentrations of IPH are shown in Figure 5. The plot of Ipa versus the concentration of IPH showed linearity over the concentration range of 1.25-55 μM, with a correlation coefficient of 0.9931. The corresponding linear regression equation, with C in μM, and the other characteristics of the calibration graph are recorded in Table 1.

The limit of detection (LOD) and limit of quantification (LOQ) were calculated based on the peak current as LOD = 3s/m and LOQ = 10s/m [27,28], where s is the standard deviation of the intercept (n = 5) of the calibration plot and m is the slope of the calibration curve. The LOD and LOQ values were found to be 0.28 μM and 0.94 μM, respectively. The interday reproducibility of the method was examined by recording voltammograms of 5 replicates of 5 μM, 20 μM and 50 μM IPH; these yielded RSD values of 1.28, 1.42, and 1.53%, respectively. Further, the RSD values for intraday assay reproducibility at 5 μM, 20 μM, and 50 μM solutions (n = 5) were found to be 1.12, 1.36, and 1.15%, respectively. The corresponding results are shown in Table 1. The low values of both LOD and LOQ confirmed the sensitivity of the proposed method, and the low RSD values revealed its good precision for the assay of IPH.
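A minimal sketch of how LOD = 3s/m and LOQ = 10s/m can be evaluated from a calibration line is shown below; the concentration-current pairs are fabricated for illustration and are not the paper's calibration data.

import numpy as np

# Illustrative calibration data: concentration (uM) vs peak current (uA)
conc = np.array([1.25, 5.0, 10.0, 20.0, 40.0, 55.0])
ipa = np.array([0.137, 0.517, 1.023, 2.036, 4.062, 5.582])

# Least-squares fit I_pa = m*C + b, with the covariance of the coefficients
coeffs, cov = np.polyfit(conc, ipa, 1, cov=True)
m, b = coeffs
s_intercept = np.sqrt(cov[1, 1])  # standard deviation of the intercept

lod = 3 * s_intercept / m
loq = 10 * s_intercept / m
print(f"slope = {m:.4f} uA/uM, LOD = {lod:.2f} uM, LOQ = {loq:.2f} uM")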
Effects of Interferents. The selectivity of the proposed method was examined by studying the effects of interferents, namely glucose, sucrose, starch, acacia powder, ascorbic acid, and talc. For this, differential pulse voltammograms of 2.5 μM IPH were recorded in the presence of different concentrations of interferents. The results of the effects of interferents on the peak current of IPH are shown in Table 2. Ascorbic acid did not interfere with the peak current of IPH up to a 12-fold excess, while acacia powder, talc, and starch showed no effect on the peak current up to a 20-fold excess. Further, glucose and sucrose did not exhibit any interference up to a 32-fold excess. These results indicated that the proposed method is selective for the determination of IPH; hence, IPH can be readily determined in the presence of the above interferents.

The recoveries from urine samples were examined by spiking drug-free urine with known amounts of IPH and recording the differential pulse voltammograms. The calibration graph was used to determine the concentration of IPH in the urine samples. The results of the analysis are listed in Table 3. The high average recovery (99.37-100.07%) and low RSD values (less than 1.51%) highlighted the good recovery and reproducibility of the results. The applicability of the proposed method was also examined by analyzing IPH in plasma samples. Suitable amounts of IPH-spiked serum samples were diluted with supporting electrolyte, and differential pulse voltammograms were recorded. The amount of IPH in the serum samples was then determined by referring to the calibration plot. The results incorporated in Table 3 indicated good recovery of IPH. The proposed method is simple, easy to perform, and sensitive enough for the assay of IPH in human serum samples.

Conclusions

A multiwalled-carbon-nanotube-modified glassy carbon electrode was developed as an electrochemical sensor for the assay of IPH, based on the enhanced peak current responses of the oxidation of IPH. This novel sensing system for IPH was found to be convenient and showed excellent analytical characteristics, such as a significant lowering of the detection limit, higher sensitivity, and better selectivity. The method provides a simple approach for the determination of IPH in spiked human urine and serum samples without any pretreatment. The principal advantage of the proposed method is its freedom from interference by excipients.

Figure 3: Plot of Ipa versus pH for the electrooxidation of IPH on MWCNT-GCE in BR buffer at a scan rate of 50 mV s−1.
Figure 6: Relation between Ipa and concentration of IPH on MWCNT-GCE at a scan rate of 50 mV s−1.
Table 1: Characteristics of the calibration plot of IPH.
Table 2: Effects of interferents in the determination of 2.5 μM IPH at MWCNT-GCE.
Table 3: Results of analysis of IPH in spiked urine and serum samples at MWCNT-GCE.
Looking Beyond Invisibility: Rohingyas' Dangerous Encounters with Papers and Cards

State registration and identity documents are often promoted as a way to lift an individual out of the condition of statelessness and begin to redress their deficit of rights. This paper looks beyond invisibility to differentiate between the types of visibility that are produced by documents and registration. Drawing on Rohingyas' historical experiences of documentation and registration in Myanmar, it explores the meanings that Rohingyas attach to their identity documents and asks what contributions these narratives can make to understandings of identity documents in statelessness studies. It concludes that, in order to ensure the principle of 'do no harm', international approaches to statelessness could better factor in the lived experiences of the documented, undocumented and redocumented.

Introduction

Citizenship and statelessness are often associated with ideas of who is "visible" and "invisible" to states and to the law, who has been "counted" and who remains "uncounted", who is "documented" and "undocumented", and who is "registered" and "unregistered". State registration and ID papers are often promoted as a way to lift an individual out of the condition of statelessness and begin to redress their deficit of rights. This paper looks beyond invisibility to differentiate between the types of visibility that are produced by documents and registration. Drawing on Rohingyas' historical experiences of documentation and registration in Myanmar, it asks: How can the meanings that Rohingya attach to their identity documents contribute to understandings of visibility and identity documents from within statelessness studies? In answering this question, I draw on qualitative narrative research undertaken as part of my research with Rohingya participants. I demonstrate that Rohingyas' understandings and lived experiences of being registered and documented in Myanmar speak to different academic approaches to identity documents. In order to do this, I identify thematic areas within Rohingya narratives that converge with three of these academic approaches. These thematic areas are the emancipatory, repressive, and destructive powers of documents.

State-issued identity documents are material objects of law that can frame human experience, generate multiple meanings, and describe social identities. 1 It is through rich ethnographic description that these meanings can be represented effectively within academic research. 2 Therefore, these above-mentioned thematic areas are illustrated through the personal history of one Rohingya man named Mohammed and his three identity documents. Through discussion of the power of documents, I argue that identity documents do not merely prevent and reduce statelessness but can also produce and reproduce it in multiple ways. The lived experience of being registered and documented relates not only to being seen or unseen by the state but also to how one is seen and for what purpose. I suggest that approaches aiming to reduce statelessness through registration and documentation have primarily drawn on understandings of documents as emancipators. Incorporating deeper understandings of the ways in which documents and registration can also
The second section of this paper provides a background to Rohingya statelessness in Myanmar, as well as related research. I explain the relevance of an improved understanding of the lived experiences of registration and documentation practices. The third section sets out my methodological approach. It describes how qualitative social science research approaches state-issued documents as a series of encounters between individuals and the state. Thereafter, the research methods employed for the purposes of this paper are explained. The fourth section retells the story of Mohammed's encounters with the state through three identity documents. Referring back to Mohammed's account and the relevant academic literature, it identifies three powers attributed to identity documents-emancipatory, repressive, and destructive. The fifth section focuses on resistance to state power through identity documents. Finally, the paper concludes by suggesting that understanding statelessness as being more complex than invisibility to the state and the law may lead to a more critical and effective appraisal of the use of registration and the issuance of identity documents to redress the rights deficits associated with statelessness. Bureaucratic Cleansing: Myanmar's Citizenship Law and Documentation Practices In this section, a brief overview of the research on Rohingya statelessness in Myanmar is provided. I explain how an improved understanding of the lived experiences of historical registration and documentation practices can enhance study in this field. Rohingyas' have generally been described as having been stripped of their citizenship through the enactment of the 1982 Citizenship Law. However, the broader processes of discrimination and persecution that are both a symptom and cause of their statelessness may better be understood through the study of Rohingya encounters with their state-issued identity documents. 3 Multiple studies, generally using a human rights approach, have provided a legal analysis of the citizenship law in Myanmar. They largely focus on the areas in which the law fails to comply with international standards. 4 Some studies have also identified areas within the law which could be used to advocate for an increased number of individual Rohingya to gain access to different types of citizenship by relaxing the administrative restrictions and expanding the scope of provisions within the existing law. 5 At the time of writing, the impact of such approaches on access to citizenship since the transition from a military government in 2010 has been negligible. 6 The 1982 Citizenship Law created a hierarchy of citizenship with 'full citizenship' at the top. 7 In order to qualify for full citizenship, one is required to either be a member of one of the national ethnic groups, or to have both parents who are citizens. The list of official national ethnic groups is decided at the complete discretion of the Council of the State (1982 Citizenship Law section 4). The acquisition of nationality through other means became excessively burdensome under the 1982 Citizenship Law and almost impossible for Rohingya populations to access. 8 The list of ethnic groups changed from an opened ended and loosely defined notion used in previous citizenship laws 9 to a list of fixed ethnicities that was produced without consulting the population of Myanmar and often did not bare much relation to the ways in which groups on the ground self-identified. 
10 Rohingya were not included as a group in this list, despite having been recognized as a national ethnic group in various other ways by the Myanmar state prior to this time. 11 The 1982 Citizenship Law, then, effectively changed the criteria for citizenship from a combination of ethnic origin and long-term residency to one based almost solely on ethnic origin. 12 The law had been drafted and enacted following the failed mass expulsion of approximately 200,000-230,000 Rohingya in 1978-1979; Myanmar was forced to take the vast majority back due to international pressure. 13 The timing of the law, combined with the reported confiscation, removal, and destruction of Rohingyas' documents immediately prior to the expulsions and on return, strongly suggests targeted attempts to denationalize Rohingya as part of a broader bureaucratic cleansing process. 14

Recently, legal scholarship on Rohingya statelessness has shifted its focus away from the content of the 1982 Citizenship Law alone and onto state practices that occurred both inside and outside of domestic law and policy. These studies note that practices relating to the documentation of Rohingya effectively prevented them from accessing citizenship. As former citizens of Burma, Rohingyas should still be entitled to citizenship. 15 The International Fact-Finding Mission report further shifted emphasis from the content of the Citizenship Law to state practices of seizing, removing and not issuing identity documents that occurred both before and since the enactment of the Citizenship Law. 16 The report noted how the arbitrary implementation of the law violated domestic law, international human rights law, and the principles of the rule of law and legal certainty. 17 It also emphasized that the law engendered discrimination and prejudice at the societal level and recommended an overhaul of Myanmar's citizenship law, 18 while further noting that recent attempts to document Rohingya under the nationality verification process have run in tandem with state violence that the Mission concludes may amount to crimes against humanity and genocide. 19

The study of Rohingya encounters with identity and state-issued documents can provide insights into how they understand and experience the nature of the state and the law in Myanmar, as well as the meanings they attach to citizenship beyond the documents that recognize it. It also enables researchers to understand the forms and acts of resistance, collaboration, and negotiation in which Rohingya take part. Lastly, it provides a lens through which to explore the agency of Rohingya and to draw on their own analyses and lived experiences to gain a greater understanding and better interpretation of the historic events relating to the production of Rohingya statelessness.

State-Issued Documents in Social Science Research

In this section, a description of the way in which qualitative social science research approaches state-issued documents, as both material objects of law and anthropological objects, is given. State-issued identity documents in statelessness studies are most frequently referred to as evidence or proof of whether a state recognizes individuals as citizens or not. 20 Securing access to the correct state-issued identity documents for individuals, through various means, forms a cornerstone of approaches to preventing and reducing statelessness. Commonly, the producers of documents claim that their products represent realities or facts that exist in the world outside of the processes that produce them.
21 However, identity documents are not simply neutral records or purveyors of externally available facts about individuals, such as citizenship status, gender, or ethnicity. Documents are crucial tools for states to build and maintain power by establishing a monopoly on the control over freedom of movement and access to rights and benefits. 22 States embrace particular populations while excluding or 'Othering' others in ways that produce noncitizens. State-issued categories and associated documents and registration processes do not merely describe or represent particular identities; they also bring identities into being. 23 In some cases, they also push identities out of being, reify them, or destroy them. 24

Some literature, influenced by anthropological and sociological thought, views documents not only as instruments of state power, but also as the interface between state power and individual subjectivities. Encounters between humans and state-issued documents can provide important insights into citizenship practices, the production of statelessness, and the nature of the state. Studies that view documents as both material objects of law and governance and as anthropological objects have analyzed state formation and identities in Northern Cyprus, for Tibetan refugees in India, and in disaster situations in Pakistan. 25 Within this body of work, documents are not only evidence of citizenship but also artefacts and mediators between individual subjects and the world. 26 Sadiq notes that identity and citizenship documents carry affect. 27 For example, beyond the details recorded on them, they can carry notions such as belonging and loyalty. Hull identifies multiple broad and interlinked approaches to how humans encounter bureaucratic documents within contemporary literature, including those that emphasize affect or emotion relating to the moments of encounter with documents, and those that emphasize signs or describe 'the way documents link to people, place, things, times, norms, and forms of sociality'. 28 These research approaches can add depth and richness to the study of citizenship.

Encounters with documents and experiences of documentation processes are particularly prominent aspects of Rohingya oral histories, narratives, and analyses pertaining to the past forty years of their persecution in Myanmar. Oral histories relating to state authorities and documents describe perilous encounters which may result in physical harm, the symbolic destruction of one's group identity, and sometimes death. Accounts of documentation processes frequently feature state theft, deception, humiliation, and force. Documentation processes are also sites of resistance, through which Rohingya resist the (re)production of their statelessness and group destruction. Their narratives describe unity, heroism, tragedy, loss, and sometimes shame. They narrate the stories of "missing" documents that have been lost through confiscation, destruction, nullification, and targeted non-issuance, and of enforced and unwanted documents that have been issued as part of genocidal violence. 29

Research Methods

My research employs narrative and ethnographic research methods to explore the slow production of Rohingya statelessness in Myanmar since 1978. The research focuses on what identity papers signify to people, the shifting meanings attached to registration processes, and the meanings that are attributed to citizenship beyond state-issued identity documents.
Research participants were Rohingya who had been displaced from Myanmar and were living in camps and other diasporic communities. In order to reach Rohingya from the various waves of forced migration from Myanmar since 1978, field work was conducted between August 2017 and December 2018 amongst Rohingya populations in Bangladesh, Malaysia, India, and Europe. Methods included focus groups, in-depth interviews, and observations from visits to significant sites and community events. There were 100 research participants, 9 focus groups, 61 interviews lasting between 20 minutes and 3.5 hours, and 23 sets of observations based on separate events. The collected data was then coded and analyzed thematically, including for its narrative content. 30

For the purposes of this paper, which explores the notions of visibility and invisibility, I drew on a set of themes that emerged from my broader research relating to the different positive and negative powers attributed to identity documents. The emancipatory, repressive, and destructive powers of identity documents were themes that recurred throughout the data. In order to represent and illustrate these themes, which occur more broadly in the research, I have chosen to use one Rohingya man's account of three identity documents. Narrative research and rich description are effective ways to present what documents signify to holders and the meanings that individuals attach to various interactions with the state. As such, a single in-depth narrative, when employed as part of a much wider body of research, can provide a clear explanation and illustration of both the structures and subjectivities involved in documentation processes, as well as insights into everyday processes and practices. 31

Mohammed's story is one of many. It is not one of the more dramatic accounts, and he does not present events that occupy the extremities of human rights abuses or state-directed harm that some Rohingya have experienced. I selected his story because it binds together different periods of the history of state registration of Rohingya into one single narrative, and because his views and understandings of each era occupy a middle ground that is broadly representative of the other Rohingya who participated in this research. Mohammed's story describes the three powers that he, other Rohingya, and academic literature from various fields attribute to ID cards.

Three Powers Attributed to State-Issued ID Cards

In this section, the story of Mohammed's encounters with the state through three identity documents is retold. I then examine how documents feature in statelessness and social science literature, referring back to Mohammed's accounts to provide context. His accounts are used to illustrate some of the recurring themes within my qualitative research relating to visibility and invisibility. The purpose is to identify the relevance of various social science analyses to the Rohingya situation and to point towards the importance of further analysis of the meaning of documents to communities affected by statelessness. Three themes relating to the power of identity documents are identified. The first draws primarily from statelessness literature and relates to the emancipatory power of documents. The second focuses on the repressive powers of documents found in literature on surveillance and securitization. The third draws on notions from within the sociology and anthropology of genocide relating to the destruction and reorganization of national and ethnic identities.
29 Natalie Brinham, '"Genocide cards": Rohingya refugees on why they risked their lives to refuse ID cards' (openDemocracy, 21

A Tale of Three Identity Cards

There are three state-issued identity documents 32 that shaped the events of Mohammed's life as a Rohingya in Buthidaung of North Rakhine State in Myanmar and map his journey to the refugee camp in Bangladesh, where he has lived since October 2017. 33 The first is a hidden and treasured document, the second a nullified and removed document, and the third an enforced document. As international agencies and governments deliberate over solutions to the mass displacement of Rohingya into Bangladesh, Mohammed's personal history with these three documents also shapes his own visions of his family's future, of the possibilities and dangers of returning to his home in Buthidaung. He recounts the stories of these documents as a series of increasingly dangerous encounters with the Myanmar State.

The first document, hidden and treasured, is endowed with intergenerational belonging despite its lack of legal value and is now also furnished with the personal suffering and sacrifices that Mohammed has made to maintain his group identity and keep this ID card. It is his father's "three-fold card" or National Registration Card (NRC) that was issued to Rohingya in post-independence Myanmar before military rule. It is the same document that other citizens of Myanmar carried before 1989. 34 Mohammed's father, aware that many Rohingya had had these documents confiscated, destroyed, or removed and never returned, kept it well hidden long after the time when it had ceased to be of any practical use. Before his death in 2005, he instructed Mohammed to keep it safe, as it might one day be useful to prove that he and his family belonged to the country. Mohammed paid heed to this advice, and when the security forces came to his village with a list a year later to collect all the old NRCs, he denied all knowledge of the card and refused to disclose its hiding place. As a result, Mohammed was arrested and sentenced to three years in prison. The charges bore little relation to his "offence": they were immigration charges. In return for payment of a large fee, he was able to have his sentence reduced to seven months. Upon being released, he had still not revealed the location of the ID card. When Mohammed's village was torched in 2017, he grabbed the ID card from its hiding place, hid it on his body, and fled. With him, the ID card endured the difficult journey to the camps of Bangladesh, where he still has it. He explained that, just in case, he has not even disclosed the location of the card to his wife. He is adamant that, regardless of how they try to persuade him, he will never show this card to Myanmar State officials.

The second document, which was nullified and removed, had been laced with false promises of future citizenship recognition from the military government when it was issued to Mohammed in the middle of the 1990s. It was his Temporary Registration Card (TRC), also known as a "white card". Mohammed described how white cards had been used by the Myanmar State to "trick" both Rohingya and the United Nations Refugee Agency (UNHCR), which was working in Rakhine State at the time and was advocating for documentation for Rohingya. 35 The Agency had been led to believe, or hope, that citizenship cards would be issued to Rohingya in the future.
White cards had been issued to all Rohingya after the mass forced repatriations from Bangladesh in the 1990s instead of the citizenship cards issued to people of other ethnic identities in Myanmar. 36 The white card's meaning and significance in Myanmar's society transmuted over the decades as the political system changed. Mohammed's experiences of the apartheid system in North Rakhine State since the 1990s were locked in by the ID card, shaping his everyday life. As such, in order to visit his family in a neighboring village, take his cattle to the market, and get married, he needed his white card. With it, he negotiated the obstructive bureaucracy and the harsh restrictions imposed on daily life. Ultimately, with the card, he became an object of an all-pervasive and repressive state surveillance system. Later, the card also bore his shattered dreams of civic participation in Myanmar's political transition. Despite their otherwise repressive qualities, white cards carried voting rights and the right to stand for public office, rights which Rohingya had enjoyed since Myanmar's independence. These political rights were of minimal significance during the years of military rule but gained significance when national elections, hailed as an "opening-up" of the country, were finally held from 2008 onwards. In 2015, the white cards were voided and collected, marking the end of Rohingya participation in national elections. Thus, when Mohammed handed his white card back to the state authorities, his emotions were mixed. Although the ID card had singled him out for state repression, he also felt a sense of loss and a foreboding for his community's future.

The third document that shaped the events of Mohammed's life was the enforced one. Mohammed identified the enforcement of this card as an immediate driver of his expulsion to Bangladesh. It has not been issued to him yet, as Mohammed and almost all Rohingya in his village refused to accept it, but the threat of its issuance in a future return to Myanmar still circulates in worried conversations throughout the camps of Bangladesh. It has been described by some as a "tool of genocide". 37 This document is the National Verification Card (NVC) that the Myanmar authorities have been trying, largely unsuccessfully, to issue to Rohingya since 2015. 38 For Mohammed, the NVC is inscribed with the intent of the Myanmar State to destroy his group identity.

32 Mohammed was interviewed on September 27, 2018 in a refugee camp in the Cox's Bazar area of Bangladesh. All names and references to villages of origin have been changed or omitted to ensure the confidentiality and safety of research participants.

33 There exists a fourth document, the household list, which is also important in Rohingyas' lives. However, I have chosen to focus on three different 'identity documents' for the purposes of clarity and conciseness.

34 NRCs were issued in Myanmar from the 1950s until the end of the 1980s. Although most of the cards contained the words 'holding this certificate shall not be considered as conclusive proof of citizenship', the cards were effectively used across the country as national identity cards. Foreigners were required to carry different documents known as Foreigner Registration Certificates (FRCs). Rohingya were not issued with FRCs but with NRCs. These allowed them to access the same rights as other citizens across the country, including freedom of movement within Myanmar.
It singles his people out as foreigners who need to apply for citizenship, 39 not, as he feels, as a community that belongs to Rakhine State of Myanmar. Government promises that the NVC could potentially lead to citizenship in the future fell flat for Mohammed, not just because of the similar promises and deceptions attached to his other documents in the past, but also because he believes that a future citizenship card that marks him out as "Bengali", or anything other than a member of a recognized ethnic group of Myanmar, would not secure his safety, let alone secure him access to the equal rights that other citizens enjoy. 40 It would also be a source of shame: an act of "collaboration" with a regime that is intent on destroying Rohingya as a group. 41

When civil authorities, immigration, police, and border guard police organized meetings in his community about the NVCs in 2016, Mohammed and other villagers refused to participate. They were told that under the law, they would not be allowed to stay in the country if they did not comply with the nationality verification procedure. They were fearful but determined to stand united against the issuance of these cards, against being labelled as "foreign". The pressure mounted. Without the NVC, Mohammed could no longer pass through the seven checkpoints between the land where he grazed his cattle and the market where he traded them. He was forced to sell his cattle at a discounted price. His trading business was no longer viable. Without a livelihood, it was difficult to provide for his family. Nonetheless, he refused to comply. The number of visits to his village by the armed forces increased. Afraid that they would be forced to accept the NVC card or be arrested, he would hide in the forest along with the other men of his village, even at night. On one such night, while sleeping in the forest, Mohammed's older brother was bitten by a snake and, unable to access medical attention, passed away. Mohammed described him as a victim of the NVCs.

37 NVCs were repeatedly referred to as 'tools of genocide' during two focus groups that I conducted in a Kutupalong refugee camp in August 2018.

38 These are formally called the Identity Card for National Verification (ICNV). However, they are generally referred to as NVCs. These NVCs were piloted in Rakhine State in 2015 and rolled out in 2016. They were piloted with the term 'Bengali' written on them and were largely rejected by Rohingya. The term 'Bengali' was later removed, and no other term was included in its place. However, Rohingya continued to reject the cards. In numerous focus groups and interviews, Rohingya said they felt concerned that they were being registered as Bengali 'behind the NVC card' anyway.

39 "Classification" and "symbolization" are the first two stages of genocide, whereby people are organized and made to stand out according to race, religion, or nationality. See Genocide Watch, 'The Ten Stages of Genocide' (2013) http://www.genocidewatch.org/genocide/tenstagesofgenocide.html accessed on 13 May 2019.

40 The citizenship cards that in some cases would be issued to Rohingya following nationality verification use the term 'Bengali' and do not carry the same rights as full citizens in Myanmar enjoy. Research participants noted that Rohingya who have accepted these cards have not been able to claim their rights and have become the target of threats and intimidation from Rakhine Buddhist populations.
41 'Shame', 'selling out' Rohingya resistance, and 'collaboration' were repeated themes in interviews relating to feelings about being forced to accept the NVC cards.

While the men hid in the forest, the women left behind in their homes were subjected to abuse by the armed forces, the kind of abuse about which Mohammed said that he simply could not speak. 42 Mohammed associated each repressive or violent encounter with the armed forces between 2016 and 2017 with the enforced issuance of NVC cards, even the last, in which his friends were killed and his village was torched. Each refusal to accept the NVCs and each act of defiance against these documents, he associated with Rohingyas' organized resistance against their long-running persecution. These were acts of civil disobedience against unjust documentation processes. In many Rohingya oral histories and biographic accounts, the absence of the NVC cards amongst Rohingya populations was explained as stories of group endurance, unity, and sometimes heroism.

The Emancipatory Powers of Identity Documents

Identity cards and state-issued documents are described as both emancipatory and repressive in academic literature. They can enable people to access freedoms, rights, and benefits, but they are also crucial tools available for states to exert (excessive) control and implement systems of surveillance over populations. 43 Mohammed's attachment to his hidden and treasured document relates to the state recognition and the attached bundle of rights that his family enjoyed in the past, as well as to the hope that he and his children may one day also access rights equal to those of other citizens of Myanmar. Many Rohingya narratives tell of the extraordinary lengths to which they are prepared to go to retain possession of similar documents.

The emancipatory qualities of identity documents and state registration procedures are at the heart of human rights approaches to reducing and preventing statelessness. Although everyone is entitled to human rights by virtue of being human, within the international state system states are responsible for upholding those rights. 44 Stateless people find themselves unable to claim their rights from any state and fall through the cracks of this system. Citizenship, then, is understood as "the right to have rights", whereby a lack of citizenship leads to a depletion of many other human rights. 45 In extreme circumstances, being rendered stateless and being cast outside the international system without any recourse to state protection can lay the conditions that enable mass atrocities or genocides to occur. Hannah Arendt, for example, describes how the production of statelessness in the Europe of the 1930s and 1940s preceded the Holocaust and laid conditions that contributed to it. 46 Thus, state-issued documents can provide a practical method for human beings to access the rights and protections to which they are entitled. 47 For "unregistered" or "undocumented" people, documents or registration procedures can provide evidence of existing citizenship or otherwise form part of a body of evidence that can be used to plot a person's way on a trajectory towards future citizenship. From that citizenship, then, other rights and responsibilities can flow. 48 Citizenship and statelessness are often associated with ideas of who is "visible" and "invisible" to states and to the law, who has been "counted" and who remains "uncounted", who is "documented" and "undocumented", and who is "registered" and "unregistered".
49 Documents and registration procedures can move individuals from a state of invisibility into one of visibility. The idea of documents and registration as being central to people's practical ability to access their rights is reflected in United Nations Sustainable Development Goal 16.9, which aims to 'provide legal identity for all', as well as in Objective 4 of the Global Compact on Migration, which seeks to ensure that all migrants (and nationals) have 'proof of legal identity and adequate documentation'. 50 It has also formed the mainstay of approaches to preventing and reducing statelessness, whereby stateless persons, or persons at risk of statelessness, are identified and made visible to the state with the ultimate goal of ensuring documentation by the state as evidence of citizenship, from which the rights available to citizens can follow. 51

The Repressive Powers of Identity Documents

Recent literature on statelessness has criticized such increased promotion of national registration and documentation as potentially entrenching the exclusion of particular populations. Academics have warned that increasing the focus on registration and documentation as a solution to a lack of rights or to statelessness, without addressing the underlying causes of discrimination, has the potential to do more harm than good. 52 Manby, for example, has pointed out that national registration processes in Africa are in danger of "locking in statelessness" and creating a system that, by placing increased weight on documentation in order to access rights and benefits, creates further exclusion for those without documents. 53 This is the case for those facing increasing difficulty in accessing health and education services in Uganda and Tanzania. Further, people who were previously described as being "at risk of statelessness" become stateless under the new registration schemes. Manby provides the example of Sudan and Mauritania where, she argues, new national registration processes have been used as tools to denationalize particular sections of the populations. In Mauritania, non-Arabic-speaking populations have been left off the national register, an issue described by activist groups as "biometric genocide". 54 Likewise, India's National Register of Citizens (NRC) has increasingly come under international scrutiny regarding issues of exclusion. A large number of people in Assam and other areas of India who were unable to produce the correct documents to evidence their Indian citizenship have been excluded from the NRC. In Assam, this has disproportionately impacted groups who are Muslim or of Bengali origin. At the same time, biometric technology in the hands of the state has reduced the opportunities available to people of informal or uncertain legal status in the country, placing increasing significance on the documents carried by individuals and excluding undocumented or unregistered people from basic services and work. 55

These criticisms and warnings from statelessness scholars about the potential negative consequences of the increased international emphasis on securing "legal identities for all" are highly relevant to the Myanmar context. As the story of Mohammed's white card demonstrates, far from providing access to rights, documentation and registration processes can have deeply negative impacts on people's lives. The issuance of NVCs since 2015 in Rakhine State has morphed into a process which he experienced not only as repressive but also as persecutory.
Yet, from the mid-1990s until today, securing state-issued documents and registration has been integral to the internationally proposed solutions to the long-running human rights crisis in Rakhine State. 56 These approaches are based on the same premise: that documents can help to plot Rohingyas' way towards citizenship and emancipation. Consequently, it is not only state authorities but also international agencies that feature in Rohingya narratives relating to their encounters with identity documents. It is often with strong emotions that Rohingya relate their understandings of the role of UN agencies and foreign governments in supporting, promoting, or failing to prevent state documentation and registration that they associate with state persecution. 57

Throughout Rohingya narratives, there are frequent examples of oppression related to birth registration, registration of livestock, household registration, and registration of minor alterations to homes and buildings. Thus, for Mohammed and other Rohingya, the condition of statelessness was very different from invisibility. They were not legally invisible in Myanmar; rather, the problem described was one of hypervisibility to state authorities. From Mohammed's perspective, which was reiterated by many others, the problem was the nature of the law itself and the malicious intent of the law-makers and security forces. Unlike Mohammed's father's ID document, the documents he was issued or refused to accept never contained those elusive emancipatory qualities, only the oppressive ones. His white card made him an object of state surveillance, whereby his every movement and activity beyond his village could be monitored. He was subject to a set of persecutory policies that only applied to holders of the card. 58 For Mohammed, the NVC card carried with it a whole different set of perils. The dangers of the NVC cards did not relate so much to invisibility or hypervisibility but to their capacity to further damage or destroy Rohingya group identity. They had the power to further impose and consolidate a national identity in Myanmar and in Rakhine State which had no symbolic or physical space for certain minority groups within the nation, most pertinently Rohingya.

The Destructive Powers of Documents

Identity documents do not only embrace populations into the national fold; they also serve a function in the (re)documenting, (re)counting, (re)categorizing, and (re)organizing of national identities. Mohammed and other Rohingya were not "uncounted"; they were "recounted" as something they were not: Bengali and foreign. Their problem was not that they were "undocumented" or "unregistered"; they were "redocumented" and "recategorized" in ways that radically changed the ways in which they could function and interact with the world. Rohingyas' resistance against the NVCs shows that the wrong kinds of documents can be much worse than no documents at all. For Mohammed and other Rohingya, documents were neither a neutral record of life's events, such as birth, marriage, or death, nor did they purvey "facts" about them, such as their country of origin or ethnicity. To Mohammed and almost all participants in this research, NVCs were viewed as an attempt to symbolically and materially destroy their ethnic and national identity.
The use and function of documents, registration, and categorization to reshape or reorganize social relations and individual and group identities is a feature of social science literature on national and ethnic identities. This body of work draws on the understanding that all ethnic, national, and social identities are fluid and relational because identities are socially constructed. They are not fixed and immutable, as race was understood to be in colonial-era literature, but shift based on a mixture of structural and subjective factors. There is a rich body of literature, for example, relating to how state categories and administrative structures both influenced and reified ethnic identities, particularly as a result of colonial-era policies and practices. 59 State-issued documents, registration, and the associated categorization of populations do not always reflect how populations self-identify. Yet the categories and documents imposed on populations frame their experiences and interactions with the world around them. Studies and critiques of Rohingya identities have often exceptionalized the fluidity of their ethnic identity, emphasizing political calculations on the part of Rohingya leaders, without taking note of the equally fluid identities of other groups in Rakhine State and Myanmar. 60 They have rarely taken into account the relational and structural factors that frame and shape these ethnic identities. 61 State-issued documents are both a reflection and a material part of those structures that frame social relations and ethnic identities. 62 Documents have the power to reorganize social relations and profoundly change the lived realities of the (re)documented.

Building on these notions, there is a growing body of literature that understands documents not only as the material objects of law and policy but also as anthropological objects around which identities and lived realities are organized. 63 In cases where the power relations between the state and the population are highly unequal and where dialogue and consultation are quashed, as they are in Myanmar following almost fifty years of military rule, the power of documents and state categories to frame people's lived experiences is much greater. Equally, the power of new technologies related to registration and documentation, such as biometrics, can further consolidate the power of the state. 64 This makes it harder to escape or resist recategorization. Identity documents are not simply neutral records or purveyors of externally available facts about individuals, such as citizenship status, gender, or ethnicity, as is sometimes claimed. State-issued documents and categories do not just describe or represent particular identities; they also shape, change, and reify them. Longman's work on the role of identity cards that preceded the genocide in Rwanda, for example, shows how state categorizations cemented ethnic identities within heterogeneous communities, such as the Hutu and Tutsi, and ultimately enabled genocidal violence. 65

The producers of documents are generally state authorities. Players in the private sector, specializing in biometric and blockchain technologies, are also increasingly becoming producers. 66 Additionally, in the case of Rohingya or other camp-based populations like refugees and internally displaced people (IDPs), international agencies like the UNHCR and the International Organization for Migration (IOM) are also involved in the production of documents and the ultimate (re)categorization of populations.
67 Sometimes, international organizations work on the premise that documents, rather than being objects that imbue state power, represent external facts about people. 68 They frequently note the state's power to exclude populations from its documentation processes but do not take account of the power of the state's documents to recategorize and reorganize societies and rewrite histories. 69

Anthropological and sociological studies of genocide generally focus on the symbolic destruction of groups that accompanies their physical destruction. 70 They often also note that the ultimate goal of a genocide may be to reorganize national or social identities in new ways that consolidate the power of dominant groups. These reorganized identities do not always reflect the historical or demographic reality on the ground. This phase of genocide is what Raphael Lemkin, who coined the term, referred to as 'the imposition of the national pattern of the oppressor', which occurred alongside 'the destruction of the national pattern of the oppressed'. 71 This process is also referred to in the work of genocide scholar Feierstein as "symbolic enactment". 72 Mohammed's story, similar to many other accounts, reflects how integral documents are to this type of reorganization of national identities. Laws and documents punctuate Myanmar's changed national vision of its State, which on the eve of independence in 1948 set out to strategically embrace multi-ethnic and warring populations and then, following the onset of military rule, sought to remove and repress certain populations and instill a new or reworked form of Buddhist nationalism. 73 Mohammed's story describes documentation, registration, and the production of his statelessness as part of a broader state-led process involving mass atrocities. Mohammed's story is not so much one in which his statelessness precedes the physical destruction of his ethnic group but one in which the production of his statelessness accompanies his group's symbolic and physical destruction. This notion of identity cards destroying one's identity was one of the most prominent themes arising from my research.

As noted in the retelling of Mohammed's story, Rohingya visions of the future, of the possibilities and dangers of potential repatriations to their homes in Rakhine State, are shaped by their personal histories with their documents and, in particular, their experiences with the enforced issuance of NVCs. The news that Rohingya who had been deported from India in 2018 were issued NVCs upon returning to Myanmar circulated through the refugee camps and settlements of India and Bangladesh, causing concern. 74 The available draft of the Memorandum of Understanding on repatriations between the Myanmar government, the UNHCR, and the United Nations Development Programme (UNDP) includes reference to nationality verification on return to Myanmar under the existing law in section 15. 75 Although a small number of "naturalized" citizenship cards have been issued to Rohingya as a result of the nationality verification process, these cards contain the word "Bengali" and thus are often associated not with belonging but with identity destruction and shame. 76 These factors increase the concerns and insecurities of Rohingya in the refugee camps, who worry about how their safety can be ensured on return to Myanmar. At the time of writing, various UN and international agencies were lending support to the continued issuance of NVCs in Rakhine State.
77 The desirability of registration and documents in human rights approaches to statelessness, then, can sometimes converge with the interests of the perpetrating state in ways that are understood, by those being documented, as producing harm. Approaches that continue to support the issuance of NVCs neither take account of the nature of state power in Myanmar nor consider the function and power of documents themselves. Documents do not only include and exclude, make visible or render invisible. The issuance of documents does not solely lead to emancipation, nor is the non-issuance of documents the only way in which states use them to repress. Documents can bring identities into and out of being, reorganize them, and destroy them.

Imbuing Documents with Resistance to State Power

Section 4 considered how identity documents are imbued with state power and how these structures of power frame an individual's lived experiences of the world. However, these processes are often also resisted and subverted. It is evident from Mohammed's narrative about his encounters with the state authorities over the NVCs that resisting these identity cards was not a form of "voluntary statelessness", as has been the case in other forms of resistance to state power. 78 Rohingya were not choosing to be undocumented or invisible. They were resisting the reorganization of state categories and the destruction of their identity. The issuance of cards became a key site at which Rohingya could resist the state's power to further entrench their recategorization. The organizing, community retelling, and recounting of these acts of resistance give the identity cards an emotional salience and significance that transcend the words or contents of the documents themselves. Whilst documents may be tools of state power, they are also material objects of the state that individuals and groups can use to resist, subvert, negotiate, and cooperate with that power. 79 The absence of NVCs in Mohammed's community is not characterized by emptiness. In Rohingya narratives, it is an absence filled with suffering, sacrifice, agency, and resistance. The emotional investments of resistance occupy that space, flavoring narratives with talk of unity against the odds, bravery, endurance, and heroism. For Mohammed, the way in which he is documented, both on return to Myanmar and in the camps of Bangladesh, is not just a matter of accessing rights but also of group survival and valor. Attempts by international organizations to ensure the "voluntariness" of Rohingya repatriations may also need to take account of the decades-long role that identity cards have played in Rohingyas' struggle against state persecution.

Conclusion: Seeing Beyond the Visibility/Invisibility Dichotomy

Mohammed's three documents provide insights into how Rohingya have encountered different state-issued identity documents over the years. His story is merely one of many oral histories that reflect similar themes. Together, they can plot the bureaucratic and administrative processes and practices that have occurred before and after each major cycle of mass expulsion and mass repatriation. Mohammed's first identification document, his father's NRC, provided the kind of visibility that features in the sustainable development goals and in approaches to preventing statelessness; it was one that carried status, rights, and belonging, or national identity.
80 The second ID card, the white card, did not provide rights but facilitated state control and surveillance, reflecting a different set of literature on documents and securitization. 81 The third identity document was one which destroyed and reorganized social identities in ways that did not reflect demographic realities in Rakhine State. This document better reflects the literature from the sociology and anthropology of genocide. 82

Mohammed and other Rohingya experienced the production of their statelessness not as invisibility, but as reclassification and targeted identity destruction. Their experiences of persecution and genocide were not preceded by statelessness and invisibility. They were experiences in which the production of statelessness was a slow process that was integrated into state-led and state-perpetrated violence and in which the symbolic destruction of their group became inseparable from the physical destruction. Documents do not merely prevent and reduce statelessness; they also produce and reproduce it in multiple ways. The significance of registration and documentation is much more complex than being visible or invisible, included or excluded, registered or unregistered, and documented or undocumented.

In this article, I have argued that statelessness scholarship can be enriched by drawing on approaches from multiple social science disciplines to understand the role of documents in producing, reducing, and preventing statelessness. These approaches enable us to understand documents and registration as more than neutral statements about facts on the ground; namely, as material objects that can change social relations and social realities, thus possessing both emancipatory and repressive qualities. In this regard, documents are much more than tools to move people from a state of invisibility into a state of visibility. They relate not only to whether people are seen or unseen by the state and the law but also to how and for what purpose they are seen. Whilst registration and the issuance of documents can be important ways to lift people out of statelessness and enable them to access rights, interventions by international agencies could better factor in the lived experiences of the documented, undocumented, and re-documented in order to effectively ensure the principle of "do no harm".
Halophilic Bacterium-A Review of New Studies

Halophilic bacteria are organisms which thrive in salt-rich environments; salt lakes, solar salterns, and salt mines contain large populations of these organisms. In biotechnology, such salt-tolerant bacteria are widely used for the production of valuable enzymes. More than a thousand years ago, humans began using salt to cure and thereby preserve perishable foods and other materials, such as hides; halophiles can be detrimental to the preservation of salt-brine-cured hides. The aim of this review is to provide an overview of the taxonomy of these organisms, including novel isolates from rock salt, and also to discuss their current and future biotechnological and environmental uses.

Halophiles are microorganisms that can adapt to growth in moderate and high salt concentrations and in biotechnology are extensively used for a number of applications, including the production of valuable enzymes. Despite this, halophiles have remained a somewhat neglected group of bacteria. Halophiles cover all three domains, namely Archaea, Bacteria, and Eucarya, and contain representatives of many different physiological types, adapted to a wide range of salt concentrations up to salt saturation. Earlier reviews have discussed the possible applications of halophiles in biotechnological and environmental processes; DasSarma and Arora (1997), 1 for example, have provided information on the limited current and potential practical uses of these organisms.

Halophiles have developed two different adaptive strategies to cope with the osmotic pressure induced by the high NaCl concentration of the environments in which they live. Halobacteria species and some extremely halophilic bacteria, for example, accumulate inorganic ions (K+, Na+, Cl−) in the cytoplasm in order to balance the osmotic pressure of the medium. Additionally, they have developed specific proteins that are stable and active in the presence of salt. In contrast, moderate halophiles accumulate within the cytoplasm large amounts of specific organic osmolytes, which function as osmoprotectants, providing osmotic balance without interfering with normal cellular metabolism.

Biotechnological applications of halophiles include the production of compatible solutes, biopolymers, and carotenoids; they have also been evaluated for use in various environmental bioremediation processes. In addition to being intrinsically stable and active at high salt concentrations, halophilic enzymes can be used in food processing, environmental bioremediation, and biosynthetic processes; as a result, discovering novel enzymes showing optimal activities at various ranges of salt concentration, temperature, and pH is of considerable potential economic importance. Such enzymes from halophiles can also be used in industrial applications which do not involve high salt concentrations, since such exoenzymes can usually tolerate high temperatures and are stable in the presence of organic solvents.
Halophiles

Halophiles are salt-loving organisms that inhabit hypersaline environments. The group includes mainly prokaryotic and eukaryotic microorganisms with the capacity to balance the osmotic pressure of the environment and resist the denaturing effects of high salt concentrations. Normally, organisms living in salt-rich environments lose water and die as the result of osmosis. In order to survive in salt-rich environments, the cytoplasm of halophiles must be isotonic with the environment. 2 In order to reach this state, they use two different methods. In the first (mainly used by bacteria, some archaea, yeasts, algae, and fungi), organic compounds are stored in the cytoplasm; such compounds help the organism survive osmotic stress. 3 The solutes most commonly used in this process are neutral amino acids and sugars. 4 An important disadvantage of this method is that it requires the organism to use considerable amounts of energy. The second, and less common, adaptation to salt involves the selective intake of potassium (K+) ions into the cytoplasm. In exchange, the organism pumps sodium (Na+) ions out with the help of the sodium-potassium pump. 5 Ions of sodium may also be used, but less frequently than potassium. 6 This adaptation is used by only one order of bacteria and a single family of Archaea. 7 An advantage of this approach is that it uses much less energy than the previously mentioned adaptation. The main disadvantage is that all of the machinery within the cell (enzymes, structural proteins, etc.) must be adapted to high levels of inorganic ions and high salt levels; such an approach turns out to be much more demanding than the use of compatible solutes. Most halophiles use only one of these two approaches, although a few can use both.

Taxonomy and phylogeny of halophiles

The first halophilic fermentative bacterial species, Halanaerobium praevalens, was isolated from the sediments of the Great Salt Lake (Utah) and characterized in 1983, 8 and placed firmly in the family Bacteroidaceae as a genus with uncertain affiliation. 8 The characterization of H. praevalens was followed by the isolation and characterization of Halobacteroides halobius in 1984 from the sediments of the Dead Sea; similarities in the 16S rRNA sequences of the two halophilic fermentative bacteria were observed, leading to the placement of the species in a new family, namely the Haloanaerobiaceae. 9,10 Between 1987 and 1995, five new halophilic fermentative genera, Halothermothrix, Halocella, Acetohalobium, Halanaerobacter, and Orenia, were characterized and placed in the family Haloanaerobiaceae. 12,13,14,15,16 Eventually, in 1995, a new order (named Haloanaerobiales) for the halophilic fermentative bacteria was proposed. 17 In addition, as the result of taxonomic studies and phylogenetic analyses of the halophilic fermentative bacterial genera, a novel family (named Halobacteroidaceae) was proposed, and the genera Halobacteroides, Acetohalobium, Halanaerobacter, Sporohalobacter, and Orenia were re-assigned to this novel family. 17

The extremely halophilic archaea (also called haloarchaea or, traditionally, ''halobacteria'') belong to the order Halobacteriales, which contains only a single family, the Halobacteriaceae. 18 Since the publication of Bergey's Manual of Systematic Bacteriology (2001), which listed 14 recognized haloarchaeal genera, the number has increased to 19 genera,
according to the International Committee on Systematics of Prokaryotes (http://www.theicsp.org). The number of validated species currently stands at 57. One taxonomic criterion for the identification and recognition of haloarchaea is the sequence of the 16S rRNA genes, with specific signature sequences and signature bases as detailed by Kamekura et al. (1982), 19 who also recommend the determination of 23S rRNA gene sequences for further refinement.

The currently recognized genera and species of the family Halobacteriaceae are listed in Table 1; also shown are the data bank accession numbers for the 16S rRNA gene sequences. The composition of membrane polar lipids has long been used as one of the key chemotaxonomic criteria for the differentiation of haloarchaeal genera. 19,20 All haloarchaea examined to date possess ether-linked phosphoglycerides; phosphatidyl glycerol and phosphatidyl glycerol phosphate methyl ester are always present; many strains contain phosphatidyl glycerol sulfate and one or more glycolipids or sulfated glycolipids. 18 Most glycerol ether core lipids contain C20C20 (diphytanyl) isoprenoids, although some strains, notably haloalkaliphiles, also possess C20C25 (phytanyl-sesterterpanyl) or C25C25 (disesterterpanyl) isoprenoid chains. Halobacterial taxonomy based on polar lipid composition has proved to be remarkably consistent with phylogenetic data deduced from 16S rRNA gene sequence comparisons. 18 Halobacteria (haloarchaea) are a monophyletic group, with the most distantly related species showing a 16S rRNA gene sequence similarity of 83.2%. 18 The methanogens, another archaeal group, are their closest relatives, with less than 80% 16S rRNA gene sequence similarity. 21 The complete list of required and recommended criteria for the determination and recognition of haloarchaeal species has been proposed. 22 Three genomes of haloarchaea have been sequenced, namely Halobacterium salinarum NRC-1, 23 Haloarcula marismortui, 24 and Natronomonas pharaonis. 25

Industrial Application of Halophilic Microorganisms

The production of solar salt

Traditional salt making by evaporation of seawater in shallow ponds in coastal areas takes place in the tropics and subtropics. 26 As the brine approaches saturation and salt starts to crystallize, the brine becomes red in color. Three types of halophilic microorganisms contribute to this color: extremely halophilic Archaea (family Halobacteriaceae) containing 50-carbon carotenoids (bacterioruberin and derivatives) and sometimes also the retinal protein bacteriorhodopsin (see below); the β-carotene-rich unicellular green flagellate alga Dunaliella salina; and the red halophilic bacterium Salinibacter ruber, which contains a unique C40-carotenoid acyl glycoside. In the less saline ponds where the earlier evaporation stages take place, dense microbial mats develop on the bottom, composed of a range of different cyanobacteria as well as many other types of microorganisms.
The importance of these microorganisms in the salt-making process only started to be recognized in the 1970s, when it was realized that microorganisms play a role in determining the quality and quantity of the salt harvested, a realization which led to the development of biological management practices for the operation of solar salterns. 27,28 Microbial processes, it seems, influence the size and quality of the salt crystals formed in solar saltern crystallizer ponds. At some sites, large solid halite crystals precipitate that are easy to process and yield a high-quality product, while elsewhere crystals are soft, with a high content of entrapped mother liquor, making them difficult to harvest and to purify; where seawaters of nearly identical composition exist, biological processes may account for these differences by influencing evaporation and/or crystallization.

Halophiles and Fermented Foods

Large amounts of salt are used in the preparation of certain types of traditionally fermented foods. Such salt-rich food products are especially popular in the Far East. Examples include 'jeotgal', a traditional Korean fermented seafood; the Japanese 'fugunoko nukazuke', prepared by fermentation of salted puffer fish ovaries in rice bran; and 'nam-pla', a Thai fish sauce. The latter product is made by mixing two parts of fish with one part of marine salt. The mixture is covered with concentrated brine (25-30% NaCl) and allowed to ferment for around a year. Surprisingly, relatively little is known about the microorganisms involved in the preparation of these foods and about the roles they play in the production process. In some cases, the salt concentration during the fermentation process is sufficiently high for the development of Archaea of the family Halobacteriaceae. The first halophilic archaeon obtained from Thai fish sauce (nam-pla) was an isolate resembling Halobacterium salinarum, 29 and two new species, Halococcus thailandensis and Natrinema gari, were recently isolated. 30,31 Halalkalicoccus jeotgali is a novel isolate obtained from shrimp jeotgal. 32

Halophiles and Biopolymers

Biosurfactants and other biopolymers have been produced using non-fermentative halophilic bacteria, but not fermentative halophiles, a result which may reflect the fact that fermentative halophilic strains have been relatively little studied. Biosurfactants are biopolymers which are able to decrease surface tension in liquids, thereby increasing the mobility of hydrophobic hydrocarbons. They have been used in the bioremediation of oil-contaminated hypersaline soil or water, and may also be useful for in situ microbially enhanced oil recovery. This is based on the fact that many petroleum reservoirs are hypersaline and exist at high temperature, making it likely that halothermophilic bacteria could find a practical use in this technology. 33
Halophiles and Enzyme Production

Halophilic microorganisms produce stable enzymes, including many hydrolytic enzymes such as DNases, lipases, amylases, gelatinases, and proteases. Such enzymes are able to function under high concentrations of salt, which would normally lead to the precipitation or denaturation of most proteins, including enzymes. Most halophilic enzymes are inactivated and denatured at NaCl concentrations below 1 M. Enzymes produced by halophilic fermentative bacteria, in contrast, are salt tolerant and, indeed, salt-requiring, due to the need of these organisms to maintain high intracellular ion concentrations to balance the osmotic pressure in hypersaline environments. Most interest in relation to halophilic enzymes has been devoted to isomerases and hydrolases, including amylases, which catalyze the bioprocessing of starch, and galactosidases, which catalyze the bioprocessing of lactose. Salt-requiring enzymes have been cloned and produced as inactive forms in Escherichia coli and subsequently successfully activated by increasing the salt concentration. 34

Halophiles and Alternative Energy

Environmental issues, notably climate change, are currently of common public concern, and there is an urgent and constant need for alternative renewable energy sources, such as bioethanol, biobutanol, and biohydrogen. Butanol production by halophilic fermentative bacteria has yet to be reported. Many halophilic fermentative species are known to produce ethanol; however, to the best of our knowledge, halophilic ethanol production has not been studied in detail.

Hydrogen is a renewable and clean source of energy and is thus considered an important potential future energy carrier, particularly since it has a very high heat value and is readily combustible, with water being the sole end product, making it an environmentally friendly future energy source. Biological hydrogen production in hypersaline environments would not only be sustainable but also well protected from contamination by non-hydrogen-producing organisms; halophilic hydrogen-utilizing methanogens, for example, rarely contaminate bioprocesses. 2,7 As a result, the sterilization costs incurred before halophilic hydrogen production can begin are likely to be minimized, particularly where high volumes are employed. Hydrogen production using glycerol has been achieved using H. saccharolyticum. 35 The highest hydrogen yields were achieved by using 2.5 g/l glycerol and 150 g/l salt at pH 7.4.
Halophiles and Food Biotechnology

The use of halophilic fermentation has a number of advantages in relation to the production of salt-containing food. The fermentation products give taste, aroma, and flavor, and it has been shown that the production of acetate as a fermentation product protects food from contamination with spoilage yeasts. 33 Halophilic or halotolerant fermentative bacteria have been used to produce a wide variety of food products, notably fermented fish, shrimp, meat, fruits, and vegetables (pickles), Asian fish and meat sauces, rice noodles and flours, and Indonesian soy sauce. 33,36,37 Most of the bacteria reported to be involved in food production are non-obligate halophiles, including species of the genera Lactobacillus, Halobacterium, Halococcus, Bacillus, Pediococcus, and Tetragenococcus. 33,36,37 In addition to the fermentation products, halophilic bacteria have been used to produce dietary supplements, such as polyunsaturated long-chain fatty acids, and colorants, such as β-carotene. 38,39,40 Polyunsaturated fatty acids are vital for human nutrition and have traditionally been added to food in the form of fish oil, which, however, might give the food an undesired taste or odor, 33 a problem which may be avoided by the use of halophile-derived fatty acids.

Production of β-carotene by Dunaliella

The cultivation of the green algae Dunaliella salina and D. bardawil for the production of β-carotene remains the most important application of halophile biotechnology. 41,42 The first pilot plant for the mass culture of Dunaliella was introduced during the mid-1960s in Ukraine, and since then commercial Dunaliella growing operations have been set up in a number of countries, particularly for the production of the pigment β-carotene, which is in high demand as an antioxidant, a source of pro-vitamin A (retinol), and a food colorant. The antioxidant properties of β-carotene also make it a popular health food supplement. This compound is present in many algae and higher plants and can also be synthesized chemically; the chemical product, however, differs from that produced by Dunaliella in that the synthetic form consists solely of trans β-carotene, while the algal product also contains a high percentage of 9-cis β-carotene, which is a more effective quencher of singlet oxygen and other free radicals than the pure trans form. Not all authorities, however, are convinced that the addition of β-carotene to the human diet is necessarily beneficial. Both D. salina and D. bardawil produce large amounts of β-carotene when grown under suitable conditions, the pigment being found concentrated in small globules between the thylakoids of the cell's single chloroplast. The major environmental conditions stimulating the accumulation of this pigment are high light intensities, high salinity, and nutrient limitation; the slower the cells grow in the presence of high irradiation levels, the more pigment is formed. Some strains may then produce more than 10% of their dry weight as β-carotene. 43
A variety of technologies are used to grow β-carotene-rich Dunaliella biomass in various countries, including Australia, the USA, China, and Israel. These approaches vary from cultivation in large lagoons to intensive growth systems at high cell densities under carefully controlled conditions. In extensive open pond systems, no mixing is applied, and the growth conditions are poorly controlled. Intensive cultivation of Dunaliella, on the other hand, is a high-technology operation in which all parameters are controlled. Using 3000 m² shallow (20 cm deep) paddlewheel-driven raceway ponds, an average yield of 200 mg β-carotene per m² per day can be obtained 44 . A two-stage operation is often advantageous. First, a large biomass is produced by the addition of high nutrient levels. Under these conditions the cells produce only a small amount of β-carotene. In the second stage, nitrate limitation is induced to stimulate carotenogenesis 44 . Predatory ciliates sometimes cause losses in outdoor mass cultures of Dunaliella, and different strategies have to be used to minimize the problem. Finally, superintensive cultivation systems in closed bioreactors can be used, which allow for the production of high cell densities with cells having a high β-carotene content that is enriched in the 9-cis β-carotene isomer 45,46 .

Glycerol production by Dunaliella
Glycerol can also be produced by the alga Dunaliella 47 , with cells grown in near-saturated NaCl solutions containing as much as 6-7 M intracellular glycerol. Mass cultivation of Dunaliella for the commercial production of glycerol is effective 48 , but because of the low price of glycerol produced by other methods (as a byproduct of the manufacturing of animal and vegetable oils), and due to the high cost of harvesting the algal cells, no commercially feasible process has yet been developed.

Environmental Applications of Halophilic Bacteria
Biodegradation of heavy oils
A halophilic bacterium, strain TM-1, was isolated from the reservoir of the Shengli oil field in East China and shown to be able to degrade crude oils. It is a gram-positive, nonmotile coccus which grows at up to 58°C and in an 18% NaCl solution. The strain was found to be a facultative aerobe capable of growth under anaerobic conditions. Moreover, it produces butylated hydroxytoluene, 1,2-benzenedicarboxylic acid bis-ester, and dibutyl phthalate and can use a variety of organic substrates. Laboratory studies showed that strain TM-1 can degrade and change the properties of the oil, and growth on heavy oils can lead to a loss of aromatic hydrocarbons, resins, and asphaltenes 49 .

Decolorization of textile azo dyes
Among 27 strains of halophilic and halotolerant bacteria isolated from effluents of textile industries, three showed a marked ability to decolorize widely utilized azo dyes. Phenotypic characterization and phylogenetic analysis based on 16S rDNA sequence comparisons showed that these strains belong to the genus Halomonas. The three strains can decolorize azo dyes over a wide range of NaCl concentrations (up to 20% w/v), temperatures (25-40°C), and pH (5-11) after 4 days of incubation in static culture; they are also able to decolorize a mixture of dyes. These strains also readily grow in and decolorize high concentrations of dye (5000 ppm) and can tolerate concentrations of up to 10,000 ppm of the dye. Decolorization appears to be due to biodegradation by a reduction of the azo bond, followed by cleavage 50 .
Biological waste treatment
The potential of halophilic anaerobic fermentative bacteria for use in the anaerobic treatment of saline waste waters has been reported 51,52 . For this purpose, halophilic fermentative bacteria have many advantages over 'conventional biological treatment systems'. For example, they can operate at high salt concentrations, may tolerate heavy metals, and are capable of degrading a wide range of organic compounds 8,53,54,55 . According to Oren et al. (1992) 56 , halophilic microorganisms play a major role in the biodegradation of pollutants in hypersaline environments. The potential of two halophilic fermentative bacteria, H. praevalens and O. marismortui, to biodegrade substituted aromatic compounds including nitrobenzene, o-nitrophenol, m-nitrophenol, p-nitrophenol, nitroanilines, 2,4-dinitrophenol, and 2,4-dinitroaniline was confirmed 56 . In addition, the biodegradation of many nitro-substituted aromatic compounds (initial concentrations 50-100 mg/l) was completed within 24 h. Other compounds which can be degraded by halophilic microorganisms include saturated and aromatic hydrocarbons (by archaeal members), and aromatic compounds, organophosphorus compounds, and formaldehyde (by eubacterial members) 57 . Finally, halophiles may find an important role in the degradation of PCBs 58 .

CONCLUSION
To date, halophilic microorganisms have found relatively few commercially viable applications. Demand for salt-tolerant enzymes in current manufacturing or related processes is currently limited, but may grow in the future. This review has highlighted other uses of halophilic microorganisms, including their use in the treatment of saline and hypersaline wastewaters, and in the production of exopolysaccharides, poly-β-hydroxyalkanoate bioplastics, and biofuels. Many of these processes have yet to be fully exploited, but the future use of halophiles in biotechnology looks very positive.
A SNP Harvester Analysis to Better Detect SNPs of CCDC158 Gene That Are Associated with Carcass Quality Traits in Hanwoo
The purpose of this study was to investigate interaction effects of genes using a Harvester method. A sample of Korean cattle, Hanwoo (n = 476), was chosen from the National Livestock Research Institute of Korea; the animals were sired by 50 Korean proven bulls. The steers were born between the spring of 1998 and the autumn of 2002 and reared under a progeny-testing program at the Daekwanryeong and Namwon branches of NLRI. The steers were slaughtered at approximately 24 months of age and carcass quality traits were measured. A SNP Harvester method was applied with a support vector machine (SVM) to detect significant SNPs in the CCDC158 gene and interaction effects between the SNPs that were associated with average daily gain, cold carcass weight, longissimus dorsi muscle area, and marbling score. The statistical significance of the major SNP combinations was evaluated with χ²-statistics. The genotype combinations of three SNPs, g.34425+102A>T(AA), g.4102+36T>G(GT), and g.11614+19G>T(GG), had a greater effect than the rest of the SNP combinations, e.g. 0.82 vs. 0.75 kg, 343 vs. 314 kg, 80.4 vs. 74.7 cm², and 7.35 vs. 5.01, for the four respective traits (p<0.001). Also, the estimates were greater compared with single SNPs analyzed (the greatest single-SNP estimates were 0.76 kg, 320 kg, 75.5 cm², and 5.31, respectively). This result suggests that the SNP Harvester method is a good option when multiple SNPs and interaction effects are tested. The significant SNPs could be applied to improve meat quality of Hanwoo via marker-assisted selection.

INTRODUCTION
Detection of genes or single nucleotide polymorphisms (SNPs) for economically important traits has been extensively performed in farm animals, and so far 5,920 quantitative trait loci (QTL) in cattle have been reported from 315 publications (www.animalgenome.org). Most important traits in farm animals are multi-factorial, i.e. influenced by the interaction of multiple genes and environmental factors. Recently, advanced SNP genotyping technologies such as high-throughput SNP chips have become available, e.g. the bovine Illumina 770k or Affymetrix 640k SNP arrays. To evaluate whether any SNP is associated with a trait of interest, a large number of SNPs needs to be considered simultaneously, e.g. by fitting the SNPs into the conventional Animal model, which may yield over-parameterization problems. To handle high-order dimensional data, a multifactor dimensionality reduction method was proposed to efficiently detect multiple genes and interaction effects between the genes (Ritchie et al., 2001; Cho et al., 2004; Su et al., 2012). The method was designed to address high-dimensional data and to uncover complex relationships without relying on models that fit multiple gene interactions in a parametric fashion (Bastone et al., 2004). Yang et al. (2009) developed a new genetic interaction approach, the SNPHarvester method, to reveal gene interactions and interaction-interaction relationships between a large pool of genes. However, the method was applied only to binary data in a case-control study. Previously, association studies between the CCDC158 gene and growth and carcass traits in Korean cattle, Hanwoo, were performed under linear models, in which single SNP or haplotype (additive) effects were fitted (Lee et al., 2008; Lee and Lee, 2009; Lee et al., 2010).
In this study, a SNPHarvester method with a support vector machine was applied to detect significant SNPs in the CCDC158 gene and interaction effects between the SNPs that were associated with growth and carcass quality traits in Hanwoo.

MATERIALS AND METHODS
A sample of Korean cattle, Hanwoo (n = 476), was chosen from the National Livestock Research Institute (NLRI) of Korea. The steers, which were sired by 50 Korean proven bulls, were born between the spring of 1998 and the autumn of 2002 and reared under a progeny-testing program. All steers were fed under a tightly controlled feeding program at the Daekwanryeong and Namwon branches of NLRI. The steers were castrated at six months of age and each set of four individuals was raised in a pen (4 m × 8 m). After six months of age, they were fed concentrates consisting of 15% crude protein (CP)/71% totally digestible nutrients (TDN) for a period of 60 to 90 d; 15% CP/71% TDN for a period of 180 days; and 13% CP/72% TDN for a period of 90 to 120 days of self-feeding. Roughage was offered ad libitum, and steers had free access to fresh water throughout the entire period. After two years, the steers were slaughtered. Average daily gain (ADG) was measured between 6 and 24 months of age. After slaughter, the carcass was chilled for 24 h and cold carcass weight (CWT) was measured. Also, longissimus dorsi muscle area (LMA) and marbling score (MS) were measured according to the standards of the Korean Animal Product Grading Service. The means and standard deviations of ADG, CWT, LMA, and MS were 0.752 ± 0.089 kg, 316.8 ± 34.5 kg, 75.3 ± 8.1 cm², and 5.61 ± 4.18, respectively.

SNP genotyping
Genomic DNA was extracted from white blood cells using the phenol-chloroform method (Sambrook and Russell, 2001). A total of 19 polymorphic SNPs of the coiled-coil domain containing 158 gene (CCDC158; Gene ID 534614) were obtained according to Lee et al. (2010). For the SNP genotyping, primers for the amplification and extension were designed for single-base extension (Vreeland et al., 2002). Primer extension reactions were conducted using the SNaPshot ddNTP Primer Extension Kit (Applied Biosystems, Foster City, CA, USA). For the cleanup of the primer extension reaction, one unit of SAP (shrimp alkaline phosphatase) was added to the reaction mixture, and this mixture was incubated for 1 h at 37°C, followed by 15 min at 72°C for enzyme inactivation. DNA samples containing extension products and the Genescan 120 LIZ size standard solution were added to HiDi formamide (Applied Biosystems, Foster City, CA, USA) in accordance with the manufacturer's recommendations. The mixture was incubated for 5 min at 95°C, followed by 5 min on ice, after which electrophoresis was conducted using the ABI PRISM 3130XL Genetic Analyzer. The results were analyzed using GeneMapper v4.0 (Applied Biosystems, Foster City, CA, USA).

SNP Harvester method with a support vector machine
A support vector machine (SVM), a statistical algorithm, has the advantage of solving nonlinear regression problems by restructuring high-dimensional spatial data into linear regression functions (Vapnik, 1998). A soft-margin technique adopting slack variables was applied (Figure 1), which allowed for a hyperplane with minimal misclassification and soft margins (Tan et al., 2006). In the SVM model, an RBF (radial basis function) kernel was used, for which RBF gamma 0.1 was set as a default parameter from Modeler 14 (IBM-SPSS, ex-Clementine) and ten was set as the regularization parameter.
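As a rough illustration of the classifier setup described above, the following minimal Python sketch reproduces the reported SVM settings (RBF kernel, gamma = 0.1, regularization parameter 10) with scikit-learn. The study itself used IBM SPSS Modeler 14, so the library, the 0/1/2 genotype encoding, and all variable names here are illustrative assumptions rather than the authors' actual workflow.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical data: 476 steers x 19 SNPs, genotypes coded 0/1/2
X = rng.integers(0, 3, size=(476, 19))
# Binary class label (the four continuous traits were converted to
# binary values before fitting, as described in the text)
y = rng.integers(0, 2, size=476)

# Soft-margin SVM with an RBF kernel, mirroring the reported parameters
svm = SVC(kernel="rbf", gamma=0.1, C=10.0)

# Classification accuracy was one of the score functions used
acc = cross_val_score(svm, X, y, cv=5, scoring="accuracy").mean()
print(f"mean cross-validated accuracy: {acc:.3f}")

With random placeholder data the accuracy is of course uninformative; the point is only to show how the reported kernel and regularization settings map onto a standard soft-margin SVM implementation.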
In the SNP Harvester method, which makes it possible to sort out major genotype combinations between genes, several SNPs were selected from among a large number of SNPs by grouping and exchanging SNPs within a group (Yang et al., 2009). The process was repeated to increase the test statistic values. The χ² statistic, classification accuracy, and B-statistic values were used as score functions. The χ² statistic value was determined with 3^k − 1 degrees of freedom, in which k indicates the number of SNPs in a group, e.g. two or three in this study. To identify statistically significant groups, α was set at the 0.001 level. The SNPHarvester procedure is summarized as follows (Figure 2):
Step 1. Randomly select k SNPs from the entire SNP set and assign them a group name, e.g. group A. Set the rest of the SNPs as SNPi.
Step 2. Exchange each SNPi that does not belong to group A with group A elements on a one-by-one basis and calculate scores.
Step 3. Set the greatest value from Step 2 as A*.
Step 4. If A* has a greater score than A, then replace A with A*.
Step 5. If the score of A* is greater than a threshold value, then A* is classified as a significant group.
Step 6. For each SNPi+1 that does not belong to group A, repeat Steps 2-5.
Step 7. If A* is not replaced by any other SNPi+1, then stop the process; A* is determined as the final SNP combination set.
By repeating the above steps, SNP combinations influencing the test traits were selected. Because the SNPHarvester method was designed to analyze interaction effects for binary traits, the measures of the four traits in this study were converted into binary values under a multi-trait model. The SVM technique was employed by taking the four continuous variables as input variables and the binary value as the dependent variable, and two- or three-way interaction models were applied to determine ten significant SNP combinations.
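To make Steps 1-7 concrete, here is a compact Python sketch of the swap-and-score loop, assuming a χ² score computed over the 3^k genotype combinations of a candidate group versus the binary trait. The function and variable names are ours, not from the original SNPHarvester software, and the contingency-table χ² used here differs slightly in its degrees-of-freedom convention from the 3^k − 1 described above.

import numpy as np
from scipy.stats import chi2_contingency

def group_score(X, y, group):
    # Chi-squared statistic over the genotype combinations of the SNPs in `group`
    combo = np.zeros(len(X), dtype=int)
    for snp in group:
        combo = combo * 3 + X[:, snp]      # encode the k genotypes (0/1/2) as one label
    table = np.zeros((3 ** len(group), 2))
    for c, cls in zip(combo, y):
        table[c, cls] += 1
    table = table[table.sum(axis=1) > 0]   # drop combinations never observed
    return chi2_contingency(table)[0]

def harvest(X, y, k=3, seed=0):
    # Greedy one-by-one exchange of SNPs (Steps 1-7) until the score stops improving
    rng = np.random.default_rng(seed)
    n_snps = X.shape[1]
    group = list(rng.choice(n_snps, size=k, replace=False))   # Step 1
    best = group_score(X, y, group)
    improved = True
    while improved:                                           # Steps 6-7: repeat until stable
        improved = False
        for snp in range(n_snps):
            if snp in group:
                continue
            for pos in range(k):                              # Step 2: try each exchange
                candidate = group.copy()
                candidate[pos] = snp
                score = group_score(X, y, candidate)
                if score > best:                              # Steps 3-4: keep the best swap
                    group, best, improved = candidate, score, True
    return group, best                                        # Steps 5/7: final combination

In the full method this search is restarted from many random groups, and any group whose final score exceeds the significance threshold (here, the χ² value corresponding to α = 0.001) is retained.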
RESULTS
Table 1 shows the most significant SNP combinations that were related to the four economic traits in Hanwoo. Among the two- or three-SNP combinations, the set of g.34425+102A>T, g.4102+36T>G, and g.11614+19G>T SNPs yielded the lowest p-value. However, the subsets of the genotypes for the three-SNP combinations could not be identified using the SNP Harvester method. Instead, the genotypes within the g.34425+102A>T, g.4102+36T>G, and g.11614+19G>T combination were investigated in detail by using the CART algorithm (Table 2). Table 2 shows the best SNP combinations for the four economic traits between superior genotypes and others (not presented here). The AAGTGG genotype combination for the three respective SNPs had the best performance, i.e. the greatest t-values and the lowest p-values (<0.001) for the four economic traits. Means and standard deviations for the AAGTGG genotype group were 0.82 ± 0.09 kg for ADG, 342.6 ± 23.6 kg for CWT, 80.4 ± 5.9 cm² for LMA, and 7.35 ± 4.75 for MS, respectively. These estimates were significantly greater than for the rest of the genotype groups, i.e. 0.75 ± 0.09 kg, 314.2 ± 34.0 kg, 74.7 ± 7.8 cm², and 5.01 ± 3.97 for ADG, CWT, LMA, and MS, respectively (Table 2). For the genotypes of the three SNPs that formed the best combination with the greatest performance for the economic traits, i.e. AA, GG, and GT for g.34425+102A>T, g.11614+19G>T, and g.4102+36T>G, respectively, least-squares means were obtained for each genotype and the rest of the genotypes when each SNP was analyzed separately for each trait (Table 3). The results show that when the three SNPs were combined, the estimates were greater than when each SNP was considered alone (Table 2 and Table 3). For example, the individuals with the AAGTGG combination had an average value of 0.82 kg for ADG, while those with the GG genotype for g.11614+19G>T had 0.76 kg, which was the greatest value when single SNPs were analyzed. Also, for CWT, LMA, and MS, the estimates of the genotype combination of the three SNPs were 342.6 kg, 80.4 cm², and 7.35, while the greatest estimates from the single-SNP analyses were 319.9 kg, 75.5 cm², and 5.31, respectively (Tables 2 and 3).

DISCUSSION
In this study, the bootstrap sampling method (Efron and Tibshirani, 1993) was used to generate 3,830 samples that were based on the 476 steers in Lee et al. (2010), and the top ten SNP combinations of two- and three-way SNP interactions for four economic traits of Hanwoo were selected using the SNPHarvester with SVM method (Table 1). Although multifactor dimensionality reduction (MDR) to detect gene-gene interactions worked well when the number of genes was moderate, in genome-wide association (GWA) studies direct application to thousands of SNPs is computationally limited (Yang et al., 2009). Further, MDR is computationally intensive, especially when more than 10 polymorphisms are evaluated (Ritchie et al., 2001). Lee et al. (2010) reported that the single SNPs g.34425+102A>T(AA), g.11614+19G>T(GG), and g.4102+36T>G(GT) within the CCDC158 gene were associated with body weight and cold carcass weight in Hanwoo. However, they did not report interaction effects between the SNPs. In this study, by applying the SNPHarvester method, the three-SNP combination, i.e. g.34425+102A>T(AA), g.11614+19G>T(GG), and g.4102+36T>G(GT), had the greatest test statistic, with a χ² value of 560 (Table 1). Also, the estimates of the best genotype combination for the three SNPs were much greater than the estimates from the single-SNP analyses, indicating clear differences between the single-SNP and combination effects (Table 2 and Table 3). This result suggests that interaction effects need to be taken into account when multiple SNPs are tested simultaneously to detect significant SNPs for economically important traits in Hanwoo. In conclusion, the application of the SNPHarvester with SVM method could be a good option for multiple SNP analyses, especially to characterize interaction effects between SNPs, and the significant SNPs may be applied via marker-assisted selection in the Hanwoo industry for genetic improvement of the economically important traits.
Anesthetic management of toxic epidermal necrolysis: a report of two cases
Toxic epidermal necrolysis (TEN) is an acute, life-threatening emergent disease involving the skin and mucous membranes, with serious systemic complications. It is characterized by widespread epidermal sloughing. Drugs are the most common triggers of TEN, but infection, vaccination, radiation therapy, and malignant neoplasms can all induce it in susceptible patients. We report two cases in which a hair dye and a COVID-19 vaccine (BioNTech®, Pfizer) were believed to be the causative agents. These patients had to undergo repeated debridements of the necrotic tissue. In this manuscript the anesthetic management of TEN patients is discussed. Detailed preoperative evaluation, aggressive fluid and electrolyte replacement, avoidance of hypothermia during debridement, minimizing anesthetic agents, and limiting traumatic procedures are key points in the management.
Abbreviations: BUN: blood urea nitrogen; BSA: body surface area; Cr: creatinine; OR: operating room; SJS: Stevens-Johnson Syndrome; TEN: toxic epidermal necrolysis

Introduction
Toxic epidermal necrolysis (TEN) is an acute, life-threatening emergent disease involving the skin and mucous membranes, with serious systemic complications. It is characterized by widespread epidermal necrolysis and sloughing. 1 Stevens-Johnson Syndrome (SJS) and TEN are considered to have the same pathophysiology and are classified based on the body surface area (BSA) involved. Patients are diagnosed with SJS if less than 10% of their BSA is affected, while TEN requires 30% of the BSA or more to be affected. If the range is between 10% and 30%, the patients are categorized as an overlap of SJS and TEN. 1 The cause of TEN in the majority of patients is drug exposure and a resulting T-cell-mediated type-IV hypersensitivity reaction. Drugs such as anti-epileptics, NSAIDs, allopurinol, antibiotics (e.g., trimethoprim-sulfamethoxazole, sulfonamides, cephalosporins), and corticosteroids, as well as infections, radiation therapy, malignant neoplasms, and vaccination, can all induce TEN. 1,2 It is a rare disease with an incidence of 1-2 cases per million persons per year and a mortality rate of 25-35%. 2 We discuss the anesthetic management of this rare entity in two cases.

Case Report 1
A 19-year-old female patient was transferred from another hospital to our Burn Unit because of peeling of the skin of her face, neck, thorax, back, and upper and proximal lower extremities (Figure 1). She developed erythema and vesicles on her body, followed by widespread sloughing of the skin, about 24 h after applying hair dye. She was admitted to our unit 24 h after the lesions had appeared. She had a history of arrhythmia and was on beta blockers. She was awake, agitated, and alert. Her blood pressure (BP) was 115/74 mmHg, heart rate (HR) 125 beats per min (bpm), and temperature 37°C. She had a generalized, painful skin eruption with lesions ranging from bullae to denuded skin areas. Her oral, nasal, and pharyngeal mucosae were erythematous and edematous. The lesions involved 95% of the BSA. She had a Mallampati Class I airway. Serum levels were: sodium 134 mEq/L, potassium 3.7 mEq/L, chloride 106 mEq/L, and bicarbonate 19.2 mmol/L. The blood urea nitrogen (BUN) and creatinine (Cr) levels were 17 and 0.36 mg/dL, respectively, glucose was 155 mg/dL, and hemoglobin (Hb) 13.2 g/dL. The electrocardiogram (ECG) showed sinus tachycardia. The patient was brought to the operating room (OR) for skin debridement.
BP and oxygen saturation were recorded from the lower extremities. A central venous catheter was inserted in the right jugular vein, since there was no intact skin area for venous access. Body temperature was measured through her ear every 5 min. We planned to apply deep sedo-analgesia, since her oral edema and facial lesions seemed likely to make it difficult to secure, insert, and remove an endotracheal tube. Furthermore, we intended to avoid bursting of blebs, damage to the larynx, or bleeding inside the mouth. Anesthesia was conducted with midazolam, fentanyl, ketamine, and propofol. An oxygen flow of 3 L was applied nasally. The fluid replacement consisted of crystalloids at a rate to achieve a urine output of 0.5-1 ml/kg/h and a mean arterial blood pressure (MAP) of 60 mmHg. Room temperature was adjusted to 24°C, intravenous fluids were warmed, and she was covered with warm sheets to prevent hypothermia. Her vital signs remained stable throughout the procedure. The desquamated skin was removed, and BioBrane® synthetic skin dressing was applied over her back and Suprathel® temporary skin substitute over the remaining areas. The procedure lasted 90 min. Following an uneventful and short OR course, she was transferred to the Burn ICU. Intravenous morphine was used for postoperative pain. Ten days later she returned to the OR once more for debridement of 50% of the BSA and dressing changes. The patient steadily improved and was eventually transferred to the dermatology clinic for adjuvant therapy 17 days after admission and discharged from hospital 20 days after transfer without any specific problem.

Case Report 2
A 39-year-old woman was admitted to our Burn Unit, diagnosed with TEN after COVID-19 vaccination (BioNTech®, Pfizer). She had widespread sloughing of around 70% of the BSA (Figure 2). She had a history of atopic dermatitis and lactose allergy. She was awake, agitated, and alert. Her BP was 151/77 mmHg, HR 68 bpm, and temperature 36.4°C. She had a generalized, painful skin eruption with lesions ranging from bullae to denuded areas. Her oral, nasal, and pharyngeal mucosae were erythematous and edematous. She had a Mallampati Class I airway. Serum levels were: sodium 133 mEq/L, potassium 4.3 mEq/L, chloride 103 mEq/L, and bicarbonate 27.4 mEq/L. The BUN and Cr levels were 24 and 0.63 mg/dL, respectively. Blood glucose was 94 mg/dL, and Hb 12 g/dL. The ECG was normal. The patient came to the OR for skin debridement. BP and oxygen saturation were recorded from the lower extremities. A central venous catheter was again inserted in the right jugular vein, since there was no intact epidermal area for venous access. Body temperature was measured through her ear every 5 min. An oxygen flow of 3 L was applied nasally. Under deep sedo-analgesia with midazolam, fentanyl, ketamine, and propofol, the desquamated skin was removed, and BioBrane® synthetic skin dressing was applied over her back and gluteal areas. The remaining areas were covered with Suprathel® temporary skin substitute (Figure 3). Follow-up of the patient, fluid replacement, and measures against hypothermia were the same as in case 1. There was no remarkable change in her vital signs. The procedure lasted 60 min. Her disease progressed, and she was taken to the OR two more times for debridement and Suprathel® application to the hands and lower extremities. She died on the 10th day due to septic shock.
Figure 3: Face covered with Suprathel® (Case 2)
Both cases received methylprednisolone 1 mg/kg IV twice a day for 3 days on admission and 3 g/kg intravenous immune globulin for 3 days. They were both treated with cyclosporine 2.5-3 mg/kg/day. Case 2 also received plasmapheresis on the fifth day. Room temperature was maintained at 24°C and the patients were covered with warm sheets to protect them from hypothermia.

Discussion
The clinical features of TEN include a prodrome of fever and malaise for several days, followed by rapidly progressing cutaneous lesions with mucosal involvement. Lesions are initially erythematous macules, or atypical target lesions on the trunk, that progress to flaccid blisters with a positive Nikolsky sign (detachment of the epidermis with light pressure) and sheets of denuded epidermis within days. 1,3 Many of the patients have oral involvement with mucositis and ulceration. Ocular involvement occurs frequently, with severe complications such as epithelial defects of the ocular surface or conjunctivitis with pseudomembrane. 1 Genitourinary involvement occurs in approximately one-third of TEN patients. 3 Multisystem involvement necessitates early multidisciplinary involvement to help prevent sequelae of the disease. Drugs, as mentioned before, are the most common triggers of TEN, but infection, vaccination, radiation therapy, and malignant neoplasms can all induce TEN. In our first case a hair dye and in our second case a COVID-19 vaccine (BioNTech®, Pfizer) were believed to be the cause of TEN. Anesthetic management of TEN patients involves a detailed preoperative evaluation, aggressive fluid and electrolyte replacement, avoidance of hypothermia during debridement, and effective postoperative pain control with opioids. The skin lesions of patients with TEN resemble those of patients with second-degree burns. On the other hand, the microvascular damage was reported to be less serious compared with burn patients, requiring a less aggressive fluid replacement. 4 The fluid requirements due to insensible losses through wounds were reported to be about 30% less in TEN patients than in burn patients with similar cutaneous involvement. 1,5 Although there is an additional insensible loss through the mucous membranes of various organs in TEN patients, the fluid replacement should be adjusted to aim for a urine output of 0.5-1 ml/kg/h. 1,5 Our fluid replacement in these cases consisted of crystalloids at a rate to achieve a urine output of 0.5-1 ml/kg/h and a mean arterial blood pressure of 60 mmHg. In case of hyponatremia, hypokalemia, or hypophosphatemia, which frequently occur, replacement therapy is required. Peripheral lines should be inserted over normal epidermis, if possible. Lines inserted through denuded dermis may play a role in bacterial entry. 6 Unfortunately, there was no intact epidermis where we could insert a line, so we had to insert a central venous catheter in the right jugular vein for fluid resuscitation and blood sampling in both of our cases. An arterial catheter is recommended for cases where it is impossible to use a blood pressure cuff or multiple blood samplings are foreseen. Since we were able to measure blood pressure from the lower extremities, we did not consider an additional arterial access. TEN involves not only the skin but also the mucous membranes, so the patient's oral cavity and airway should be examined for edema, erythema, or ulcers. As both of our patients had face involvement, we decided not to apply face masks, to prevent skin peeling.
We also avoided intubation, so as not to cause trauma such as bleeding inside the mouth, bursting of blebs, or damage to the larynx. We decided to apply deep sedo-analgesia for the short procedures. Our main anesthetic choice was ketamine, which has minimal impact on the cardiovascular and respiratory systems and a strong analgesic effect. Anesthesia was conducted with midazolam, fentanyl, ketamine, and propofol. An oxygen flow of 3 L was applied nasally. Dysphagia, excessive salivation, and painful oral ulcerations are encountered in early oral and upper airway involvement. The mucosal involvement can extend down to the larynx, with inflammation and edema requiring endotracheal intubation. Bronchial epithelial detachment may result in lower respiratory tract obstruction, edema, infection, and atelectasis. 3 Early pulmonary complications were found in 25% of cases. It was reported that respiratory symptoms related to bronchial epithelium detachment developed within 4 days after the onset of mucocutaneous symptoms. Bronchial involvement in TEN was not found to be correlated with the extent of epidermal detachment or with the related drugs. 7 TEN and SJS patients have been successfully taken to operations, with debridement performed under general anesthesia with etomidate, ketamine, inhalation agents, and total intravenous anesthesia. 8-10 Endotracheal intubation and extubation should be performed cautiously to avoid any damage to the mucous membranes. Intraoral lesions and edema can be challenging. Smaller-sized tubes and gentle suction before extubation may be helpful. Pleural blebs may rupture and lead to pneumothorax; for this reason airway pressures should be controlled, and high-pressure, high-volume ventilation avoided. Drugs like NSAIDs, antibiotics, anticonvulsants, barbiturates, and sedatives, which may precipitate TEN, should be used cautiously. Postoperative pain can be managed with paracetamol, tramadol, and opioids, avoiding NSAIDs. TEN patients are prone to hypothermia, so measuring body temperature is necessary. It has been suggested that the OR temperature should reach 28°C. 9 In our cases we were unable to control the temperature of our operating room because our hospital building has a central heating system. We could adjust the room temperature to 24°C. We used warmed intravenous fluids and covered our patients with warm sheets to prevent hypothermia. The durations of the surgeries were reasonable.

Conclusion
Since drugs play an important role in toxic epidermal necrolysis, it is important to minimize anesthetic agents and limit traumatic procedures. Anesthetists should keep in mind and be ready for pulmonary complications such as pulmonary edema due to fluid overload, bacterial pneumonia, atelectasis, bronchial obstruction, bronchospasm, and laryngospasm. Since these patients are prone to infections, special care should be given to preventing septicemia. Fluid replacement and precautions against hypothermia are important issues in anesthetic management.
Supporting Our Parent-Trainees: Exploring Curricular and Cultural Challenges That Limit the Utilization of Parental Leave by Residents
Over the last 20 years, there has been a focus on the importance of wellness in the physician workforce. Parental leave has occupied a prominent role in these discussions, and this issue of Academic Psychiatry sheds further light on the issue for trainees. Ravi Shankar provides a faculty viewpoint article about his experiences with paternal leave as a trainee and faculty member [1], Leandre et al. report a national survey of psychiatry program directors about parental leave generally [2], and Dillinger reports a survey of psychiatry program directors about maternal leave specifically [3]. While these authors highlight the wide adoption of maternal leave policies in psychiatry residency programs (94.1% of programs according to Leandre et al. and 95.9% according to Dillinger), parental leave for resident physicians remains wanting. Leandre et al. show that most psychiatry program directors believe more parental leave would be beneficial, yet more than one-quarter report lacking the resources to provide this leave, and more than one-third report that parenthood negatively affected resident wellbeing [2]. All of the authors highlight that parental leave policies conflict with extensive research demonstrating benefits to both mother and child associated with extended parental leave [4]. Extensive work has explored the experiences of parent-trainees during medical training [5], developed and standardized policies for parental leave for trainees [6], and experimented with interventions aimed to increase the utilization of parental leave by trainees [7]. Much of this work has been performed by colleagues in surgical specialties, whose ability to recruit the next generation of physicians, particularly women, has been perceived as threatened [8-10]. In surveys of non-parent trainees, significant proportions of men and women cite limited financial resources, lack of time, fear of being a burden on colleagues, and general life instability as barriers to becoming parents [11]. Only a minority of parent-physicians are satisfied with the time, financial resources, and emotional resources they have available for their children [11]. These challenges persist despite the American Board of Medical Specialties providing guidance to residency training programs about parental leave policies for trainees [12], guidance that is vague and, ultimately, leaves the development and implementation of parental leave policies for trainees with residency program directors. Leandre et al. and Dillinger take note of this reality, offering suggestions for program-level policy changes that may improve parental leave utilization by trainees [2, 3]. In this commentary, we reflect on the broader literature and our own experiences to consider steps programs may take in improving parental leave for trainees. While many barriers make the implementation of more extensive parental leave for trainees difficult, we focus here on opportunities for change in the clinical curriculum and the culture in training programs that can be implemented by leaders in individual training programs.
Curricular Interventions to Improve Parental Leave Utilization
In studies of parental leave use among trainees, the risk of extending training due to leave is commonly cited as a reason for delaying childbearing [13]. Historically, residency programs and medical specialty boards have utilized time-based regimes for certifying competency at the completion of residency training. Under this system, extended parental leave necessarily delayed the completion of training, as missed training time had to be "made up" to certify the completion of training.
More recently, competency-based certification of training has become increasingly utilized by medical specialty boards for determining board eligibility, including the American Board of Psychiatry and Neurology (ABPN). In its "Leave of Absence Policy," the ABPN specifies that "…it is up to the program director and the program clinical competency committee to determine whether a given resident has met training requirements or must extend their period of training" if extensive periods of training are missed [14]. This structure allows for time away from training for a variety of reasons, including parental leave, without extending the duration of training if clinical competency has been achieved. As even small extensions in training can significantly delay career progression, the flexibility offered under this system can be a particularly useful tool in supporting the utilization of parental leave for psychiatry residents. Program directors should be aware of the ABPN's policies and actively utilize the flexibility offered to support psychiatry residents' use of parental leave and, where appropriate, limit the impact of leaves on the duration of training. Importantly, the ABPN's policies should serve as a general framework for parental leave rather than the de facto leave policy for a training program. By clarifying the ABPN's policy on certification of training and ensuring that extensions in training are absolutely necessary, program directors can directly address residents' fears of extending training and delaying career progression. In many cases, though, residents may prefer to continue to participate in training during the peripartum period. Lack of pay during parental leave in most circumstances, a desire to avoid extending training, or a simple preference to continue training as quickly as possible may drive this decision [5]. Though a psychiatry resident's preference for utilizing parental leave should be honored in all circumstances, clinical and curricular interventions can ease the transition from work in the prepartum period and return to work in the postpartum period.

"Parent-Friendly" Rotations
Perceived lack of time during residency training due to work responsibilities, lack of access to childcare, and concern about the risk of pregnancy complications are cited by a significant proportion of women residents as playing a role in delaying childbearing [13]. Careful scheduling of rotations and supporting educational experiences outside of the clinical setting during the peripartum period can support continued participation in training while new parents contend with the myriad responsibilities of parenthood. At our institution, research rotations, medical education electives, and health policy electives allow residents to work outside of the hospital setting while achieving broad, competency-based educational objectives, often in an asynchronous way. This format allows residents to complete curricular objectives without being tethered to a clinical site for clearly defined periods of time, allowing residents to, for example, utilize brief periods of "free time" to engage in educational activities. Administrative electives, quality improvement electives, and other non-clinical initiatives can also provide critically important opportunities for professional development while allowing for remote or asynchronous learning.
Modifying call schedules, reducing overnight work, and scheduling residents to work on clinical services with reduced time requirements can ease the transition of returning to work in the postpartum period [7]. Chernoby et al. describe the implementation of a scheduling policy for emergency medicine residents who are pregnant or new parents that emphasized these objectives, finding that the policy increased resident satisfaction without introducing additional scheduling burdens [7]. Prioritizing scheduling requests for pregnant residents and new parents to ensure that they rotate on less-demanding clinical services or non-clinical rotations may facilitate continued involvement in training activities during the peripartum period for residents who choose not to utilize extended parental leave. In Leandre et al.'s report, only 21% of program directors reported that trainees do not provide coverage for residents currently on parental leave [2]. Approximately 35% of program directors reported that residents providing coverage for those using parental leave receive remuneration for that coverage, either financial (1.2%) or a reduction in later coverage responsibilities (33.3%) [2]. Nearly 40% of program directors reported that residents in their programs receive no consideration for providing additional call coverage for residents on leave [2]. These findings highlight an opportunity for many training programs to coordinate with their affiliated healthcare systems to provide non-resident clinical coverage for trainees on leave. In doing so, training programs may relieve both explicit pressures (for example, the need to provide clinical services to the healthcare system) and implicit pressures (for example, trainees' worries that parental leave will be a burden on their colleagues [11]) associated with the use of parental leave.

Telepsychiatry and Remote Learning
The COVID-19 pandemic has resulted in a rapid expansion of healthcare delivery using telehealth technologies to limit the risk of infection among healthcare workers and patients [15]. Non-clinical academic activities have also been modified to facilitate distance learning with the deployment of asynchronous curricula and video conferencing-based learning activities. The lessons learned during the pandemic (and the clinical and administrative infrastructure developed to support remote learning experiences) can be repurposed to support residents' involvement in training during the peripartum period, especially residents involved in longitudinal experiences required by the Accreditation Council for Graduate Medical Education's core program requirements for psychiatry residency programs (e.g., outpatient clinic experiences and psychotherapy experiences) [16]. Throughout the pandemic, weekly didactic programming for residents and care in longitudinal outpatient sites (including a general medication management clinic and a psychotherapy clinic) have been delivered remotely at our institution. These experiences have generally received positive feedback from our trainees. Small studies completed during the pandemic demonstrate that remotely delivered healthcare services have generally been met with high acceptability by both clinicians and patients [17-19]. In our opinion, in-person educational and clinical experiences are preferable for both trainees and patients, and remote experiences should not occupy a primary role in psychiatric education. Some educational experiences may not be easily "converted" to a remote-based model, particularly on an ad hoc basis.
Where they can be implemented, however, remote learning experiences may offer trainees the ability to continue their training while balancing their responsibilities as parents. While curricular interventions can make balancing work and parenting responsibilities easier, we emphasize the importance of honoring a trainee's decision with respect to his/her parental leave, including extended periods of time entirely away from work. The goal of making the clinical curriculum more parent-friendly is not to discourage the use of parental leave by providing more palatable, "on-duty" alternatives. Instead, we acknowledge the entrenched challenges facing trainees that may make extended parental leave infeasible (for example, the preponderance of unpaid parental leave) and highlight the ability of directed, conscientious curricular interventions to support trainees' ability to be both physicians and parents.

Parental Leave and the Culture of Residency Programs
The medical education literature exploring parenting during residency includes a strong undercurrent of concern about institutional culture [20-22]. Whatever the policies in place, the culture of medicine (and the culture of specific training programs) will ultimately influence how trainees interpret and actually use the time allowed by policy. In his faculty viewpoint, Dr. Shankar says that the "most important" factor affecting decisions about parental leave was "not having a work culture where there was modeling or encouragement to [take] time off as a new father" [1]. Dillinger identified "culture and support" (described further as an environment that promotes "wellness" and "work-life balance") as a frequently mentioned theme driving decisions regarding maternal leave [3]. Institutional culture can clearly influence the real-world implementation of policies that may represent an idealized goal not practiced in reality. On a national level, conversations about wellness and the culture of medicine have been prevalent in recent years [23, 24]. While psychiatric educators have an important voice in these broad-reaching conversations, we focus here on the culture of individual training programs. As the articles published in this issue of Academic Psychiatry demonstrate, individual programs shoulder great responsibility in crafting and implementing their own parental leave policies and shaping the culture in which the policies are carried out [2, 3]. Leaders in training programs can play a critical role in encouraging rapid assessment of cultural values and implementation of changes to address identified shortcomings.

Assessing Parental Culture
In discussing culture, we refer to shared beliefs, values, and practices that are implicitly accepted by a group [24]. Because these elements are implicitly accepted, they may avoid scrutiny by members of the group, and non-favorable elements may become surreptitiously accepted as "the norm." Consequently, recognizing the subtleties of, and influencing changes in, the culture of an institution or program may be challenging, especially from within the group that leads to the creation of the culture. Schein describes the structure of organizational culture as having three levels: (1) visible structures, processes, and behaviors (referred to as "artifacts"); (2) espoused values, including ideals, goals, and ideologies; and (3) tacit assumptions that may be unconscious or implicitly accepted [25].
In examining the culture of an individual program, searching for areas where espoused values do not align with practices (i.e., identifying the presence of the "hidden curriculum" related to professional development and parenthood [26]) may lead to examination of tacit assumptions that can be a target for cultural change. Consider an example in which a program voices support for parental leave, including the implementation of a generous parental leave policy. An expecting resident, who has been told by his/her program leadership that utilizing parental leave is encouraged by the program, then schedules a period of leave. Another resident assigned to provide coverage then halfheartedly tells the departing resident to "enjoy your vacation." In this case, the language used does not align with the espoused values of the program: referring to parental leave as "vacation" may represent a lack of understanding on the part of the covering resident and assumptions that parental leave is not an accepted reason to take a period of leave from training. Additional exploration of the underlying assumptions regarding leave, and how these assumptions are communicated to trainees, may reveal opportunities for change to align cultural artifacts with espoused values. Consider another example in which a program's mission statement embraces support for residents during life transitions, including parenthood. However, the training program's coverage is so tightly scheduled that leave for any reason is a substantial imposition on the program's ability to staff clinical services. As a result, every instance of leave is treated as an unexpected interruption rather than an anticipated occurrence in a system that supports life outside of residency. The creation of an inflexible schedule may be fueled by pragmatic concerns about clinical coverage with limited resources, but exploration of the tacit assumptions about what can and should be expected from residents as they balance work and life responsibilities may yield opportunities for cultural change. Because culture is lived from within, recognition of these discrepancies can be difficult, and outside observation may be necessary. The dynamic nature of a residency program creates unique opportunities for reflection: new residents come from different medical schools (each with unique institutional cultures) and bring with them their individual cultural beliefs. While residents quickly become "insiders," they enter programs with fresh perspectives and diverse experiences. Resident feedback in this circumstance can be a powerful catalyst for change, highlighting the shortcomings that "insiders" have become blind to and implicitly accepted over time. However, the hierarchical nature of medical training may make openly sharing this feedback difficult for residents [27, 28]. Additionally, if a resident is a first-time parent, he/she may lack the context to assess his/her experience within the current cultural paradigm. Residency program leaders may benefit from seeking feedback from other outside sources (for instance, colleagues in other national organizations or other departments at their institutions) to ensure that less-than-supportive aspects of the institutional culture are identified.

Setting the Stage for Change
Schein describes "disconfirmation" as a driving force for change. Disconfirmation is new information that something is not going as expected, disrupting the overall success of the mission [25].
This can trigger "survival anxiety," a feeling of threat to the success of a person or group. In this case, failure to support resident parents may threaten the workforce of psychiatrists by contributing to burnout and attrition, directly opposing the mission of residency programs to train psychiatrists who are prepared to provide competent care to the public. In contrast, the actual implementation of change may provoke learning anxiety: apprehension about what change will look like and how that change can be supported in the current environment. For example, a program may value parental leave but also face the legitimate challenges of arranging clinical coverage and conforming to duty hour restrictions if multiple residents are on leave simultaneously, a seemingly impossible conflict to resolve. While program leaders can mandate changes to scheduling, this only addresses behaviors while neglecting other aspects of the institutional culture, including values and tacit assumptions, that are necessary for long-lasting change. To address learning anxiety, Schein recommends using role modeling, encouraging individuals or services that have been successful to share their methods [25]. Role modeling is recognized in the medical education literature as an important tool for promoting professional behavior and identity formation in medical education [29]. However, in cases where role modeling is not feasible, Schein also encourages allowing stakeholders to think creatively, often through trial and error, to come up with novel solutions that have not yet been tried. This approach can help group members feel invested in the mission and internalize the changes being implemented more quickly. Overall, programs can take steps to foster a culture that prioritizes the autonomy of residents and advocates for them to be able to exercise the highest degree of flexibility within the broader constraints of the healthcare system. The goal certainly is not to dictate how much time residents take off, but to preserve their options and provide mentorship so that they can find the situation that works best for them. In this way, residency programs support professional development that seeks to maintain wellness while promoting lifelong learning. Supporting trainees utilizing parental leave brings to the fore a larger conflict in roles that trainees and training programs face. On the one hand, trainees are expected to care tirelessly for their patients, and training programs are expected to prepare trainees to provide competent care to the public, an experiential task that necessarily involves an investment of time. On the other, "physician" is one of many roles that trainees may play in their lives, and training programs may sincerely value supporting work/life balance and providing a compassionate, humane, and accommodating training environment to allow trainees to flourish in all aspects of their lives. We do not claim to have resolved this conflict but, instead, to acknowledge its existence and remind academic leaders of the incredibly powerful, if local, role they can play in supporting trainees utilizing parental leave.

Declarations
Ethics Approval: This manuscript does not include data obtained from a novel research study; thus, IRB approval/exemption was not sought.
Disclosures: On behalf of all authors, the corresponding author states that there is no conflict of interest.
Sensory Clusters of Adults With and Without Autism Spectrum Conditions We identified clusters of atypical sensory functioning in adults with ASC by hierarchical cluster analysis. A new scale for commonly self-reported sensory reactivity was used as the measure. In a low frequency group (n = 37), all subscale scores were relatively low, in particular atypical sensory/motor reactivity. In the intermediate group (n = 17), hyperreactivity, sensory interests, and sensory/motor issues were significantly elevated in relation to the first group, but hyporeactivity was not. In a high frequency subgroup (n = 17), all subscale scores were significantly elevated and co-occurrence of hyper- and hyporeactivity was evident. In a population sample, a cluster of low scorers (n = 136) and a cluster of high scorers relative to the other cluster (n = 26) were found. Identification of atypical sensory reactivity is important for targeting support. Introduction First-hand accounts of Autism Spectrum Conditions (ASCs) regularly describe atypical sensory reactivity and perception, including intense reactions to sounds, touch, and visual stimuli. A prevalence of unusual sensory reactivity of up to ~95% has been reported in children with ASC (Leekam et al. 2007; Tomchek and Dunn 2007). In comparison with non-clinical samples, significant differences were found; when compared to clinical groups, differences were found to a lesser (Baranek et al. 2006) or a much lesser degree (Grapel et al. 2015). Study results on age differences in sensory reactivity in ASC are inconsistent, with indications of both decreasing unusual sensory reactivity with age (Kern et al. 2006) and increasing sensory reactivity with age (Liss et al. 2006). However, the overall picture is that sensory symptoms are still prominent in adult age (Billstedt et al. 2007; Leekam et al. 2007). It is hard to find information on sex differences in unusual sensory reactivity in ASC. Even large studies, e.g. Tomchek and Dunn (2007) or Leekam et al. (2007), do not account for sex differences. Some studies found a difference in the general population as well as in ASC, with women being more hyperreactive (Tavassoli et al. 2014) and having more overall sensory symptoms both in ASC and in a non-psychiatric control group (Eriksson et al. 2013). The most common instruments used in research for measuring sensory differences are the Sensory Profile (SP; Dunn 1999) and the Adolescent/Adult Sensory Profile (AASP; Brown and Dunn 2002). The theoretical basis for these scales is a general model of sensory processing applicable to all people (Dunn 1997). There are also ASC-specific parent-report instruments such as the Sensory Experiences Questionnaire (SEQ; Baranek et al. 2006), with items derived after review of the literature on atypical sensory reactivity in children with ASC diagnoses, including empirical studies, parental report studies, clinical reports, and conceptual models of sensory processing. The SEQ largely reflects hyper- and hypo-reactivity. Additionally, the occurrence of atypical sensory reactivity in a social or non-social context is considered in the SEQ. In contrast, the instrument used in this study, the newly developed Sensory Reactivity in Autism Spectrum scale (SR-AS; Elwin et al. 2016), is based solely on self-reporting from adults who themselves have an ASC diagnosis and, consequently, their own experiences of sensory differences. It is hard to capture the nature of sensory phenomena. There is substantial variation in sensory reactivity both between individuals with ASC (Crane et al. 2009; Leekam et al.
2007) and within individuals with ASC (Baranek et al. 2006). For example, hyper- and hyporeactivity can co-occur, and there can be variations due to the emotional state of the person (Smith and Sharp 2013). One way to investigate this variability is to identify clusters of individuals with similar reactivity. This has been the aim of several studies that identified sensory clusters in children and adolescents with ASC. Previous cluster analyses were conducted on parent/caregiver data (Ben-Sasson et al. 2008; Lane et al. 2014; Uljarević et al. 2016). The sensory variables entered into the analyses differ between the studies. Ausderau et al. (2014) used four sensory subscales: HYPO, HYPER, SIRS (sensory interests, repetitions, seeking), and EP (enhanced perception), in a latent profile transition analysis of a very large national sample of children with ASC aged 2-12 years. Ben-Sasson et al. (2008) used three sensory subscales: under-responsivity, over-responsivity, and sensation seeking; the participants were parents of children with ASC aged 18-33 months. Lane et al. (2014) used seven sensory channels: tactile, taste/smell, movement, visual/auditory sensitivity, underresponsive/seeks, auditory filtering, and low energy/weak, in a model-based cluster analysis, and the participants were parents of children with ASC aged 2-10 years. Uljarević et al. (2016) used the same input variables as Lane et al. (2014), but the participants differed: they included parents of children/adolescents aged 11-17 years. Results from previous cluster analyses demonstrated an association between sensory symptoms and anxiety in children and adolescents with ASC (Uljarević et al. 2016) and between anxiety and depressive symptoms in children with ASC (Ben-Sasson et al. 2008). In a study by Pfeiffer et al. (2005), a positive correlation between anxiety and sensory defensiveness in children and adolescents with Asperger's disorder was found, as well as a significant relationship between symptoms of depression and hyporeactivity in the adolescent group. This research indicates that psychiatric comorbid symptoms and the rate of unusual sensory reactivity in children and adolescents with ASC are correlated, but we do not know if sensory symptoms are more prevalent in adults with ASC and psychiatric comorbidity than in adults with ASC without psychiatric comorbidity. Cluster analyses with sensory sensitivity as the input variable have been conducted in a series of studies of the general population. Aron and Aron (1997) developed the Highly Sensitive Person Scale (HSP) to measure a hypersensitivity trait. In studies conducted with the HSP (2000 respondents in total), a two-cluster structure was identified (Aron and Aron 1997; Aron et al. 2012). In one cluster the respondents were highly sensitive (10-35%), and in the other cluster the respondents were not highly sensitive. In light of this research, we were interested in exploring the cluster structure in a sample from the general population with the SR-AS subscales as input variables. As most studies on sensory reactivity in ASC are based on parent reports of children's atypical sensory reactions, research has provided less certainty about sensory patterns in adults with ASC. While several studies have identified sensory clusters in children and adolescents, as referred to above, to the best of our knowledge no study to date has studied an adult sample using self-report and a cluster analysis approach.
Sensory symptoms are described by some adults with ASC as having a strong and sometimes disruptive effect, but we do not know how these symptoms vary across the population of adults with ASC. The main purpose of this study was to identify subgroups of adults with ASC who have similar sensory features. Based on qualitative research and former cluster analyses, we hypothesized that there would be clusters of individuals with different levels of frequency of sensory symptoms. We also aimed to explore the rate of psychiatric comorbidity and possible associations between cluster membership and comorbidity in the ASC sample. Further aims were to investigate the cluster pattern for the SR-AS in a population sample and, additionally, possible associations between cluster membership and demographic characteristics in both samples. Participants and Recruitment Data for this study were derived from a foregoing validation study of the SR-AS (Elwin et al. 2016). The ASC participants were recruited from psychiatric and habilitation services in two counties in Sweden. The inclusion period lasted from April 2012 to May 2014. Clinic-based personnel were instructed to identify and invite consecutive patients who met the inclusion criteria as they came for regular visits to the clinics. Inclusion criteria were that individuals had to be 18 years of age or older and have a clinical diagnosis of autism, Asperger disorder, or Pervasive Developmental Disorder Not Otherwise Specified (PDD-NOS; ICD-10; WHO 1992) registered in the medical records at the clinics and habilitation services involved. Further inclusion criteria, which were ensured by the personnel at the clinics and habilitation centres, were that the individuals invited to participate were able to understand the language in the questionnaire and cognitively able to answer the questions in a valid way. Their judgement was based on their personal knowledge of the patients, the patients' medical records, and prior diagnoses including intellectual level. Patients with clinical diagnoses of intellectual disability were therefore not invited. The clinic-based personnel orally informed patients eligible for participation in the study and provided an information letter. All patients were informed that their participation was voluntary and anonymous. Those who gave informed oral consent were asked to complete the SR-AS and answer background questions on gender, age, age at diagnosis, education, occupation, family circumstances, and comorbid axis I diagnoses according to ICD-10. After completion, the participants were asked to place the questionnaire in a prepaid envelope and seal it. The scale could be completed either at the clinic or later. In all, 71 individuals with ASC diagnoses completed and returned the questionnaire. All ASC participants were registered as patients at the psychiatric clinics and habilitation services involved due to their ASC diagnoses or ASC diagnoses in combination with other psychiatric diagnoses. The participants had been diagnosed by multidisciplinary psychiatric teams specialising in the assessment of childhood-onset neuropsychiatric conditions or by a psychiatrist and a psychologist in cooperation. Global intellectual ability was always assessed with the Wechsler Intelligence Scales (WISC-III; Wechsler 1991) or the Wechsler Adult Intelligence Scale (WAIS-III; Wechsler 1997; WAIS-IV; Wechsler 2008). The general population participants were selected from the Swedish Population Register (SPAR 2011), which includes all residents in Sweden.
A random selection was conducted of residents from the same two counties as the ASC sample. In order to facilitate a comparison between samples, the randomization was conducted with age stratified into groups reflecting the age distribution in the population with ASC who were in contact with the psychiatric services included in the study. The initial population sample totalled 500. Fifteen addresses were incorrect, so 485 persons received the postal questionnaire. In total, 164 persons answered; the total response rate was thus 33.8%. Two questionnaires were excluded due to missing items. A letter with information about the study and the questionnaire were mailed to the sample during February 2013, with a reminder within 3 weeks. The questionnaire was identical to the one given to the ASC sample except for the omission of questions about diagnoses. We did not include questions on psychiatric diagnoses in the comparison sample because it was not a volunteer sample; the participants were randomly selected from the general population, and we feared that questions about diagnoses would cause non-response bias. Both the ASC and population samples answered the questionnaire anonymously, and the participants consented by filling in and sending the questionnaire. The Regional Ethical Review Board in Uppsala, Sweden, approved the study (Reg. No. 2012/049). Measurement Data were collected with the SR-AS, tailored to assess sensory reactivity from the perspective of individuals with ASC. The items in the questionnaire are based on an autobiography study (Elwin et al. 2012) and an interview study (Elwin et al. 2013). The internal consistency (Cronbach's alpha) for the total SR-AS in the combined samples was 0.96, and alphas for the subscale scores were: High awareness/Hyperreactivity 0.93, Low awareness/Hyporeactivity 0.89, Strong sensory interests 0.80, and Sensory/Motor 0.89. The validity of the scale was further explored by assessing its discrimination between participants with a diagnosis of ASC and the population sample using Receiver Operating Characteristic (ROC) curve analysis and the Area Under the Curve (AUC). The AUC was estimated at 0.93 (CI 0.89-0.96), indicating that the probability of a randomly selected subject with ASC scoring higher than a randomly selected subject from the population was approximately 93% in this sample. The SR-AS comprises 32 items in four subscales designed to measure domains commonly reported by adults with ASC diagnoses: High awareness/Hyperreactivity (14 items; e.g. "I often feel great discomfort when other people touch me"); Low awareness/Hyporeactivity (10 items; e.g. "I often feel no pain at times when other people think I should"); Strong sensory interests (4 items; e.g. "When I look at certain patterns or colors or hear certain sounds/tones I often find them extremely fascinating"); and Sensory/Motor (4 items; e.g. "In everyday situations I often feel clumsy because I drop things, for example, or spill a lot"). The numbers of items differ between the subscales because some types of sensory reactivity, like High awareness/Hyperreactivity, were much more varied across senses and manifestations than, for example, the Sensory/Motor descriptions, and the items are constructed to reflect the experiences described in the target group. The response format is a 4-point Likert-type scale ranging from 0 (totally disagree) to 3 (totally agree).
The scale scores were interpreted as follows: totally disagree (0) = no atypical sensory reactivity, partly disagree (1) = quite low atypical sensory reactivity, partly agree (2) = quite high atypical sensory reactivity, and totally agree (3) = very high atypical sensory reactivity. The High awareness/Hyperreactivity subscale includes hyperreactivity items and two enhanced perception items. Statistics for the SR-AS in the two groups have been described earlier (Elwin et al. 2016). The scores in the ASC group had a normal distribution verified by the Kolmogorov-Smirnov test (p = .20; skewness 0.2, kurtosis −0.8), whereas the population sample scores were non-normally distributed (p < .001; skewness 2.1, kurtosis 6.4), as illustrated in Fig. 1. Statistical Analyses Chi-square tests and Fisher's exact test were used as appropriate to compare samples and clusters regarding demographic characteristics and comorbid diagnoses. To obtain manageable comparison group sizes, age groups, family situation, and education were allocated to three levels, and current occupation to two levels (Table 1). A hierarchical agglomerative cluster analysis using Ward's method with the Euclidean distance measure was conducted (Hair et al. 1995) to identify subgroups of people with similar sensory features. Subscales obtained by a previous confirmatory factor analysis (Elwin et al. 2016) were entered into the analysis. The agglomeration coefficients and dendrograms were inspected to determine the number of clusters. The stability of the hierarchical Ward's cluster solution for the respective samples was examined using a non-hierarchical k-means cluster analysis with the number of clusters specified in advance based on the hierarchical cluster analysis solutions. Due to the non-normal distribution of data in the population sample, we used the Mann-Whitney U test for comparison of sensory reactivity in the ASC sample in relation to the population sample and for comparisons between clusters in the population sample. One-way ANOVA with Tukey post hoc tests was used for the comparisons of clusters in the ASC sample. Effect sizes for Mann-Whitney U tests were calculated (r), and differences in F-statistics were expressed as eta squared (the proportion of variance explained by group membership). Effect sizes were evaluated in accordance with Cohen's (1988) guidelines: a large effect for η² ≥ 0.14 and for r ≥ .5. A binary logistic regression analysis was performed to test which variables predict cluster membership, with cluster membership dichotomized into two levels as the dependent variable. The alpha level for all statistical tests was set at p < .05. Description and Comparison of the Samples There were no differences in distribution by gender and age between the ASC and population samples. Almost 60% were women and around 50% belonged to the 25-44 age groups. On average, the ASC sample had less advanced education and was more often single and unemployed than people in the population sample (Table 1). A majority of the ASC participants (85%) had also self-reported a comorbid psychiatric diagnosis; these are displayed in Table 2, ordered by ICD-10 category. The total SR-AS mean score and the subscale scores were significantly higher in the ASC sample as compared to the population sample (Table 3). Sensory Clusters in the ASC Sample To test the hypothesis of groups with different levels of frequency of sensory symptoms, a hierarchical cluster analysis was conducted; a schematic sketch of this workflow is given below.
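As a rough illustration, the clustering workflow described above might look like the following Python sketch. This is a minimal sketch, not the study's analysis code: the SciPy/scikit-learn calls, the synthetic data, and the adjusted-Rand stability check are illustrative assumptions.

```python
# Ward's method with Euclidean distances on the four SR-AS subscale scores,
# followed by a k-means check of cluster stability. The data here are
# synthetic placeholders, not the study's dataset.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(71, 4))   # 71 participants x 4 subscale means (0-3)

Z = linkage(X, method="ward", metric="euclidean")
hier_labels = fcluster(Z, t=3, criterion="maxclust")   # three-cluster solution

km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(adjusted_rand_score(hier_labels, km_labels))     # agreement ~ stability
```

The agreement between the hierarchical and k-means solutions, here summarized with the adjusted Rand index, plays the role of the stability check the authors report as the percentage of participants keeping their cluster membership.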
The agglomeration coefficients and the dendrogram generated by the cluster analysis in the ASC sample suggested a three-cluster solution (Table 4). Table 4 shows the cluster groups' mean scores based on all individual means (scale 0-3) for the different subscales. The outcome consisted of one larger group (52%) with quite low atypical sensory reactivity and two equally sized groups (17% each) with elevated scores. The differences between clusters on each sensory variable were examined with one-way ANOVA. Tukey post hoc tests revealed that all subscales, except the Low awareness/Hyporeactivity subscale, differentiated significantly between all clusters. Thus, Low awareness/Hyporeactivity was relatively low in both cluster one and cluster two (Table 4). The effect sizes were large (η² = 0.43-0.76) and especially large for the Sensory/Motor subscale (Fig. 2). The three-cluster solution was validated by, and had good agreement with, a k-means cluster analysis: 96% of the participants in the ASC group kept their cluster membership in the k-means three-cluster solution. There were also some relative differences between clusters, as the Sensory/Motor subscale score in cluster one was lower (0.49) relative to the other subscales and near the population mean of 0.29. Cluster one had some atypical sensory reactivity in High awareness/Hyperreactivity, Low awareness/Hyporeactivity, and Sensory interests (mean scores around 1 = quite low atypical sensory reactivity) compared to the overall means of the population sample (0.4, 0.3, and 0.4). The third cluster had elevated scores on all subscales in relation to cluster two, with above quite high (2) atypical sensory reactivity on all subscales except for Low awareness/Hyporeactivity (1.91), though this subscale was still significantly different from the subscale mean in cluster two (0.96). Cluster three thus represented high-frequency atypical sensory reactivity on all subscales. Sensory Clusters in the Population Sample Two clusters best fitted the data in the population sample: a first large cluster of low scorers (n = 136) and a second small cluster of high scorers relative to the first (n = 26; Table 5). The individuals in the second cluster had scores that deviated markedly from the subscale means in the population sample; seven individuals had extreme values with a mean score >1.3. All factors differentiated significantly between the two clusters in the population sample (Mann-Whitney U test, p < .001 for all comparisons). Effect sizes were large (r = −.52 to −.62). In the k-means cluster analysis of the population sample, 98% of the participants kept their cluster membership. Demographic and Clinical Characteristics of Clusters in the ASC Sample The demographic variables age, gender, education, and occupation were not associated with cluster membership in the ASC sample. We found cluster membership to be associated with the comorbid diagnoses of either ADHD or anxiety as compared to having none of these (χ²[1] = 5.58, p = .024). Alcohol/substance use diagnoses occurred only in clusters 2 and 3 (Fisher's exact test, two-sided, p = .048). There were more individuals in the first cluster (eight individuals) who did not have a comorbid diagnosis compared to the collapsed clusters two and three (three individuals), but the difference was not significant.
To investigate whether ADHD or anxiety, gender, or age predicted cluster membership, a binary logistic regression analysis was performed with cluster membership as the dependent variable, dichotomized into cluster 1 as 0 and clusters 2 and 3 as 1. The total SR-AS score, sex, age group, and having either ADHD or anxiety were independent variables. Alcohol/substance use applied to only four individuals and was therefore not included in the analysis. The binary regression showed that the total SR-AS score was an independent predictor of cluster membership regardless of sex, age group, and ADHD and anxiety comorbidity (OR 1.16, 95% CI 1.08-1.24).
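For illustration, the regression just described could be set up as below with statsmodels. The data frame, variable names, and simulated effect sizes are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 71
df = pd.DataFrame({
    "sras_total": rng.normal(45, 15, n),
    "female": rng.integers(0, 2, n),
    "age_group": rng.integers(1, 4, n),
    "adhd_or_anxiety": rng.integers(0, 2, n),
})
# Cluster 1 coded 0, clusters 2-3 coded 1 (simulated outcome)
p = 1 / (1 + np.exp(-(-7 + 0.15 * df["sras_total"])))
df["elevated_cluster"] = rng.binomial(1, p)

fit = smf.logit(
    "elevated_cluster ~ sras_total + female + C(age_group) + adhd_or_anxiety",
    data=df,
).fit(disp=0)
print(np.exp(fit.params))  # odds ratios (cf. the reported OR of 1.16)
```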
Cluster Membership and Demographic Variables in the Population Sample In the population sample, cluster membership was associated with educational level and current occupation, whereas it was not associated with gender, age, or family situation. In the second cluster, with elevated sensory reactivity, the length of education was shorter compared to cluster one (elementary school 3.7% vs. 21.7%, p = .006, Fisher's exact test, two-sided), and the rates for currently not studying or working differed (5.9% vs. 34.6%, p < .001, Fisher's exact test, two-sided). Discussion In this study we identified sensory subgroups of adults with ASC in a psychiatric sample. The results indicated a low, an intermediate, and a high atypical sensory cluster. The frequency of sensory symptoms was the main difference between clusters. The cluster solution is in line with the hypothesis of an overall frequency/severity difference between clusters (Fig. 2). In the low frequency group, all measures were below the mean for the ASC sample; sensory/motor reactivity in particular was low. In the intermediate group, High awareness/Hyperreactivity, Sensory interests, and Sensory/Motor issues were significantly elevated in relation to cluster one, but Low awareness/Hyporeactivity was not. In the high frequency group, all measures were high and co-occurrence of High awareness/Hyperreactivity and Low awareness/Hyporeactivity was evident. There seems to be considerable consistency between our results and previous cluster solutions in parent-report samples. Ben-Sasson et al. (2008) used similar cluster variables (subscales) to the present study (with the exception of the sensory/motor variable in this study). They found distinct low and high frequency subgroups and varying intermediate subgroups. Ben-Sasson et al. (2008) found low sensory seeking in the medium cluster, in contrast to Ausderau et al., who found two medium clusters: one with high hyperreactivity and enhanced perception in combination with low seeking, and one with high hyporeactivity in combination with high sensory seeking. The discrepancies could be due to differences in age: 18-33 months in the Ben-Sasson et al. (2008) study and 2-12 years in the Ausderau et al. (2014) study. The same consideration applies to the Lane et al. (2014) study, ages 2-10, compared to the Uljarević et al. (2016) study, ages 11-17. The input variables are the same, but Lane et al. (2014) found a pattern of reactions to smell/taste and postural attentiveness in the medium clusters not found in the Uljarević et al. (2016) study. Developmental level differences can be assumed to explain the differences. The results of the present study resemble the Ausderau et al. studies (2014, 2016) with respect to a definite co-occurrence of elevated hyper- and hyporeactivity in a high frequency sensory subgroup alone. There is also a resemblance to the Uljarević et al. study (2016) with respect to the frequency of sensory symptoms as the main discriminator between the individuals in the clusters. Other previous study results on sensory patterns in ASC are inconsistent; for example, Ermer and Dunn (1998) found a low incidence of sensory seeking, while the Tomchek and Dunn (2007) study found hyporeactivity/seeking to have the highest incidence. Uljarević et al. (2016) discuss the possibility that the relative differences in frequency between sensory reactivity types (subscales) may change with age and reconstruct into a sensory spectrum. Sensory systems are immature at birth and develop with age in typical development (Burr and Gori 2012). Sensory reactivity would differ in toddlers and young children as compared to older children, adolescents, and adults, as sensory systems become increasingly refined. There is a broadening of multisensory perceptual capacity and also narrowing processes leading to increased responsiveness to stimuli in the individual's physical and social environment, while responsiveness to other stimuli decreases (Lewkowicz 2014). Besides developmental changes, the use of compensating and coping strategies is likely to develop with age, and possibly more so in individuals without intellectual disability. In qualitative research (Chamak et al. 2008; Jones et al. 2003; Robledo et al. 2012; Smith and Sharp 2013), the coping strategies used by adults with ASC are shared features of the findings. The large effect sizes of cluster group membership are another similarity between our study and the findings of Ben-Sasson et al. (2008), with eta-squared and partial eta-squared ranging from 0.42 to 0.53 across studies for hyper- and hyporeactivity and sensory interests. The results from the present study and other cluster analyses indicate a sensory spectrum and thus sensory symptoms falling along a continuum. The distributions of scores in both samples are similar to the distribution of scores in ASC and comparison cases on the sensory/motor scale of the RAADS in a study by Andersen et al. (2011). We do not know how self-reports of sensory symptoms agree with parent reports. There is no research comparing self-report from high-functioning children/adolescents or adults with reports from their parents, and we do not know if the source of information influences the results in a systematic way. Research on how well self- and parent reports correlate is needed when trying to understand more about sensory reactivity and its development across the life span. For adults it is essential that their own judgements are considered. It is possible that parents are not aware of some sensory reactions, since these are not always observable; parents' knowledge of sensory symptoms may also decrease with time. Moreover, adults with ASC and their parents may have different perspectives on sensory issues. The qualitative research on sensory reactivity cited above has shown that many individuals with ASC place great importance on sensory stimuli and the sensory environment, and this view may not be shared by their parents. It is also possible that individuals with ASC have differences in perception that cause them not to be fully aware of their sensory reactions, and both parent and self-report are needed. It is especially important to investigate the impact on the everyday lives of the group with highly elevated atypical sensory reactivity.
Although sensory differences can be both positive and negative, they must nevertheless be handled by the individual. An illustration of the strong impact of sensory issues is a written comment from one of our ASC participants, who commented on an item about being fascinated by some stimuli: "Here I would need a further response step with something like: This is essentially who I am". The self-report scale can be used as an important tool in clinical practice with adults. It provides information that can influence treatment approaches and make it easier for adult patients to talk about sensory symptoms. A surprising result is the relatively high incidence of hyporeactivity as measured by the SR-AS in the general population. In general, however, the cluster pattern for the SR-AS in the population sample is similar to the cluster pattern for the Highly Sensitive Person Scale (HSP; Aron and Aron 1997; Aron et al. 2012). A very recent study involving children from the general population showed, in accordance with the results from our study, that approximately 12% had various types of unusual sensory reactivity. The rate of psychiatric comorbidity was high in this study, as is often the case in samples of psychiatrically referred adolescents and adults (Hofvander et al. 2009; Lugnegård et al. 2011). In these studies, the majority of people with ASC had at least one psychiatric comorbid diagnosis, and the lifetime prevalence rates reported were 50-77% for depressive disorders, around 50% for anxiety disorders, and around 30-40% for ADHD. Rates for psychotic disorders were 5-13% and for eating disorders around 5%. In studies involving other types of ASC samples, the proportion of individuals with psychiatric comorbidity is smaller, ranging from 20% (Hutton et al. 2008) to around 30% experiencing severe mental health problems (Moss et al. 2015). Anxiety disorders, depressive disorders, and ADHD are prevalent in the ASC sample in this study. For anxiety disorders, the rate is ~30%, approximately three times the estimated ~12% population prevalence (DSM-5). For major depressive disorder (ICD-10; F32-F33), the rate was 38% in the ASC sample, five times the estimated population rate of 7%, with a three times higher rate in individuals aged 18-29 years than in individuals aged 60 years or older (DSM-5). The prevalence of ADHD is 17 times higher, with 42% in this ASC sample compared to 2.5% of adults in the general population. This large discrepancy from population prevalence rates for ADHD may be due to screening for ADHD, but not for other psychiatric disorders, in the ASC diagnostic procedures at the clinics involved. Because the inclusion of participants in the population sample was completely at random from the population register, we think it is reasonable to assume that the prevalence of ASC in the population sample is ~1% and that other psychiatric disorders are in the range of what is reported in DSM-5 for the general population. The male to female ratio in this study is at odds with the sex distribution usually found in ASC of approximately 4:1 (DSM-5, APA 2013). In adult psychiatric samples, for example Hofvander et al. (2009) and Eriksson et al. (2013), the sex ratio is more even. There is some evidence that females with ASC develop more concomitant psychopathology (Holtmann et al. 2007), which could explain some of the differences in the male:female ratio in adult psychiatric samples. The differences in demographic variables between the ASC and population samples were expected.
Research on outcomes in ASC, recently reviewed comprehensively by Howlin (2014), has shown poor outcomes for many individuals with ASC diagnoses in education, employment, and social or close relationships, regardless of intellectual level. In the ASC sample, a significant relationship between cluster membership and comorbidity of either anxiety or ADHD was found. A decreased regulation of response to stimulation may be related to increased mental health problems. Ben-Sasson et al. (2008) found more depression and anxiety symptoms, and Uljarević et al. (2016) found more anxiety, in high frequency sensory clusters. In our study, those with less education and those who were currently not working in the population sample were more represented in cluster two (people with elevated atypical sensory reactivity), indicating that this cluster may be a more troubled group. An association between health issues and higher scores on sensory measures in the general population has been found even after controlling for autistic traits (Horder et al. 2013). The lack of difference between sensory clusters on demographics in the ASC sample should be interpreted cautiously. It could be due to a lack of power to detect differences in the small demographic subgroups. On the other hand, very successful persons with ASC have described a broad range and high frequency of sensory issues (Elwin et al. 2012). There is also some research on the relationship between sensory symptoms and the other criteria in the second dimension of ASC. Boyd et al. (2010) found that high levels of hyperreactivity predicted high levels of repetitive behaviors, regardless of intellectual level, and that seeking was significantly related to ritualistic/sameness behaviours. There are several limitations to this study. An overall limitation is the lack of more extensive validation of the SR-AS. Another major limitation is the absence of a measure of ASC traits in both samples and the lack of information on psychiatric disorders, including ASC, in the population sample. Results of cluster analyses cannot be separated from the choice of input variables (Hair et al. 1995). In our cluster analysis, as in the cluster analyses by Ben-Sasson et al. (2008), the input variables did not include separate sensory modalities, and possible variations at the sensory modality level cannot be seen in the results. Another limitation is that the ASC participants were clinically recruited and not representative of the general ASC population. Further, most of the participants (85%) received their ASC diagnosis in adulthood. The participants are thought to be similar to those referred for diagnostic evaluations in adulthood, and the results from this study may not generalise to adults who were referred as young children. Moreover, the comorbidity rates were high, which may also limit the generalisability of the results. Clinical Implications and Future Directions The need to assess atypical sensory characteristics was demonstrated. Whether an individual belongs to a mildly elevated or a highly elevated sensory subgroup is important information when planning support and interventions. Living with high levels of High awareness/Hyperreactivity and sensory overload causes distress. Sensations are described as a source of both pleasure and discomfort, and sensory reactions in general have a stronger and sometimes disruptive impact compared to the way they are experienced by people without autism. This is obvious in the qualitative studies referred to above.
Missing information from the environment and from one's own body due to Low awareness/Hyporeactivity can also create problems in social interactions and with daily recurring routines such as food and sleep (Elwin et al. 2013; Fiene and Brownlow 2015). There are no prior validated self-report instruments on sensory reactions tailored for adults with ASC, but even though the SR-AS offers promising validity and reliability, further assessment of its psychometric properties is needed. Another goal for future research on sensory reactivity in ASC is to investigate how the results from self-report compare to reports from parents. Further research also needs to focus on developmental aspects of sensory function in ASC in relation to typical development.
DIFM: An effective deep interaction and fusion model for sentence matching Natural language sentence matching is the task of comparing two sentences and identifying the relationship between them. It has a wide range of applications in natural language processing tasks such as reading comprehension and question answering systems. The main approach is to compute the interaction between text representations and sentence pairs through an attention mechanism, which can extract the semantic information between sentence pairs well. However, such methods fail to capture deep semantic information and to effectively fuse the semantic information of the sentence. To solve this problem, we propose a sentence matching method based on deep interaction and fusion. We first use the pre-trained word vectors Glove and character-level word vectors to obtain word embedding representations of the two sentences. In the encoding layer, we use a bidirectional LSTM to encode the sentence pairs. In the interaction layer, we initially fuse the information of the sentence pairs to obtain low-level semantic information; at the same time, we use the bi-directional attention from the machine reading comprehension model and self-attention to obtain the high-level semantic information. We use a heuristic fusion function to fuse the low-level semantic information and the high-level semantic information to obtain the final semantic information, and finally we use a convolutional neural network to predict the answer. We evaluate our model on two tasks: textual entailment recognition and paraphrase recognition. We conducted experiments on the SNLI dataset for the recognizing textual entailment task and the Quora dataset for the paraphrase recognition task. The experimental results show that the proposed algorithm can effectively fuse different semantic information and verify the effectiveness of the algorithm on sentence matching tasks. Introduction Natural language sentence matching is the task of comparing two sentences and identifying the relationship between them. It is a fundamental technique for a variety of tasks. For example, in the paraphrase recognition task, it is used to determine whether two sentences are paraphrases. In the textual entailment recognition task, it is possible to determine whether a hypothesis sentence can be inferred from a premise sentence. Recognizing Textual Entailment (RTE), proposed by Dagan (Dagan and Glickman, 2004), is the study of the relationship between premises and hypotheses. It mainly includes entailment, contradiction, and neutrality. The main methods for recognizing textual entailment include the following: similarity-based methods (Ren et al., 2015), rule-based methods (Hu et al., 2020), and alignment-feature-based machine learning methods (Sultan et al., 2015). However, these methods do not perform well because they do not extract the semantic information of the sentences well. In recent years, deep learning-based methods have been effective in semantic modeling, achieving good results in many NLP tasks (Jin et al., 2021; Li et al., 2021). Therefore, on the task of recognizing textual entailment, deep learning-based methods have outperformed earlier approaches and become the dominant approach. For example, Bowman et al.
used recurrent neural networks to model premises and hypotheses, which have the advantage of making full use of syntactic information (Bowman et al., 2015a). After that, they first applied LSTM sentence models to the RTE domain by encoding premises and hypotheses through an LSTM to obtain sentence vectors (Bowman et al., 2015b). Wang et al. proposed the mLSTM model on this basis, which splices attention weights into the hidden states of the LSTM, focusing on the parts where the premise and the hypothesis match semantically. The experimental results showed that the method achieved good results on the SNLI dataset (Wang and Jiang, 2016). Paraphrase recognition is also called paraphrase detection. The task of paraphrase recognition is to determine whether two texts hold the same meaning. If they do, they are called a paraphrase pair. Traditional paraphrase recognition methods focus on text features; however, they suffer from problems such as a low accuracy rate. Therefore, deep learning-based paraphrase recognition methods have become a hot research topic. Deep learning-based paraphrase recognition methods are mainly divided into two types: 1) computing word vectors with a neural network and then calculating word vector distances to determine whether texts are paraphrase pairs. For example, Huang et al. used an improved EMD method to calculate the semantic distance between vectors and obtain the paraphrase relationship (Dong-hong, 2017). 2) Directly determining whether a text pair is a paraphrase pair with a neural network model, which is essentially a binary classification algorithm. Wang et al. proposed the BIMPM model, which first encodes sentence pairs with a bidirectional LSTM and then matches the encoding results from multiple perspectives in both directions (Wang et al., 2017). Chen et al. proposed the ESIM model, which uses a two-layer bidirectional LSTM and a self-attention mechanism for encoding, then extracts features through average pooling and maximum pooling layers, and finally performs classification (Chen et al., 2017). The models mentioned above have achieved good results on specific tasks, but most of them have difficulty extracting deep semantic information and effectively fusing the extracted semantic information. In this paper, we propose a sentence matching model based on deep interaction and fusion. We use bi-directional attention and self-attention to obtain high-level semantic information. Then, we use a heuristic fusion function to fuse the low-level semantic information and the high-level semantic information to obtain the final semantic information. We conducted experiments on the SNLI dataset for the recognizing textual entailment task and the Quora dataset for the paraphrase recognition task. The results showed that the accuracy of the proposed algorithm on the SNLI test set is 87.1%, and the accuracy on the Quora test set is 86.8%. Our contributions can be summarized as follows: • We propose a sentence matching model based on deep interaction and fusion. It introduces the bidirectional attention mechanism into the sentence matching task for the first time. • We propose a heuristic fusion function. It can learn the weights of fusion by a neural network to achieve deep fusion. • We evaluate our model on two different tasks and validate the effectiveness of the model.
BIDAF model based on bi-directional attention flow In the task of extractive machine reading comprehension, Seo et al. first proposed a bi-directional attention flow model, BIDAF (Bi-Directional Attention Flow), for question-to-article and article-to-question attention (Seo et al., 2016). Its structure is shown in Figure 1 (Figure 1: Bi-Directional Attention Flow Model). The model mainly consists of an embedding layer, a contextual encoder layer, an attention flow layer, a modeling layer, and an output layer. After character-level word embedding and pre-trained Glove word embedding, the contextual representations X and Y of the article and the question are obtained by a bidirectional LSTM, respectively. The bi-directional attention flow between them is then computed, and it proceeds as follows: a) The similarity matrix between the question and the article is calculated, as shown in Eq. 1, where K_tj is the similarity of the t-th article word to the j-th question word, X_:t is the t-th column vector of X, Y_:j is the j-th column vector of Y, and W is a trainable weight vector. b) The article-to-question attention is calculated. First, a normalization operation is performed on the similarity matrix, and then the weighted sum of the question vectors is calculated to obtain the article-to-question attention, as shown in Eq. 2: x̃_t = Σ_j softmax(K_t:)_j Y_:j. c) Query-to-context (Q2C) attention signifies which context words have the closest similarity to one of the query words and are hence critical for answering the query. We obtain the attention weights on the context words by ỹ = softmax(max_col(K)) ∈ R^T, where the maximum function max_col is performed across the column. Then the attended context vector is x̃ = Σ_t ỹ_t X_:t. This vector indicates the weighted sum of the most important words in the context with respect to the query. x̃ is tiled T times across the column, thus giving X̃ ∈ R^(2d×T). d) Fusion of the bidirectional attention streams. The bidirectional attention streams obtained above are spliced together to obtain the new representation, as shown in Eq. 3. We build on this work by treating the sentence pairs of a natural language sentence matching task as the articles and questions of reading comprehension. We use bi-directional attention and self-attention to obtain high-level semantic information. Then, we use a heuristic fusion function to fuse the low-level semantic information and the high-level semantic information to obtain the final semantic information.
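As an illustration of the bidirectional attention computation described above, the following PyTorch sketch implements the two attention streams and their splicing. This is a minimal sketch rather than the paper's implementation: the trainable similarity of Eq. 1 is simplified to a dot product, and the function name and the exact concatenation pattern are our own assumptions.

```python
import torch
import torch.nn.functional as F

def bidirectional_attention(H: torch.Tensor, P: torch.Tensor) -> torch.Tensor:
    """Bidirectional attention between two encoded sequences.

    H: (n, 2d) encoding of one text; P: (m, 2d) encoding of the other.
    """
    S = H @ P.T                                   # (n, m) similarity matrix
    # H -> P: row-wise softmax, then weighted sum of P (cf. Eq. 2 / Eq. 6)
    Q = F.softmax(S, dim=1) @ P                   # (n, 2d)
    # P -> H: attend over H using the per-row maximum of S (cf. Eq. 3 / Eq. 7)
    b = F.softmax(S.max(dim=1).values, dim=0)     # (n,) attention over H's words
    c = (b.unsqueeze(1) * H).sum(dim=0)           # (2d,) attended vector
    C = c.expand(H.size(0), -1)                   # tiled over n time steps
    # Splice the attention streams with the original encoding (cf. Eq. 8)
    return torch.cat([H, Q, H * Q, H * C], dim=1) # (n, 8d)

G = bidirectional_attention(torch.randn(12, 64), torch.randn(9, 64))
```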
Method In this section, we describe our model in detail. As shown in Figure 2, our model mainly consists of an embedding layer, a contextual encoder layer, an interaction layer, a fusion layer, and an output layer. Embedding Layer The purpose of the embedding layer is to map the input sentences A and B into word vectors. The traditional mapping method is one-hot encoding; however, it is spatially expensive and inefficient, so we use pre-trained word vectors for word embedding. These word vectors are held constant during training. Since the text contains unregistered (out-of-vocabulary) words, we also use character-level word vector embedding. Each word can be seen as a sequence of characters, and we use an LSTM to obtain character-level word vectors. This can effectively handle unregistered words. We assume that the pre-trained word vector for word h is h_w and the character-level word vector is h_c; we splice the two vectors and use a two-layer highway network (Zilly et al., 2017) to get the word vector representation of word h: h = [h_w; h_c] ∈ R^(d_1+d_2), where d_1 is the dimension of the Glove word embedding and d_2 is the dimension of the character-level word embedding. Finally, we obtain the word embedding matrix X ∈ R^(n×(d_1+d_2)) for sentence A and the word embedding matrix Y ∈ R^(m×(d_1+d_2)) for sentence B, where n and m represent the numbers of words in sentences A and B. Contextual Encoder Layer The purpose of the contextual encoder layer is to fully exploit the contextual relationship features of the sentences. We use a bidirectional LSTM for encoding, which can mine the contextual relationship features of the sentences. We then obtain the representations H ∈ R^(2d×n) and P ∈ R^(2d×m), where d is the hidden layer dimension. Interaction Layer The purpose of the interaction layer is to extract the effective features between sentences. In this module, we obtain low-level semantic information and high-level semantic information. Low-level semantic information This module initially fuses the two sentences to obtain the low-level semantic information. We first calculate the similarity matrix S of the context-encoded representations H and P, as shown in Eq. 4, where S_ij denotes the similarity between the i-th word of H and the j-th word of P, W_s is a weight matrix, h is the i-th column of H, and p is the j-th column of P. Then, we calculate the low-level semantic information V of A and B, as shown in Eq. 5. High-level semantic information The purpose of this module is to mine the deep semantics of the text and generate high-level semantic information. In this module, we first calculate the bidirectional attention of H and P, that is, the attention of H → P and P → H. It is calculated as follows. H → P: this attention describes which words in sentence P are most relevant to H. The calculation proceeds as follows: first, each row of the similarity matrix is normalized to get the attention weights, and then the new text representation Q ∈ R^(2d×n) is obtained by a weighted summation with each column of P, as shown in Eq. 6, where q_:t is the t-th column of Q. P → H: this attention indicates which words in H are most similar to P. The calculation proceeds as follows: first, the column with the largest value in the similarity matrix S is taken to obtain the attention weight; then the weighted sum of H is expanded over n time steps to obtain C ∈ R^(2d×n), as shown in Eq. 7. After obtaining the attention matrix Q of H → P and the attention matrix C of P → H, we splice the attention in these two directions with a multilayer perceptron and obtain the spliced contextual representation G, as shown in Eq. 8. Then, we calculate its self-attention (Vaswani et al., 2017), as shown in Eq. 9. Finally, we pass the above semantic information Z through a bi-directional LSTM to obtain the high-level semantic information U. Fusion Layer The purpose of the fusion layer is to fuse the low-level semantic information V and the high-level semantic information U. We propose a heuristic fusion function that learns the fusion weights with a neural network to achieve deep fusion. We fuse V and U to obtain the text representation L = fusion(U, V) ∈ R^(n×2d), where the fusion function is defined as shown in Eq. 10, in which W_1 and W_2 are weight matrices and g is a gating mechanism that controls the weight of the intermediate vector in the output vector. In this paper, x refers to U and y refers to V. A sketch of this gated fusion is given below.
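Since the body of Eq. 10 is not reproduced in the text, the following PyTorch sketch shows one common form of gated fusion that matches the description: two weight matrices W_1 and W_2 and a gate g. The exact feature combination [x; y; x*y; x-y] is our assumption, not necessarily the paper's.

```python
import torch
import torch.nn as nn

class HeuristicFusion(nn.Module):
    """Gated fusion of x (here U) and y (here V); a sketch of Eq. 10."""
    def __init__(self, d: int):
        super().__init__()
        self.w1 = nn.Linear(4 * d, d)   # produces the fused candidate (roughly W_1)
        self.w2 = nn.Linear(4 * d, d)   # produces the gate (roughly W_2)

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([x, y, x * y, x - y], dim=-1)
        m = torch.tanh(self.w1(feats))      # candidate fused vector
        g = torch.sigmoid(self.w2(feats))   # gate in (0, 1)
        return g * m + (1 - g) * x          # gated blend of candidate and x

fuse = HeuristicFusion(d=128)
L = fuse(torch.randn(10, 128), torch.randn(10, 128))   # (n, d) fused output
```

Here g interpolates, per dimension, between the fused candidate m and the original input x, so the network can learn how much of the fused signal to let through.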
Output Layer The purpose of the output layer is to output the results. In this paper, we use a linear layer to obtain the result of sentence matching, as shown in Eq. 11, where both W and b are trainable parameters and Z is the vector obtained by splicing its first and last vectors. Experiments In this section, we validate our model on two datasets from two tasks. We first present some details of the model implementation, then we show the experimental results on the datasets, and finally we analyze the experimental results. Loss function In this paper, the cross-entropy loss function is chosen, as shown in Eq. 12, where N is the number of samples, K is the total number of categories, and ŷ_(i,k) is the true label of the i-th sample. Dataset In this paper, we use the natural language inference dataset SNLI and the paraphrase recognition dataset Quora to validate our model. The SNLI dataset contains 570K manually labeled and categorically balanced sentence pairs. The Quora question pair dataset contains over 400k pairs of data, each with binary annotations, with 1 being a duplicate and 0 being a non-duplicate. The statistical descriptions of the SNLI and Quora data are shown in Table 1. Parameter settings This experiment was conducted in a hardware environment with an RTX5000 graphics card and 16 GB of video memory. The system is Ubuntu 20.04, the development language is Python 3.7, and the deep learning framework is PyTorch 1.8. In the model training process, 300-dimensional Glove word vectors are used for word embedding, and the maximum length of text sentences is set to 300 and 50 words on the SNLI and Quora datasets, respectively. The specific hyperparameter settings are shown in Table 2. Experimental results and analysis We compare the experimental results of the sentence matching model based on deep interaction and fusion on the SNLI dataset with other published models. The evaluation metric we use is accuracy. The results are shown in Table 3. As can be seen from Table 3, our model achieves an accuracy of 0.871 on the SNLI dataset, which is among the best of the listed models. Compared with the LSTM, it is improved by 0.065; compared with the Star-Transformer model, by 0.004. Compared with some other models, it is observed that our model outperforms them. In Table 3, results marked a are reported by Bowman et al. (2016), b by Han et al. (2019), c by Shen et al. (2018), d by Borges et al. (2019), e by , and f by Mu et al. (2018). We also conducted experiments on the Quora dataset, with accuracy as the evaluation metric. The experimental results on the Quora dataset are shown in Table 4. As can be seen from Table 4, the accuracy of our method on the test set is 0.868, an improvement of 0.054 over the traditional LSTM model and of 0.004 over the enhanced sequential inference model ESIM.
Compared to some current popular deep learning methods, our model achieves relatively good results in both tasks, which illustrates its effectiveness. In Table 4, results marked h are reported by He and Lin (2016), i by , and j by Chen et al. (2017). Ablation experiments To explore the role played by each module, we conducted an ablation experiment on the SNLI dataset. Without the fusion function, the low-level semantic information is directly spliced with the high-level semantic information. The experimental results are shown in Table 5. We first verify the effectiveness of character embedding. Specifically, we remove the character embedding, and accuracy drops by 1.5 percentage points, showing that character embedding plays an important role in the performance of the model. In addition, we verify the effectiveness of the semantic information and fusion modules. We removed the low-level semantic information and the high-level semantic information from the original model, and accuracy dropped by 1.2 and 7.6 percentage points, respectively. At the same time, we removed the fusion function, and accuracy dropped by about 1.0 percentage point. This shows that the different kinds of semantic information and the fusion function are beneficial to the accuracy of the model, with the high-level semantic information being the most significant. Finally, we verify the effectiveness of each attention mechanism. We removed the attention from P to H, the attention from H to P, and the self-attention module, respectively; accuracy decreased by 2.5, 0.9, and 1.3 percentage points. This shows that all the attention mechanisms improve the performance of the model, with the P-to-H attention being the most significant. The ablation experiments show that each component of our model plays an important role, especially the high-level semantic information module and the P-to-H attention module, which have the greatest impact on performance. Meanwhile, the character embedding and the fusion function also play an important role in our model. Conclusion We investigate natural language sentence matching methods and propose an effective deep interaction and fusion model for sentence matching. Our model first uses the bi-directional attention from the machine reading comprehension literature and self-attention to obtain high-level semantic information. Then, we use a heuristic fusion function to fuse the semantic information obtained. Finally, we use a linear layer to produce the sentence matching result. We conducted experiments on the SNLI and Quora datasets. The experimental results show that the model proposed in this paper achieves good results on both tasks. In this work, we find that our proposed interaction and fusion modules occupy a dominant position and have a great impact on the model. However, our model is not as powerful as pre-trained models in terms of feature extraction, and it lacks external knowledge. The next research plan will focus on the following two points: 1) using more powerful feature extractors, such as the pre-trained BERT model, as text feature extractors; 2) considering the introduction of external knowledge.
For example, WordNet, an external knowledge base, contains many sets of synonyms, and for each input word, its synonyms are retrieved from WordNet and embedded in the word vector representation of the word to further improve the performance of the model.
Accessing to the minor proteome of red blood cells through the influence of the nanoparticle surface properties on the corona composition

Nanoparticle (NP)–protein interactions in complex samples have not yet been clearly understood. Nevertheless, several studies demonstrated that an NP's physicochemical features significantly impact the protein corona composition. Taking advantage of the NP potential to harvest different subsets of proteins, we assessed for the first time the capacity of three kinds of superparamagnetic NPs to highlight the erythrocyte minor proteome. Using both qualitative and quantitative proteomics approaches, nano-liquid chromatography–tandem mass spectrometry allowed the identification of 893 different proteins, confirming the reproducible capacity of NPs to increase the number of identified proteins, through a reduction of the sample concentration range and the capture of specific proteins on the three different surfaces. These NP-specific protein signatures revealed significant differences in their isoelectric point and molecular weight. Moreover, this NP strategy offered deeper access to the erythrocyte proteome, highlighting several signaling pathways implicated in important erythrocyte functions. The automation potential, the reproducibility, and the low sample consumption demonstrate the strong compatibility of our strategy with large-scale clinical studies, and it may become a standardized sample preparation in future erythrocyte-associated proteomics studies.

Introduction

With the emergence of nanotechnology, the engineering of "smart" nanoparticles (NPs) has opened up a host of possibilities in medicine and biology. [1][2][3] Nevertheless, once injected into the blood circulation, the fate of these NPs must be anticipated to prevent NP-associated toxicity. 4,5 When an NP is introduced into a biological fluid, it is well recognized that proteins immediately cover its surface, resulting in the formation of a protein "corona", which rapidly evolves according to the concentration and the affinity of proteins for the NP. 6,7 Over the last decade, several studies used different analytical methods 8 and focused on the performance of NPs in the presence of complex protein mixtures [9][10][11] to demonstrate the complexity of these NP–protein interactions. 12,13 First, the "nano-dimension" of these particles endows them with properties completely different from those of bulk materials with the same composition. Moreover, NPs have a very large surface-to-volume ratio, so that even a small amount of particles presents a large surface area available for protein binding. Finally, the physicochemical properties of NPs, such as their size, surface charge, or chemical functionalization, determine their protein corona composition, 11,[14][15][16][17] which may in turn influence the biological response of the body. 18,19 This capacity of NPs to concentrate at their surface different subsets of proteins according to their physicochemical properties may be useful in proteomics to improve protein detection and identification, and to detect potential disease-associated species. [20][21][22] In fact, despite the constant technological advances in mass spectrometry (MS) and sample fractionation methods, the limited dynamic range of current mass spectrometers hinders the entire proteome coverage of biological fluids, in which the protein concentration range spans more than ten orders of magnitude. 23,24
Thus, the low-abundant protein compartment, where disease-associated biomarkers are thought to reside, is still unattainable. Various nanomaterial-associated strategies [25][26][27][28] have been integrated into MS in order to detect disease-associated biomarkers, and have been found extremely useful to improve protein enrichment and sample preparation, particularly NP-associated technology. [29][30][31] Finding biomarkers for disease detection, stratification, as well as therapeutic monitoring is a biomedical priority. Erythrocytes or red blood cells (RBCs) are a paradigmatic case for clinical proteomics and also a major circulating compartment in which to individualize biomarkers. 32,33 RBCs circulate all over the body and deliver oxygen to most organs. They are crucial micro-organites able to capture and provide remote biomolecular information at the periphery. They are also involved in hematological malignancies 34,35 for which relevant biomarkers are still missing. Because of the presence of hemoglobin (Hb), which represents 98% of the total RBC soluble protein content and masks the 2% of biologically interesting low-abundant proteins, it is difficult to investigate the deep RBC proteome in clinical and large-scale studies. Although technical advances in RBC sample preparation and fractionation have greatly contributed to reducing the protein concentration range, allowing an increase in the number of proteins identified by tandem MS (MS/MS), 36,37 these approaches are time-consuming and require intensive on-gel fractionation and large volumes of sample to be really efficient, thus limiting the clinical translation of such technologies for high-throughput large-scale studies. Until now, the capacity of NPs for protein enrichment in complex protein mixtures has never been assessed with RBC proteins. In the present work, the capacity of three kinds of NPs to capture RBC proteins was determined by a qualitative and quantitative proteomics analysis. Based on a fast experimental protocol that required a low volume of RBC lysate, and without any fractionation step, nano-liquid chromatography (LC)-MS/MS analysis of the different NP-treated samples revealed 893 different proteins, demonstrating the interest of our nanoproteomics approach for analyzing the RBC proteome in a large-scale context. We used label-free quantification 38 to confirm the reproducibility and the influence of the NP surface on protein capture, highlighting subsets of proteins specific to each kind of NP. Through this work, we demonstrated for the first time the interest of chemically modified NPs in sample preparation for RBC proteomics studies.

Magnetic NPs

Superparamagnetic NPs of 100 nm in aqueous suspension were obtained from Chemicell GmbH (Berlin, Germany). Three different kinds of NPs grafted with different matrices and functional groups were used in our study: fluidMAG-OS (OS-NP), fluidMAG-PAA (PAA-NP), and fluidMAG-Q (Q-NP) (Table 1). They have a multi-domain core allowing fast and easy magnetic isolation by an external magnet (MagnetoPURE, Chemicell GmbH). More detailed features are available at http://www.chemicell.com/. The size and zeta potential of the magnetic NPs were determined with a Malvern Zetasizer NanoZS. NPs were diluted in phosphate-buffered saline (PBS; pH 7.3) before measurement, and the measurement was conducted at 25°C using 0.5 mg/mL NP concentrations.
RBC preparation and lysis

Blood samples were collected from five healthy consenting donors, three men and two women aged 44-62 years (mean 55.4±7.7 years), by venipuncture in ethylenediaminetetraacetic acid vacutainer tubes. RBC lysis was performed according to the protocol previously published by Roux-Dalvai et al. 37 Briefly, samples were first centrifuged at 1,000× g at +4°C for 10 minutes to eliminate plasma and buffy coat. To further eliminate leukocytes, a Ficoll gradient separation was performed. After removal of the monolayer containing mononuclear cells, erythrocytes were carefully collected, avoiding the top RBC layer and the polynuclear cells collected in the pellet. Erythrocytes were then washed three times with PBS + PMSF (154 mM NaCl, 10 mM phosphate buffer, pH 7.4, containing 0.1 mM PMSF). At each step, along with the supernatant, the upper RBC layer was removed. The lysis of RBCs was performed by hypotonic shock. Red cells were diluted at a 1:3 ratio with lysis buffer (5 mM phosphate buffer, pH 7.4, containing 1 mM ethylenediaminetetraacetic acid and 0.5 mM PMSF) containing protease inhibitor and left on ice for 30 minutes. After freezing, red cells were thawed at 37°C, and the freezing/thawing procedure was repeated.

LC-MS/MS analysis of RBC proteins

The Supplementary materials provide a detailed description of the LC-MS/MS identification. Briefly, 10 µg of each duplicated eluate and crude RBC lysate were separately diluted in Laemmli buffer and stacked in one band after a short, low-voltage electrophoretic migration on an SDS-polyacrylamide gel electrophoresis (SDS-PAGE) gel. The stacking bands were cut, proteins were in-gel digested by trypsin, and the resulting peptides were extracted from the gel and analyzed by nano-LC-MS/MS using an Ultimate 3000 system (Dionex, Amsterdam, the Netherlands) coupled to an LTQ-Orbitrap Velos mass spectrometer (Thermo Scientific, Bremen, Germany). The LTQ-Orbitrap was operated in data-dependent acquisition mode with the Xcalibur software. Survey scan MS spectra were acquired in the Orbitrap in the 300-2,000 m/z range with the resolution set to a value of 60,000. The five most intense ions per survey were selected for collision-induced dissociation fragmentation, and the resulting fragments were analyzed in the linear trap quadrupole (LTQ).

Database search and data analysis

The Mascot Daemon software (version 2.3.0, Matrix Science, London, UK) was used to perform database searches against Homo sapiens entries in the Uniprot protein database. The mass tolerances in MS and MS/MS were set to 5 ppm and 0.6 Da, respectively, and the instrument setting was specified as "ESI-Trap". Mascot results were parsed with the in-house-developed software Mascot File Parsing and Quantification (MFPaQ) v4.0.0 (http://mfpaq.sourceforge.net/), and protein hits were automatically validated if they satisfied one of the following criteria: identification with at least one top-ranking peptide with a Mascot score of more than 39 (P-value < 0.001), or at least two top-ranking peptides each with a Mascot score of more than 22 (P-value < 0.05). When several proteins matched exactly the same set of peptides, only one member of the protein group was reported in the final list. To evaluate the false-positive rate in these experiments, all the initial database searches were performed using the "decoy" option of Mascot.
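Read as pseudocode, the two acceptance rules amount to a simple filter over peptide scores. The sketch below is illustrative only; the function name and data layout are hypothetical, and the actual validation was done inside MFPaQ.

def validate_protein(peptides):
    """peptides: list of (rank, mascot_score) pairs for one protein hit.
    Accept the hit if one top-ranking peptide scores above 39 (P < 0.001)
    or at least two top-ranking peptides score above 22 (P < 0.05)."""
    top_scores = [score for rank, score in peptides if rank == 1]
    return any(s > 39 for s in top_scores) or sum(s > 22 for s in top_scores) >= 2

# Toy usage with invented scores:
print(validate_protein([(1, 41.2)]))             # True: one strong peptide
print(validate_protein([(1, 25.0), (1, 23.5)]))  # True: two decent peptides
print(validate_protein([(1, 18.0), (2, 30.0)]))  # False: lower-ranking hit ignored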
Label-free quantification

Quantification of proteins was performed using the label-free module implemented in the MFPaQ v4.0.0 software (http://mfpaq.sourceforge.net/). For each sample, the software uses the validated identification results and extracts the ion chromatograms (XICs) of the identified peptides from the corresponding raw nano-LC-MS files, based on their experimentally measured retention time and monoisotopic m/z values. Quantification of peptide ions was performed based on the calculated XIC area values. To compare the abundance profile of one protein in different samples, a protein abundance index (PAI) was calculated for each protein in the different NP samples. The PAI was defined as the average of the XIC area values of three intense reference tryptic peptides identified for this protein. If only one or two peptides were used to identify the protein, the protein-related PAI was calculated based on the XIC area value of these peptides. As the three different kinds of NPs were experimentally duplicated, six PAI values were obtained for each protein; the highest PAI value was normalized to 1, and the five other PAI values were thus comprised between 0 and 1. Normalized PAIs were submitted to the software R for protein hierarchical clustering. 39

Magnetic NPs

In this study, we investigated the capacity of three different kinds of 100 nm NPs for protein capture in RBC lysates through the protein identification and quantification of their respective eluted coronae. This size of NPs offers a high surface-to-volume ratio and facilitates NP collection by an external magnet. The main difference between these NPs concerns their surface charge and/or functional group. First, according to dynamic light scattering and zeta potential measurements, the respective NPs were homogeneous in size and monodispersed. Q-NPs displayed a positive charge, and PAA-NPs and OS-NPs both displayed a negative charge (Table 1).

SDS-PAGE analysis

As the particles fulfilled the critical quality criteria, 500 µg of each kind of NPs, developing a 280 cm2 surface area, were separately incubated in duplicate for 45 minutes with 200 µL of RBC lysate (ie, 1 mg of proteins), followed by extensive washing to remove all unbound proteins. First, Bradford assays revealed 19±1 µg of proteins eluted from the surface of each kind of NPs, which represented 2% of the initial protein input. Prior to deeper characterization of each NP corona, the same quantity of proteins was loaded on an SDS-PAGE gel (Figure 1) to obtain a protein profile of each condition. For the three types of NPs, SDS-PAGE analysis of the eluted protein corona revealed a pattern obviously different from the nontreated sample. Interestingly, the high-abundant Hb was considerably reduced, and the protein profiles of the NP-treated samples revealed a great enrichment of proteins above 20 kDa.

Protein identification by nano-LC-MS/MS

To better estimate the protein enrichment offered by the NP treatment, a deeper characterization of each NP corona was performed by nano-LC-MS/MS protein identification. Figure 2A sums up the results obtained for each duplicated sample and gives an insight into the harvesting capacity of each condition. The different protein lists are detailed in the Supplementary materials.
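(A small aside on the label-free quantification described above: the PAI computation and normalization can be sketched as follows. The XIC areas and sample labels are invented for illustration; the real pipeline is implemented in MFPaQ.)

import numpy as np

# Hypothetical XIC areas (arbitrary units) of the three most intense
# reference peptides of one protein in six samples (3 NP types x 2 replicates).
xic_areas = {
    "OS-1": [5.2e6, 3.1e6, 2.4e6], "OS-2": [5.0e6, 3.3e6, 2.2e6],
    "PAA-1": [9.8e6, 7.5e6, 6.1e6], "PAA-2": [9.1e6, 7.9e6, 5.8e6],
    "Q-1": [1.2e6, 0.9e6, 0.7e6], "Q-2": [1.1e6, 1.0e6, 0.6e6],
}

# PAI = mean XIC area over the (up to three) reference peptides.
pai = {sample: float(np.mean(areas)) for sample, areas in xic_areas.items()}

# Normalize so the highest PAI becomes 1 and the others fall in (0, 1].
top = max(pai.values())
normalized = {sample: value / top for sample, value in pai.items()}
print(normalized)  # these profiles would then go to hierarchical clustering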
The two replicates of the crude RBC lysate revealed 108 different proteins, whereas the NP-eluted coronae (also in duplicate) highlighted significantly more species (P=0.009, Mann-Whitney test), from 525 to 576 different species according to the type of NPs (Figure 2A). Altogether, 909 different proteins were identified from our RBC samples, of which 893 were present in the list of NP-treated samples (Figure 2B). Among these 893 proteins, 801 were exclusively identified from the NP-eluted coronae, and 92 were shared with the crude RBC lysate. A focus on the three NP samples revealed a relatively similar number of identified proteins, varying from 525 for OS- and Q-NPs to 576 for PAA-NPs, the three kinds of NPs sharing only 193 proteins (Figure 2C).

Effect of NP treatment on nano-LC-MS/MS analytical coverage

The nano-LC-MS/MS analysis showed that the NP treatment of the sample greatly enhanced the number of identified proteins; this must result from a better MS/MS sampling of the peptides (Table 2 and Supplementary materials). Interestingly, according to the sample treatment, the 20 most abundant proteins were not strictly the same. In fact, only the Hb-related subunits and spectrin chains, the two major constituents of RBCs, were shared by all conditions. Nevertheless, the rank, and thus the relative abundance, of these proteins differed. For example, Hb subunit beta and spectrin alpha chain, which were respectively the first and the 20th most abundant proteins in the nontreated sample, occupied the 5th and the 1st rank, respectively, in the OS-NP sample. Carbonic anhydrase 1, which appeared just after the Hb-related subunits as the 12th most abundant protein in the nontreated RBC sample, occupied only the 70th rank in the OS-NP sample. These results demonstrate that NP treatment modified the relative protein concentrations in the RBC sample. This modification of the protein abundance rank also differed according to the kind of NPs, suggesting a differential affinity of the NP surfaces for RBC proteins. Moreover, we also observed that the number of MS/MS queries assigned to these 20 most abundant proteins was much lower in the NP-treated condition than in the control. In the control, 94% of the total MS/MS queries generated during the run were assigned to these 20 proteins, of which 91% to the Hb-related subunits. In the NP-treated samples, the 20 most abundant proteins only generated 42%-45% of the total MS/MS queries, of which 20%-29% were assigned to Hb-related subunits, according to the kind of NPs. This observation revealed that NP treatment greatly decreased the proportion of abundant proteins. Finally, for each kind of NPs, we showed that 90%-95% of the 20 most abundant proteins were the same between replicates. We also observed that the 20 most abundant proteins, as well as the Hb-related subunits, generated the same proportion of MS/MS queries between replicates (Supplementary materials). These results confirmed the reproducibility of capture of each kind of NPs for these 20 most abundant proteins. To determine whether NP treatment increased the proportion of low-abundant proteins in RBC samples, the MS/MS queries, the peptide number, and the sequence coverage of ten low-abundant proteins in the nontreated sample (ie, proteins with low MS/MS queries) were determined in the OS-NP sample (Table 3). In the nontreated sample, these ten low-abundant proteins generated from one to three MS/MS queries, with a low coverage of each protein sequence (from 1.4% to 17.1%). They represented 0.1% of the total MS/MS queries generated by the sample.
In the OS-NP samples, the same proteins generated from five to 63 MS/MS queries, allowing a greater coverage of the protein sequence (from 11.9% to 79.6%) through the detection and identification of different peptides for each protein. These ten proteins represented 2% of the total MS/MS queries generated by the NP-treated sample. This enhanced detection of low-abundant proteins was also observed with Q- and PAA-NPs (Supplementary materials). These observations clearly demonstrated that NP treatment also increased the proportion of low-abundant proteins.

Qualitative and quantitative reproducibility in protein harvesting

Although NP-treated samples allowed a great increase in protein identification, providing access to low-abundant proteins, it was important to confirm the reproducibility of NPs for RBC protein harvesting. First, in a qualitative approach, we compared the MS spectra of each replicate and observed that the MS signal of 90%-94% of identified peptides was present in both replicates per type of NPs, thus confirming the qualitative reproducibility of NPs for protein harvesting (Figure 3A). Then, to assess the quantitative reproducibility of the NP strategy, we used the MFPaQ software, 40 which was designed to parse and validate protein identifications from Mascot result files and quantify the identified proteins. The quantification module of MFPaQ v4.0.0 was upgraded so that it could handle label-free quantification, as described by Mouton-Barbosa et al. 41 More information is available in the Supplementary materials. Briefly, a PAI was calculated for all identified proteins. To assess the quantitative reproducibility of each type of NPs for protein harvesting, the PAI ratio of all identified proteins was determined in duplicates (Supplementary materials). As shown in Figure 3B, the quantitative reproducibility of NPs was found to be good between the duplicates. In fact, the majority of quantified proteins on each kind of NPs revealed a PAI ratio close to the expected value of 1 (Supplementary materials).

Differential capture of proteins by the NP surface

As the protein capture by NPs was qualitatively and quantitatively reproducible, we aimed to determine whether there was an NP-dependent protein harvesting. Based on these quantification results, we normalized the PAIs obtained for each protein in each sample and submitted them to hierarchical clustering with the software R. According to the PAI values of each protein in each condition, the clustering revealed ten differentially quantified protein groups (Figure 4A). To deeply characterize this protein capture specificity, we compared the molecular weight (MW) and the isoelectric point (pI) of the proteins in the three NP-specific groups. PAA-specific proteins revealed significantly higher MWs than OS-specific proteins (Figure 4B and Supplementary materials). In fact, the median protein MW in the PAA group was 50,547 Da, whereas it was only 33,667 Da in the OS group. Regarding the pI, the three protein groups showed significant differences (Figure 4C and Supplementary materials).
The positively charged Q-NPs revealed 90% of specific proteins that were negatively charged at pH 7.4 (ie, with a pI < 7.4), whereas the negatively charged OS- and PAA-NPs harvested less than 70% of specific proteins with pI < 7.4, confirming previous reports suggesting that the NP surface charge influences the protein corona.

Functional analysis of RBC proteins

As NPs considerably increased the number of identified proteins in RBC samples, and as the different surfaces harvested different subproteomes, we were interested in determining the functional pathways that were significantly represented by NP-harvested proteins in comparison with the nontreated sample. To this aim, we submitted our data set to the Ingenuity Pathway Analysis software 42 (Ingenuity Systems, Redwood City, CA, USA) to perform the functional characterization of the RBC proteome and identify the main molecular pathways and networks. This automatic annotation tool uses a knowledge database and assigns proteins to functional classes or specific canonical pathways (CPs) related to various biological processes. According to the functional analysis, the 893 proteins identified from the three different kinds of NPs were associated with 140 different and significantly represented CPs and revealed 98 species related to hematological diseases; the 109 proteins from the crude extract only highlighted 40 different CPs and 31 proteins previously described in hematological diseases. The ten most significant CPs for the crude RBC lysate (Figure 5A) were related to the cellular origin and known functions of RBCs. Most of them involved protein and nucleic acid degradation, anaerobic glycolysis, and the response to oxidative stress. Although these ten CPs were not the most significant in the NP-treated lysates, they were more represented in these samples than in the crude extract. Figure 5A shows the ratio of proteins identified in these ten CPs for the crude and NP-treated RBC extracts. This ratio corresponds to the number of proteins from our data set that map to the pathway, divided by the total number of proteins referenced in this pathway by the software. We noticed that in all CPs these ratios were higher for the NP-treated samples, indicating that NP treatment increased the identification of proteins related to these CPs. On the other hand, when we considered the NPs as the reference sample (Figure 5B), the ten most significant CPs were related to the cellular assembly and organization of the cytoskeleton, which plays an important role in RBCs. These pathways, which are related to important RBC functions, were almost absent from the set of proteins identified in the crude RBC sample, illustrating the impact of the enrichment following protein harvesting using NP technology.

Discussion

A previous study 15 performed both qualitative and quantitative proteomics to demonstrate that the NP size significantly determined the protein corona. Interestingly, the authors also demonstrated that the relative proportion of plasma proteins on the NP surface was modified compared to their proportion in control plasma. These results suggest that different kinds of NPs may harvest different subproteomes and modify the range of protein concentrations in complex samples. As these two properties may be helpful in proteomics approaches to increase the proteome coverage of a biological sample, we assessed for the first time the ability of chemically modified NPs to increase protein identification in RBC samples. The challenge with RBC samples is to decrease the concentration of Hb, which represents 98% of the total protein concentration.
NPs provide great advantages for such ex vivo experiments; their functionalized surfaces, their nanometer size, and their great surface-to-volume ratio endow them with unique physicochemical properties, making them very attractive for protein harvesting. In this study, we used three kinds of 100 nm superparamagnetic NPs that strongly differ in their surface chemistry. The magnetic properties of these NPs considerably facilitated their handling during the washing and elution steps through the use of an external magnet. We selected a strong anion exchanger (Q-NP), a strong cation exchanger (PAA-NP), and a hydrophobic (OS-NP) surface to increase the probability of harvesting different subsets of proteins. After the incubation of NPs with the RBC protein extract, the nano-LC-MS/MS analysis of each eluted corona enabled the identification of more than 500 proteins, increasing by 400% the number of identified species compared to a control extract of RBCs (108 proteins). Synergic effects occurring at the NP surface can explain this improvement in protein identification. First, the proportion of high-abundant proteins is decreased on the NP surface, as also observed by Tenzer et al. 35 Moreover, we observed that low-abundant proteins identified in the nontreated sample were present in higher abundance on the NPs: the higher affinity of several low-abundant proteins for the different kinds of chemical functionalization allowed their accumulation on the NP surfaces. Thus, through the respective decrease of high-abundant proteins and increase of low-abundant proteins, NP treatment greatly reduces the range of concentrations in the sample, making more peptides detectable by the MS instrument. Moreover, because RBC proteins have different affinities for the three kinds of NPs, distinct subproteomes were harvested on each surface, and we finally identified a total of 893 different proteins from these three types of NPs. Among these 893 identified proteins, label-free quantification allowed us to determine the NP-specific protein signatures. Focusing on the MW and the pI, we found significant differences between the three NP-specific protein groups. While the positively charged Q-NPs revealed 90% of specific proteins with pI < 7.4 (ie, negatively charged at pH 7.4), the negatively charged OS- and PAA-NPs bound less than 35% of specific proteins with pI < 7.4. We also observed that PAA-NPs harvested specific proteins with higher MWs than the ones harvested by OS-NPs. According to these results, we confirmed that properties such as the surface charge or chemical functionalization impact the NP-protein interaction. Although the complexity of NP-protein interactions is far from being understood, the relevance of an NP-associated sample preparation of RBC samples for clinical proteomics appears obvious. During its 4-month life span, the RBC travels along the entire bloodstream to deliver oxygen to most organs. As a major circulating compartment, it is considered a crucial micro-organite able to contain biomarkers associated with both hematological and non-hematological disorders. 33 With the aim of detecting protein modifications in a pathological context, access to the RBC proteome is a major issue for identifying potential therapeutic targets. Although technical advances in RBC sample preparation and fractionation have greatly contributed to a better proteome coverage of RBC samples, they remain both time- and sample-consuming and thus incompatible with large-scale clinical studies.
We demonstrated that our NP treatment is a very reproducible strategy that specifically allowed access to the RBC minor proteome. Thus, signaling pathways highlighted in the nontreated sample and implicated in important RBC functions, such as oxygen transport, glycolysis, and protection against oxidative stress, were better represented after NP treatment, offering the possibility to better understand the mechanisms responsible for dysfunctions of these key functional pathways. Moreover, NP treatment specifically highlighted RhoA, Rac, and Cdc42 signaling, which have essential functions in the regulation of the morphology and deformability of the erythrocyte cytoskeleton. 43,44 The maintenance of normal deformability is crucial to permit RBCs to enter narrow capillaries, and dysfunctions of these pathways have been associated with hemolytic anemia. Access to such mandatory pathways for erythrocyte functions would greatly improve investigations of erythrocyte disorders. This access to the minor RBC proteome could also provide precious information for non-hematological disorders. For instance, the specific identification by NP treatment of alpha-synuclein and DJ-1, 45 two major proteins associated with Parkinson's disease, 46,47 may open the way to new proteomics analyses of RBCs in this disease. 48,49 Finally, one great advantage of our approach, in comparison with immuno-depletion 36 or ProteoMiner, 37 concerns its efficiency from a low volume of sample (200 µL of RBCs containing 1 mg of protein). In their studies, Ringrose et al 36 and Roux-Dalvai et al 37 both performed on-gel fractionation after immuno-depletion or ProteoMiner, and identified 700 and 1,578 different proteins from 200 mg and 5.7 g of starting material, respectively. We would certainly identify more proteins if we combined such fractionation with NP treatment. Although obtaining a high quantity of RBC proteins is not a limiting factor, as 50 mL of total blood can easily be obtained, the possibility to work on low volumes is more compatible with clinical automated facilities. Moreover, the magnetic feature of the NPs allowed us to easily automate the entire experimental protocol. Villanueva et al 50 have already described this kind of platform and demonstrated the reproducibility of NPs for protein harvesting, suggesting the ability to simultaneously process hundreds of samples in reproducible and clinically adapted conditions.

Conclusion

For the first time, we took advantage of the complexity of NP-protein interactions to increase the proteome coverage of RBC lysates, via the capture of complementary subproteomes on different kinds of NPs. Through a qualitative and quantitative proteomics approach, we confirmed the impact of the NP surface functionalization and charge on protein capture, demonstrating both the reduction of the sample concentration range on the NPs and the specificity of capture of each surface. These synergic effects opened a window on the RBC minor proteome, which is a major issue for identifying biomarkers associated with hematological and/or non-hematological disorders.
The Acid Roles of PtSn@Al2O3 in the Synthesis and Performance of Propane Dehydrogenation

In this study, a PtSn/Al2O3 catalyst with a uniform bimetallic distribution in the sphere was synthesized. The PDH performance and characterization analyses, such as FTIR, XPS, and NH3-TPD, were investigated, and the effects of acidity on the PDH performance were analyzed. Citric acid (CA) acted as a competing adsorbent in the preparation process of the PtSn/Al2O3 catalyst to synthesize the uniform catalyst. Water-washed and alkali-treated samples were also studied. SEM line scanning revealed that citric acid increased the apparent concentration of Pt metal from 0.23 to 0.30. In contrast to the fresh PtSn/Al2O3 catalyst, the addition of citric acid increased the PDH selectivity from 74% to 93%. After alkali or water washing treatments, the catalyst's selectivity further increased to 96%. Strong acid sites promoted the breaking of C-C bonds during the PDH reaction, resulting in more methane and ethylene byproducts and decreased catalyst selectivity for the fresh PtSn/Al2O3. From the PDH reaction thermodynamic analysis, a relatively sub-atmospheric pressure environment with a lower propane pressure would be the reasonable choice.

Introduction

Propylene, as one of the fundamental raw materials among the three major synthetic materials, is used to produce industrial products such as propionaldehyde, polypropylene, acetone, acrylonitrile, and epoxy propane [1,2]. Recently, with the development of economies, the demand for propylene has steadily increased. However, as fossil energy resources are rapidly depleting, propylene production through traditional processes is unsustainable. Therefore, some emerging propylene production processes have gradually evolved [3][4][5][6][7]. Propane dehydrogenation (PDH) technology, with simpler reactants, limited by-products, and lower investment costs, is one of the most promising methods for olefin production. In the PDH industry, platinum-based catalysts are widely employed. However, the low selectivity for propylene reduces the propylene yield [8]. According to previous studies, the structural characteristics of carrier materials, such as the specific surface area, acid-base properties, and thermal stability, can affect the catalytic performance of Pt-based catalysts in PDH processes [9][10][11][12]. Therefore, many researchers have attempted to regulate the acidity and alkalinity of catalysts to improve their performance and increase the propane conversion and propylene selectivity. Ponomaryov et al. [13] introduced NaCl into MFI zeolite using impregnation, which significantly improved the distribution of Pt, effectively suppressed the acidity of the zeolite, and prevented the sintering of metallic Pt during the calcination process. Jang et al.
[14] controlled the acidity and alkalinity of a γ-Al2O3 support by varying the calcination temperature of the carrier, revealing the relationship between Lewis acid sites on the carrier surface and coking behavior. The study showed that the deactivation rate of the catalyst significantly decreased and the stability greatly improved during the PDH reaction after high-temperature calcination. Because high-temperature calcination reduces the total amount of acid sites on the γ-Al2O3 support, less coke is produced during the reaction. On the other hand, scholars have studied the role of acid sites in PDH reactions [15][16][17], and there is still no systematic explanation of which acid sites play a key role in improving the selectivity of catalysts. This important issue needs to be addressed to provide strong theoretical guidance for the development of efficient catalysts.

To elucidate the role of acidity in catalyst synthesis and performance, we employed the impregnation method to prepare PtSn bimetallic catalysts with γ-Al2O3 as the support. The impregnation depth was controlled using citric acid as a competing adsorbent during the impregnation process. The PtSn/Al2O3 catalyst was also subjected to alkaline treatment or water washing. Basic characterizations of the catalyst, including the morphology and chemical state of the active metals, were conducted using SEM, XRD, BET, XPS, and other methods. The Pt species in the catalyst samples were investigated with CO-DRIFT, and the acid sites of the catalyst were characterized using NH3-TPD. This exploration aimed to further study the influence of the support's acidity and basicity on the catalytic performance of the catalyst. The results indicated that after adding citric acid for regulation, the strong acid sites were significantly reduced, and the by-products, such as methane and ethylene, caused by C-C bond cleavage were reduced. The isolated Pt sites were increased, and the selectivity of the catalyst was greatly improved.
Dispersed PtSn Catalyst

Figure 1 presents the XRD spectra of the γ-Al2O3 support and the PtSn/Al2O3 catalyst, with diffraction peaks at 2θ = 46.3° and 66.8° attributed to γ-Al2O3 [18]. No diffraction peaks corresponding to metallic Pt and Sn were observed, indicating well-dispersed Pt and Sn on the support surface. Figures S1 and S2 show SEM images of the γ-Al2O3 support and the PtSn/Al2O3 catalyst. The PtSn/Al2O3 catalyst and the γ-Al2O3 support demonstrated a typical type IV isotherm (Figure S4). The isotherm suggested interconnected and irregularly arranged pores, categorizable as mesoporous structures with an H2-type adsorption hysteresis. Additionally, from the pore size distributions of the PtSn bimetallic catalyst and the support, the pore size after metal loading remains basically unchanged regardless of the pretreatment (Table S1).

Citric Acid Regulates Immersion Depth

Citric acid was used as a competitive adsorbent to regulate the immersion depth of the active metal Pt. The SEM line-scan spectrum showed the radial concentration changes of the Pt element in the PtSn/Al2O3 spherical catalyst (Figure 2). It indicates that after adding citric acid, the radial distribution of the Pt element in the catalyst is relatively uniform. By contrast, without citric acid, the Pt element was enriched on the surface of PtSn/Al2O3 compared to its interior (Figure S3). The quantitative analysis in Table S2 showed that the apparent concentration and atomic percentage of Pt increased when citric acid was added. However, as the citric acid content increased further, the apparent concentration and atomic percentage of Pt began to decrease. In a previous study, hydrochloric acid was used to regulate the immersion depth of Pt and Re on γ-Al2O3 nanosheets [19]. In an acidic solution, the surface of alumina is hydrated and dissolved as Al(OH)2+ ions, and aluminum cations and chloroplatinate anions are then re-deposited on the alumina. After adding HCl, the dispersion of the active metal increased threefold [20]. From the EDS images (Figure 2b), the active metals Pt and Sn were evenly dispersed across the cross section of the carrier. This result suggested that citric acid, as a competitive adsorbent, could effectively change the dispersion state of Pt inside the carrier. Moreover, under the same pre-impregnation concentration and time, the adsorption rate of citric acid was faster.
To investigate the chemical states of the active metals in the catalyst, XPS tests were conducted (Figure 3 and Table 1). The PtSn/Al2O3 catalyst exhibited Pt 4d peaks at 332.0 eV and 309.5 eV, corresponding to Pt 4d3/2 and Pt 4d5/2 of metallic Pt, respectively [21][22][23]. Peaks at 334.2 eV and 316.9 eV as well as 336.4 eV and 320.1 eV corresponded to Pt2+ and Pt4+ [21][22][23], respectively. For the PtSn/Al2O3-CA catalysts, the binding energies of Pt0 and Pt2+ shifted to lower energy levels with the addition of CA, indicating more electron-rich Pt. The Pt4+ binding energy in the PtSn/Al2O3-1.2CA catalyst was the highest, indicating that Pt was in a high oxidation state [24]. However, no signal peak for Sn was detected in the XPS spectra of the calcined PtSn/Al2O3 catalyst. Therefore, Sn was loaded directly on the γ-Al2O3
support and detected with the XPS spectrum of the as-loaded Sn/Al2O3 catalyst. As shown in Figure 3, fitting the Sn 3d XPS spectrum of Sn/Al2O3 into three types of peaks, belonging to Sn0, Sn2+, and Sn4+ species, confirmed the presence of metallic tin [25]. For the oxidized Sn species (Sn2+ and Sn4+), the peaks for Sn 3d5/2 were located at 485.8 eV and 484.9 eV, while Sn 3d3/2 was at 494.2 eV and 493.2 eV [25][26][27]. Therefore, the active Sn species may enter the bulk phase from the catalyst's surface.

The Pt species on the catalyst samples were accurately characterized by CO-DRIFT. As shown in Figure 4a, the weak band at 2088 cm−1 was attributed to linearly bound CO located on individual Ptδ+ atoms. The peak demonstrated a red shift with the addition of CA until it became invisible [28,29]. The band at 2035 cm−1 represents Pt0 atoms, ie, CO linearly adsorbed on isolated Pt nanoparticles [30,31]. This band intensity was weak for the PtSn/Al2O3-0.4CA catalyst but more pronounced for PtSn/Al2O3-0.8CA (blue shift to 2076 cm−1) and PtSn/Al2O3-1.2CA (blue shift to 2067 cm−1) with increasing citric acid concentration. In general, with an increasing positive charge of a metal ion, the contribution of π bonding to the interaction of CO with the cation decreases, whereas the electrostatic interaction and σ donation increase [32]. The intensity of the band range (2067-2076 cm−1) in Figure 4 gradually increased, indicating that the intensity of isolated Pt sites (Pt1) increased. This is attributed to the competitive adsorption effect of citric acid, which makes the Pt dispersion uniform, with platinum atoms replacing platinum clusters. Previous studies have shown that the Pt1 site promotes the selectivity of the catalyst [33]. There are, however, other opinions on the impact of the Pt1 site on selectivity. DFT calculations indicate that the activation energy of the C-H bond and the dehydrogenation energy barrier of β-H at the Pt1 site are smaller during propane dehydrogenation, which is beneficial for the generation of the target product C3H6 [34]. However, for the third dehydrogenation, the Pt1 site still gives C3H6 a small dehydrogenation energy barrier, which leads to a small amount of C3H6 deeply cracking into CH4 and C2H6. Since the literature only provides theoretical calculations, the selectivity of the Pt1 site in specific experiments is not clear, so further research is needed. It should be pointed out that in catalysts with a high Sn/Pt ratio, this type of adsorption was limited. Therefore, doping Pt with an appropriate amount of Sn can dilute the Pt atoms in the particles, causing continuous Pt clusters to segregate into isolated Pt atoms.
Influence of Acid Impregnation on Catalyst Performance

By altering the acidity of the catalyst, the effects of acidity on the catalytic performance in PDH were investigated. Figure 5 illustrates the catalytic performance of PtSn/Al2O3 catalysts with different acidity levels in the PDH reaction. The catalytic performance of the PtSn/Al2O3 catalyst decreased significantly in the first 20 min of the PDH reaction; after about 4 h of reaction, the final conversion was 18.7%. At the same reaction temperature and WHSV, the efficiency of a PtSn/Al2O3 catalyst was about 10% after 4 h of reaction [8]. In contrast, the PtSn/Al2O3 catalyst treated with citric acid competitive impregnation exhibited a lower propane conversion compared to the untreated PtSn/Al2O3 catalyst. However, the deactivation constants of the PtSn/Al2O3-CA series catalysts were lower (Table 2), indicating better stability. This may be due to the increased immersion depth of the active metal Pt in the catalysts prepared using citric acid impregnation. For the commercial PDH catalyst Pt/Al2O3, the active metal is loaded on the surface of the support; during the PDH fluidized bed reaction, commercial catalysts inevitably wear out and deactivate. In this study, Pt and Sn were distributed both on the surface and in the deeper layers of the catalyst. This design avoided catalyst deactivation caused by surface Pt metal loss, maintaining a stable conversion. In addition, compared with the untreated PtSn catalyst, the selectivity of the catalysts impregnated with citric acid was greatly improved, precisely due to the increase in Pt1 sites during the impregnation process.
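For reference, deactivation constants such as those in Table 2 are commonly obtained in the PDH literature from a first-order deactivation model. The sketch below assumes that convention, since the exact definition used here is not spelled out, and the initial conversion value is purely illustrative.

import math

def deactivation_constant(x_initial: float, x_final: float, hours: float) -> float:
    """First-order deactivation constant often used for PDH catalysts:
    kd = [ln((1 - Xf)/Xf) - ln((1 - X0)/X0)] / t, in h^-1.
    A smaller kd means a more stable catalyst."""
    return (math.log((1 - x_final) / x_final)
            - math.log((1 - x_initial) / x_initial)) / hours

# Illustrative numbers: a catalyst dropping from an assumed 25% to the
# reported final conversion of 18.7% over 4 h of reaction.
print(f"kd = {deactivation_constant(0.25, 0.187, 4.0):.3f} h^-1")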
The products of the PDH reaction process are shown in Figure 6. Obviously, the fresh PtSn/Al2O3 catalyst yielded the most abundant by-products, with methane and ethylene as the main components. This further suggested that strong acid sites promote the cleavage of C-C bonds during the reaction. In contrast, the PtSn/Al2O3-CA catalysts primarily produced propylene, with low proportions of methane and ethane as by-products. Notably, the PtSn/Al2O3-1.2CA catalyst showed no detectable ethylene in the by-products after 150 min of the PDH reaction.

To elucidate this phenomenon, the acid sites of the PtSn/Al2O3 catalyst were further analyzed. The NH3-TPD curve of the fresh PtSn/Al2O3 catalyst showed three desorption peaks with maximum temperatures of 173 °C, 286 °C, and 406 °C, respectively (Figure 7a). After impregnation with citric acid, four more desorption peaks were observed. NH3 desorption in the temperature ranges of 120-200 °C, 200-350 °C, and above 350 °C corresponds to weak, medium, and strong acid sites, respectively. To obtain semi-quantitative results for the total acidity and acid strength distribution, a Gaussian peak fitting method was used to deconvolute the NH3-TPD curves. The fitted peaks and results are shown in Table 3. The total peak areas for the PtSn/Al2O3-0.4CA, PtSn/Al2O3-0.8CA, and PtSn/Al2O3-1.2CA catalysts were 407.9, 480.4, and 515.9, respectively. From previous studies, side reactions such as cracking, isomerization, and coking are mainly due to strong acid centers [35]. With the increase in citric acid content, the number of strong acid centers significantly decreased, while the number of weak acid centers increased. Therefore, the selectivity and stability of the catalysts were improved.
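The Gaussian deconvolution of an NH3-TPD curve can be sketched as follows, with synthetic data standing in for the measured desorption signal; the peak positions follow the three maxima reported above, and everything else is illustrative.

import numpy as np
from scipy.optimize import curve_fit

def gaussians(T, *params):
    """Sum of Gaussians; params = (A1, mu1, sig1, A2, mu2, sig2, ...)."""
    y = np.zeros_like(T)
    for A, mu, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += A * np.exp(-((T - mu) ** 2) / (2 * sig ** 2))
    return y

T = np.linspace(100, 500, 400)  # temperature axis in degrees C
true_curve = gaussians(T, 1.0, 173, 30, 0.8, 286, 45, 0.5, 406, 50)
signal = true_curve + np.random.normal(0, 0.02, T.size)  # noisy "measurement"

# Initial guesses near the three observed maxima (173, 286, 406 degrees C).
p0 = [1, 173, 30, 1, 286, 40, 1, 406, 40]
popt, _ = curve_fit(gaussians, T, signal, p0=p0)

# Peak areas (A * sigma * sqrt(2*pi)) give the semi-quantitative acid amounts.
areas = [A * s * np.sqrt(2 * np.pi) for A, s in zip(popt[0::3], popt[2::3])]
print(areas)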
By using pyridine as a probe molecule, the types of acid sites were further characterized. Figure 8 shows the acid sites of the catalyst at desorption temperatures of 150 °C, 200 °C, and 300 °C, and the acid amounts were quantitatively analyzed (Table S3). The adsorption peaks around 1450 cm−1 and 1600 cm−1 are usually associated with pyridine adsorption on Lewis acid sites [36], while the peak at 1541 cm−1 is related to pyridine adsorption on Brønsted acid sites [36]. The peak at 1490 cm−1 is attributed to the interaction of pyridine with both Lewis and Brønsted acid sites [36], and the intensity of the peaks significantly diminishes at higher temperatures [37]. From Figure 8, the acid site locations were essentially the same before and after the addition of citric acid. However, the Lewis acid sites of PtSn/Al2O3-CA were enhanced after the addition of citric acid (Table S3). The stronger Lewis acid sites were prone to induce coking, which may be a reason for the decreased conversion of the PtSn/Al2O3-CA catalyst.

Based on PtSn/Al2O3-0.4CA, mild alkali neutralization and water washing treatments were further performed to investigate the catalysts' acidity (Figure S7). The PtSn/Al2O3-WS and PtSn/Al2O3-OH catalysts maintained a selectivity of over 96% after nearly 4 h of the PDH reaction; compared to PtSn/Al2O3-CA, the selectivity further increased. Similarly, PtSn/Al2O3-WS, PtSn/Al2O3-OH, and PtSn/Al2O3 exhibit similar Brønsted and Lewis acid sites (Figure S6). However, due to the further decrease in the strength of the strong Lewis acid sites, the propylene selectivity was further improved.
Based on PtSn/Al 2 O 3 -0.4CA, mild alkali neutralization and water washing treatments were further performed to investigate the catalysts' acidity (Figure S7). The PtSn/Al 2 O 3 -WS and PtSn/Al 2 O 3 -OH catalysts maintained a selectivity of over 96% after nearly 4 h of the PDH reaction. Compared to PtSn/Al 2 O 3 -CA, the selectivity further increased. Similarly, PtSn/Al 2 O 3 -WS, PtSn/Al 2 O 3 -OH, and PtSn/Al 2 O 3 exhibit similar Brønsted and Lewis acid sites (Figure S6). However, due to the further decrease in the strength of the strong Lewis acid sites, the propylene selectivity was further improved.
XPS was used to investigate the effects of the chemical state of the active metals in the catalyst on its catalytic activity. For the PtSn/Al 2 O 3 -WS and PtSn/Al 2 O 3 -OH catalysts, the binding energies of Pt 0 , Pt 2+ , and Pt 4+ all shifted to lower energy levels, indicating an electron-rich Pt state (Figure S8 and Table S5). For the PtSn/Al 2 O 3 catalyst, the metallic Pt content was 37.5%, which increased to 45.2% after alkali neutralization. The PtSn/Al 2 O 3 -WS catalyst showed a further increase in the metallic Pt content, to 58.3%, which was more favorable for the selectivity of the catalyst [38].

Reaction Thermodynamic Analysis of Catalyst

A detailed thermodynamic analysis was conducted, as shown in Figure 9. The propane dehydrogenation reaction is significantly influenced by the component pressures. The reaction was conducted at 580 °C in this study, where the thermodynamic limit of the propane conversion is around 30%. To further increase the utilization of propane, it is suggested, from the thermodynamic viewpoint, to conduct the reaction at a lower propane partial pressure, although it would be economically unfavorable if too low a propane partial pressure were applied.
The propene yield also depends on the hydrogen partial pressure, as shown in Figure 10. Hydrogen is a by-product of the propane dehydrogenation reaction, which also reversely suppresses the reaction. Theoretically, if hydrogen could be removed completely from the system, the propene yield could reach a limit of around 60% at 580 °C.
Under the working conditions of the catalyst applied in this study, the propene yield was limited to around 30%. A lower hydrogen partial pressure would favor the propene yield, but the possible deactivation due to carbon deposition should also be considered.
By combining the dependencies on the two components of the reaction, the yield of propene can be calculated, as shown in Figure 11. The propene yield favors conditions where both hydrogen and propane are at relatively low pressure. However, the contour lines have a steeper slope with respect to hydrogen, which means that a drop in hydrogen pressure contributes more than a drop in propane pressure. Hence, the optimal working condition would be a relatively sub-atmospheric pressure environment with a lower propane pressure. Hydrogen should be removed as much as possible when recycling the reaction gas to ensure a higher propane utilization ratio.
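To make the thermodynamic limits quoted above concrete, the following sketch solves the equilibrium conversion of C 3 H 8 ⇌ C 3 H 6 + H 2 for the feed used in this study (16 vol% C 3 H 8 , 20 vol% H 2 , balance N 2 , ~1 atm). The thermochemical constants are textbook estimates, not values taken from this paper:

import numpy as np
from scipy.optimize import brentq

R = 8.314        # J/(mol K)
DH = 124.7e3     # standard reaction enthalpy, J/mol (assumed T-independent)
DG298 = 87.0e3   # standard reaction Gibbs energy at 298 K, J/mol
DS = (DH - DG298) / 298.0   # implied standard reaction entropy

def K_eq(T):
    """Equilibrium constant from DG(T) ~ DH - T*DS (van't Hoff-consistent)."""
    return np.exp(-(DH - T * DS) / (R * T))

def equilibrium_conversion(T, P=1.0, y_c3h8=0.16, y_h2=0.20):
    """Solve K = (y_C3H6 * y_H2 / y_C3H8) * P for the conversion x."""
    n0, nh2 = y_c3h8, y_h2           # basis: 1 mol of feed
    def residual(x):
        ntot = 1.0 + n0 * x          # 1 mol C3H8 -> 1 mol C3H6 + 1 mol H2
        return (n0 * x) * (nh2 + n0 * x) / ((n0 * (1.0 - x)) * ntot) * P - K_eq(T)
    return brentq(residual, 1e-6, 1.0 - 1e-6)

x = equilibrium_conversion(T=580 + 273.15)
print(f"Equilibrium C3H8 conversion at 580 C: {x:.1%}")

With these inputs the solver returns roughly 29%, consistent with the ~30% thermodynamic limit cited above; lowering y_c3h8 or y_h2 in the call raises the limit, which is the pressure dependence shown in Figures 9-11.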
Discussion

A PtSn bimetallic catalyst supported on γ-Al 2 O 3 was prepared using citric acid as a competitive adsorbent, and a series of characterizations were carried out. No diffraction peaks of Pt and Sn were detected in the XRD spectrum, indicating a good dispersion of Pt and Sn on the alumina support. Citric acid was used as a competitive adsorbent to regulate the immersion depth of the active metals Pt and Sn; the apparent concentration and atomic percentage of Pt increased after adding citric acid, with the PtSn/Al 2 O 3 -0.4CA catalyst being the highest. However, as the citric acid content increased further, the apparent concentration and atomic percentage of Pt began to decrease, which may be due to excessive citric acid covering the active metal Pt. XPS shows that, for the PtSn/Al 2 O 3 -CA catalyst, the binding energies of Pt 0 and Pt 2+ shift towards lower energy levels with the addition of CA, indicating that Pt tends to be more electron-rich. The characterization of the acid sites revealed that the addition of citric acid regulated the distribution of acid sites in the catalyst, and the number of strong acid sites significantly decreased. The catalytic performance of the catalyst was further evaluated with GC-MS, and it was found that, compared with the untreated PtSn/Al 2 O 3 catalyst, the selectivity of the PtSn/Al 2 O 3 -CA catalyst was greatly improved, which is attributed to the increase in Pt 0 sites during the impregnation process. With the addition of citric acid, the number of strong acid sites in the PtSn/Al 2 O 3 -CA catalysts significantly decreases, the number of weak acid sites increases, and the side reactions caused by strong acid sites decrease, which is beneficial for improving selectivity and stability. In addition, thermodynamic and kinetic analyses show that, under appropriate conditions, lower hydrogen and propane pressures are beneficial for obtaining higher propylene yields, and a decrease in hydrogen pressure is more conducive to improving the yield than a decrease in propane pressure. In the future, in-depth research on the impact of changes in catalyst acidity and alkalinity on catalytic performance will help optimize the reaction conditions of propane-dehydrogenation-to-propylene technology and improve propylene yield and selectivity.

Synthesis of Catalysts

Chemicals used in this study included chloroplatinic acid (H 2 PtCl 6 ·6H 2 O).
Preparation of Sn/Al 2 O 3 pellets: CaCl 2 (4.0000 g) was dissolved in 400 mL water, then hydrochloric acid was added to adjust the solution pH to 2.
We mixed 20.000 g of SB powder with SnC 2 O 4 (0.3297 g), took 3.750 g of the mixed powder and stirred it with 10 mL water, and then added 0.090 g of sodium alginate to form a slurry. It was stirred for 2 h until all the material was mixed evenly. The slurry was loaded into a syringe and dripped into the calcium chloride solution to form small pellets. After 30 min of reaction, the pellets were washed with pure water 3 times and dried completely in ambient air. The pellets were calcined in a muffle furnace with the following program: heated for 30 min from room temperature to 110 °C, kept for 30 min; heated for 60 min from 110 °C to 350 °C, kept for 30 min; and heated for 120 min from 350 °C to 560 °C, kept for 4 h. The Sn/Al 2 O 3 pellets were obtained after calcination.
Synthesis of PtSn/Al 2 O 3 catalyst: The Sn/Al 2 O 3 pellets (3.0000 g) were soaked in 30 mmol/L chloroplatinic acid solution (3.3 mL) for 3 h. After impregnation, the sample was dried overnight at 110 °C and calcined for 2 h at 400 °C to obtain the PtSn/Al 2 O 3 catalyst.
Synthesis of PtSn/Al 2 O 3 -CA series catalysts: We mixed 3.3 mL chloroplatinic acid (30 mmol/L) with 0.1680 g CA to prepare an impregnation solution. The Sn/Al 2 O 3 pellets (3.0000 g) were soaked in the impregnation solution for 3 h. The sample was then dried overnight at 110 °C and calcined in a muffle furnace at 400 °C for 2 h to obtain PtSn/Al 2 O 3 -0.4CA. Following the above steps, but changing the amount of citric acid added to the impregnation solution to 0.3360 g and 0.5040 g, gave the PtSn/Al 2 O 3 -0.8CA and PtSn/Al 2 O 3 -1.2CA catalysts with citric acid concentrations of 0.8 mol/L and 1.2 mol/L, respectively.
Synthesis of PtSn/Al 2 O 3 -WS and PtSn/Al 2 O 3 -OH: The PtSn/Al 2 O 3 catalyst pellets (3.0000 g) were soaked in water and in potassium hydroxide solution (0.35 mol/L) for 60 min, respectively. The catalysts were then washed with deionized water 3 times and dried in a 50 °C oven overnight to obtain the PtSn/Al 2 O 3 -WS and PtSn/Al 2 O 3 -OH catalysts, respectively.

Characterization of Catalysts

The catalyst morphology was characterized on a field emission scanning electron microscope (SEM, JSM-7500F, Nippon Electronics, Tokyo, Japan). The instrument is equipped with an 80 mm 2 energy-dispersive X-ray spectrometer (EDS, Oxford, UK). The specific surface area and pore volume of the samples were analyzed with a nitrogen adsorption instrument (NADS, ASAP 2046M, Micromeritics, Norcross, GA, USA) through N 2 adsorption/desorption isotherms at 77 K. The specific surface area of the catalysts was calculated using the Brunauer-Emmett-Teller (BET) method, and the pore volume and pore size were determined using the Barrett-Joyner-Halenda (BJH) method. An X-ray diffractometer (XRD, 18 kW D/MAX2500V+/PC, Rigaku Corporation, Akishima, Japan) was used to determine the crystalline phases of the samples. The electronic states of the metals and the surface composition of the catalyst were determined through X-ray photoelectron spectroscopy (XPS, ESCALAB250, Thermo Fisher Scientific, Waltham, MA, USA).
The infrared spectrum with CO as the probe was measured using a Fourier transform infrared spectrometer (FTIR, Thermo Fisher Scientific). About 50 mg of sample was reduced at 540 °C in pure H 2 (20 mL/min) for 60 min and then cooled in pure N 2 (20 mL/min) to 30 °C to collect background spectra. CO (20 mL/min) was injected for 30 min, and the cell was then purged with N 2 (20 mL/min) for 30 min during the experiments.
The acid sites on the catalyst surface were determined with the temperature-programmed desorption method (NH 3 -TPD, AutoChem II 2920, Micromeritics). Approximately 50 mg of the sample was pretreated at 300 °C in He (30 mL/min) for 60 min to remove moisture, physically adsorbed water, and other impurities. After pretreatment, 5% NH 3 (30 mL/min, balanced with He) was introduced at 50 °C for 60 min, followed by He gas (30 mL/min) purging at 50 °C for 60 min. The TPD data were recorded over the temperature range from 50 to 800 °C under He gas.
The type of acid was determined with pyridine as the probe molecule and with in situ diffuse reflectance infrared spectroscopy (DRIFT, Nicolet iS50, Thermo Fisher, Waltham, MA, USA). The samples were compressed into thin sheets and placed in the reaction cell. The system was evacuated to 10 −3 Pa at 300 °C and maintained for 60 min, followed by cooling to room temperature. Pyridine vapor was introduced into the system for 30 min until it reached equilibrium. The temperature was then raised to 200 °C, followed by another evacuation to 10 −3 Pa, and it was kept for 30 min before cooling to room temperature. Infrared spectra were scanned in the wavenumber range of 1400 to 1700 cm −1 , and the infrared spectrum of pyridine adsorption at 200 °C was recorded. The same procedure was repeated for desorption treatments at the other specified temperatures, with the corresponding spectra collected.
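A minimal sketch of how integrated pyridine band areas from such spectra are converted into acid amounts in µmol/g (cf. Table S3). The integrated molar extinction coefficients are the widely used Emeis (1993) values; the paper does not state which coefficients it used, so these are assumptions:

import math

IMEC_BRONSTED = 1.67   # cm/umol, ~1545 cm^-1 band (Emeis, assumed)
IMEC_LEWIS = 2.22      # cm/umol, ~1450 cm^-1 band (Emeis, assumed)

def acid_amount_umol_per_g(int_absorbance_cm1: float, disk_area_cm2: float,
                           mass_g: float, imec_cm_per_umol: float) -> float:
    """Acid-site amount from an integrated band area (Beer-Lambert form)."""
    return int_absorbance_cm1 * disk_area_cm2 / (imec_cm_per_umol * mass_g)

# Illustrative wafer: 1.3 cm diameter, 15 mg, with assumed band areas.
area = math.pi * (1.3 / 2.0) ** 2
print("Bronsted:", round(acid_amount_umol_per_g(0.8, area, 0.015, IMEC_BRONSTED), 1), "umol/g")
print("Lewis   :", round(acid_amount_umol_per_g(2.5, area, 0.015, IMEC_LEWIS), 1), "umol/g")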
Catalytic Performance Test

In a fixed-bed quartz reactor with an inner diameter of 6 mm, propane dehydrogenation over the PtSn/Al 2 O 3 catalyst was conducted under atmospheric pressure. The PDH reaction was performed in a mixture of 16 vol% C 3 H 8 , 20 vol% H 2 , and N 2 , with a propane weight hourly space velocity (WHSV) of 4.7 h −1 and a temperature of 580 °C. The reaction products were analyzed using gas chromatography with a flame ionization detector (FID). The FID was employed to measure the concentrations of all hydrocarbons, including CH 4 , C 2 H 6 , C 2 H 4 , C 3 H 6 , and C 3 H 8 (with no detection of dimerization or aromatization products). The propane conversion and propylene selectivity were calculated from the formulas below, in which CH 4 , C 2 H 6 , C 2 H 4 , C 3 H 6 , and C 3 H 8 represent the concentrations of the corresponding gas components in the outlet gas.
The stability of the PtSn/Al 2 O 3 catalyst was quantitatively determined using the deactivation rate constant, treating deactivation as a first-order process.

Reaction Equilibrium Calculation

The reaction equilibrium is calculated based on the van't Hoff equation, where K eq is the equilibrium constant at a given temperature T; Δ r H° is the standard reaction enthalpy change, which can be calculated from reference values; and R is the ideal gas constant. The standard equilibrium constant K° eq can be calculated from its relationship with the Gibbs free energy Δ r G°, which can also be calculated from reference values.
For the propane dehydrogenation reaction, the equilibrium constant can be used to calculate the theoretical reaction limit, where the notation "out" stands for the outlet concentration. By applying the law of mass conservation, the outlet concentrations can be related to the inlet or initial concentrations. With some tedious calculation, the outlet propene concentration can be calculated, where the notation "in" stands for the inlet or initial concentration.
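The equations referenced above appear to have been dropped during text extraction. A plausible reconstruction, using the carbon-balance and first-order-deactivation definitions standard in the PDH literature (an assumption; the paper's exact expressions were not recovered), is:

X_{C_3H_8}(\%) = \frac{\tfrac{1}{3}[\mathrm{CH_4}] + \tfrac{2}{3}([\mathrm{C_2H_6}] + [\mathrm{C_2H_4}]) + [\mathrm{C_3H_6}]}{\tfrac{1}{3}[\mathrm{CH_4}] + \tfrac{2}{3}([\mathrm{C_2H_6}] + [\mathrm{C_2H_4}]) + [\mathrm{C_3H_6}] + [\mathrm{C_3H_8}]} \times 100

S_{C_3H_6}(\%) = \frac{[\mathrm{C_3H_6}]}{\tfrac{1}{3}[\mathrm{CH_4}] + \tfrac{2}{3}([\mathrm{C_2H_6}] + [\mathrm{C_2H_4}]) + [\mathrm{C_3H_6}]} \times 100

k_d = \frac{\ln\big((1 - X_{\mathrm{end}})/X_{\mathrm{end}}\big) - \ln\big((1 - X_0)/X_0\big)}{t}

with X the propane conversion and t the time on stream, and, for the equilibrium section,

\frac{d \ln K^{\circ}_{eq}}{dT} = \frac{\Delta_r H^{\circ}}{R T^2}, \qquad \Delta_r G^{\circ} = -RT \ln K^{\circ}_{eq}, \qquad K^{\circ}_{eq} = \frac{p_{C_3H_6,\mathrm{out}} \, p_{H_2,\mathrm{out}}}{p_{C_3H_8,\mathrm{out}} \, p^{\circ}}.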
Conclusions

In this study, we used citric acid as a competing adsorbent to prepare a PtSn bimetallic catalyst supported on γ-Al 2 O 3 and conducted a series of characterizations to study the effects of acidity on the PDH reaction. The conclusions are as follows: (1) Characterization analyses, such as EDS, XRD, and N 2 adsorption-desorption, showed that the Pt and Sn active metals are well dispersed. SEM scanning showed that the addition of an appropriate amount of citric acid increased the apparent concentration and immersion depth of the active metal Pt. Citric acid, as a competitive adsorbent, increased the number of Pt1 sites during the impregnation process. (2) Owing to the effects of the acid sites, the addition of citric acid led to a slight decrease in PDH conversion compared to the fresh PtSn/Al 2 O 3 catalyst. However, the selectivity of the catalyst increased significantly, from 74% to 93%. After neutralization with alkali and washing treatment, the selectivity further improved to 96%. (3) The fresh PtSn/Al 2 O 3 catalyst possesses the strongest acid sites. During the PDH reaction, the strong acid sites promote the cleavage of C-C bonds, leading to the generation of more by-products, such as methane and ethylene, which in turn reduces the selectivity of the catalyst. (4) From the thermodynamic analysis of the PDH reaction, a relatively sub-atmospheric pressure environment with a lower propane pressure would be a reasonable choice. When recovering the reaction gases, hydrogen should be removed as much as possible to ensure a higher propane utilization efficiency.

Figure 1 presents the XRD spectra of the γ-Al 2 O 3 support and the PtSn/Al 2 O 3 catalyst, with diffraction peaks at 2θ = 46.3° and 66.8° attributed to γ-Al 2 O 3 [18]. No diffraction peaks corresponding to metallic Pt and Sn were observed, indicating well-dispersed Pt and Sn on the support surface. Figures S1 and S2 show SEM images of the γ-Al 2 O 3 support and PtSn/Al 2 O 3 catalyst. The PtSn/Al 2 O 3 catalyst and γ-Al 2 O 3 support demonstrated a typical type IV isotherm (Figure S4). The isotherm suggests interconnected and irregularly arranged pores, categorizable as mesoporous structures with a type H2 adsorption hysteresis loop. Additionally, from the pore size distributions of the PtSn bimetallic catalyst and the support, the pore size after metal loading remains basically unchanged regardless of the pretreatment (Table S1).

Figure 2. The impregnation of Pt before and after adding citric acid: (a) SEM line scanning of PtSn/Al 2 O 3 -1.2CA and (b) surface scanning from EDS.

The catalytic performance of the PtSn/Al 2 O 3 catalysts with different acidity levels in the PDH reaction was also evaluated. The catalytic performance of the PtSn/Al 2 O 3 catalyst significantly decreased in the first 20 min of the PDH reaction. After about 4 h of reaction, the final conversion was 18.7%. At the same reaction temperature and WHSV, the PtSn/Al 2 O 3 catalyst efficiency was about 10% after 4 h of the reaction [8]. In contrast, the PtSn/Al 2 O 3 catalyst treated with citric acid competitive impregnation exhibited a lower propane conversion than the untreated PtSn/Al 2 O 3 catalyst. However, the deactivation constant of the PtSn/Al 2 O 3 -CA series catalysts was lower (Table 2).
Figure 8. Pyridine infrared spectroscopy of PtSn/Al 2 O 3 series catalysts.
Figure 9. The pressure dependence of propane as a function of temperature.
Figure 10. The pressure dependence of hydrogen as a function of temperature.
Figure 11. Pressure dependence on the combined effects of hydrogen and propane.
Table 1. Peaks of the Pt and Sn species of the PtSn/Al 2 O 3 catalysts.
Table 2. The catalytic performance of the PtSn/Al 2 O 3 series catalysts for the PDH reaction. a Deactivation rate constant.
Table S1: BET specific surface area, pore volume, and average pore size of the PtSn bimetallic catalysts and supports; Table S2: SEM line-scan elemental analysis of the PtSn/Al 2 O 3 catalyst; Table S3: Pyridine infrared quantitative results (unit: µmol/g); Table S4: The catalytic performance of the PtSn/Al 2 O 3 catalyst for the propane dehydrogenation reaction; Table S5: The peak centers of the Pt species of the PtSn/Al 2 O 3 catalysts.
2024-06-23T15:22:54.948Z
2024-06-21T00:00:00.000
{ "year": 2024, "sha1": "612f3ea24ed2f78c75f3822baf04e49f3ff50baf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/29/13/2959/pdf?version=1718974273", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "85f3e8180cd7625382ae1c2ff77188633997d3a8", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [] }
9299907
pes2o/s2orc
v3-fos-license
PTEN M-CBR3, a versatile and selective regulator of inositol 1,3,4,5,6-pentakisphosphate (Ins(1,3,4,5,6)P5). Evidence for Ins(1,3,4,5,6)P5 as a proliferative signal.

The PTEN (phosphatase and tensin homologue deleted on chromosome 10) tumor suppressor is a phosphatidylinositol 3,4,5-trisphosphate (PtdInsP3) 3-phosphatase that plays a crucial role in regulating many cellular processes by antagonizing the phosphoinositide 3-kinase signaling pathway. Although able to metabolize soluble inositol phosphates in vitro, the question of their significance as physiological substrates is unresolved. We show that inositol phosphates are not regulated by wild type PTEN, but that a synthetic mutant, PTEN M-CBR3, previously thought to be inactive toward inositides, can selectively regulate inositol 1,3,4,5,6-pentakisphosphate (Ins(1,3,4,5,6)P5). Transfection of U87-MG cells with PTEN M-CBR3 lowered Ins(1,3,4,5,6)P5 levels by 60% without detectable effect on PtdInsP3. Although PTEN M-CBR3 is a 3-phosphatase, levels of myo-inositol 1,4,5,6-tetrakisphosphate were not increased, whereas myo-inositol 1,3,4,6-tetrakisphosphate levels increased by 80%. We have used PTEN M-CBR3 to study the physiological function of Ins(1,3,4,5,6)P5 and have found that Ins(1,3,4,5,6)P5 does not modulate PKB phosphorylation, nor does it regulate clathrin-mediated epidermal growth factor receptor internalization. By contrast, PTEN M-CBR3 expression, and the subsequent lowering of Ins(1,3,4,5,6)P5, are associated with reduced anchorage-independent colony formation and anchorage-dependent proliferation in U87-MG cells. Our results, together with previously published data, suggest that Ins(1,3,4,5,6)P5 has a role in proliferation. PTEN (phosphatase and tensin homologue deleted on chromosome 10) is a dual specificity phosphatase that is mutated in a wide range of human sporadic tumor types (1). The PTEN gene encodes a 403-amino acid protein, which is a member of the protein-tyrosine phosphatase family. However, there have been no good phosphoprotein substrates identified to date. The tumor suppressor function of PTEN relies on its ability to metabolize acidic nonprotein substrates (2,3). Indeed, PTEN dephosphorylates the signaling molecules phosphatidylinositol 3-phosphate (PtdIns(3)P), phosphatidylinositol 3,4-bisphosphate (PtdIns(3,4)P 2 ), phosphatidylinositol 3,5-bisphosphate (PtdIns(3,5)P 2 ), and phosphatidylinositol 3,4,5-trisphosphate (PtdInsP 3 ) in vitro by removal of the phosphate at the 3-position of these substrates. The preferred substrate was found to be PtdInsP 3 by a factor of some 200-fold (4). In this respect, PTEN acts as a functional antagonist of phosphoinositide 3-kinase signaling pathways, promoting apoptosis and inhibiting cell-cycle progression (5-7). Evidence for this includes a naturally occurring mutation (PTEN G129E), identified in sufferers of Cowden disease, in which patients encounter multiple hamartomatous lesions, especially of the skin, mucous membranes, breast, and thyroid. The PTEN G129E mutant has comparable activity to PTEN against the synthetic phosphoprotein substrate, polyGluTyr P , but is unable to dephosphorylate inositol lipids, indicating that the lipid phosphatase activity, and not the protein phosphatase activity, is required for tumor suppressor function (3).
PTEN has several structural features, including an N-terminal phosphatase domain requiring a reduced cysteine (Cys 124 ), a calcium-independent C2 domain, that has been shown to bind lipid vesicles in vitro, and a sequence shown to bind PDZ domains (see Ref. 1). In cells, PTEN exists as a phosphoprotein, with phosphorylation occurring at a region to the C terminus of the C2 domain (8,9). It has recently been shown that the C2 domain, and not the PDZ-binding sequence, plays a crucial role in membrane targeting and substrate specificity (10 -12). Functional interference with this C2 domain, exemplified by the artificially modified PTEN M-CBR3 protein first described by Lee et al. (10), causes a reduction in the ability to interact with lipid membranes but marginally increases phosphatase activity toward inositol phosphate substrates (4,10). Expression of GFP-tagged PTEN suggests that it is predominantly cytoplasmic, in agreement with most studies utilizing PTEN-selective antibodies (11,13,14). Its role as a lipid phosphatase, however, requires interaction with membranes, such that PTEN M-CBR3 is unable to regulate PtdInsP 3 levels, whereas myristoylated PTEN, which is anchored to the membrane, is more effective than PTEN in altering effects downstream of PtdInsP 3 (12,15). It has recently been suggested that the cellular substrates of PTEN may include inositol phosphates, particularly Ins(1,3,4,5,6)P 5 (16). Therefore, the effects of PTEN could be mediated by regulation of these inositol phosphates. In this study we have clarified the effects of PTEN expression on inositol lipid and inositol phosphate levels in cells. PTEN expression lowered PtdInsP 3 and PtdIns(3,4)P 2 . We found no suggestion that PTEN could be a physiological regulator of inositol phosphates. The PTEN M-CBR3 mutant, however, selectively affected inositol phosphate levels, especially Ins(1,3,4,5,6)P 5 , without altering the levels of 3-phosphoinositide lipids. With this new insight into the effects of PTEN M-CBR3 we have re-evaluated previously published data in determining the roles played by Ins(1,3,4,5,6)P 5 and suggest that Ins(1,3,4,5,6)P 5 is a proliferative agent, because lowering its levels correlated with decreased cell growth and anchorageindependent colony formation. EXPERIMENTAL PROCEDURES Cell Culture-Tissue culture media and additives were provided by Invitrogen. U87-MG cells, obtained from the European Collection of Animal Cell Cultures, were maintained in minimal essential medium, plus 2 mM glutamine, 1% nonessential amino acids, 1 mM sodium pyruvate, and 10% fetal bovine serum. Expression vectors were introduced into the U87-MG cells using a previously described baculoviral delivery system adapted for mammalian expression (12). Assays were performed 24 h following DNA delivery for all experiments except FACS analysis, where cells were analyzed after 36 or 60 h. Levels of expressed protein were determined using fluorescence, as well as Western blotting of extracts from U87-MG cells expressing GFP-tagged protein. Transfection of U87-MG cells for proliferation assays was performed as described below. Vectors were prepared as previously described, except PTEN M-CBR3/G129E, which was prepared by cleaving PTEN G129E and PTEN M-CBR3 with PpuMI and BamHI in FastBacMam-EGFP, and replacing the wild type C-terminal region of the protein (from PTEN G129E) with that containing the M-CBR3 mutation. The sequence was verified, and virus was prepared as above. 
Internalization of 125 I-EGF-We performed 125 I-human EGF (Amersham Biosciences) internalization assays as described by Sorkina et al. (19). Cells were grown in 24-well dishes. Unless otherwise stated, 125 I-EGF was added to cells in minimum essential medium with Earle's salts containing 0.1% bovine serum albumin at 37°C for up to 10 min. At the end of the incubation, the medium was aspirated and the monolayers were washed three times with ice-cold minimum essential medium with Earle's salts to remove unbound ligand. The cells were then incubated for 5 min with 0.2 M acetic acid (pH 2.8) containing 0.5 M NaCl at 4°C. The acid wash was combined with another short rinse in the same buffer and used to determine the amount of surface-bound 125 I-EGF. The cells were then lysed in 1 M NaOH to determine the intracellular (internalized) radioactivity. The ratio of internalized to surface radioactivity was plotted against time. Nonspecific binding was determined in the presence of 200 ng/ml unlabeled EGF. Flow Cytometric Analysis of Cell Cycle Distribution-Adherent cells were harvested by trypsinization, washed once in PBS, and re-suspended in ice-cold 70% (v/v) ethanol in water. Cells were washed twice in PBS plus 1% (w/v) bovine serum albumin and stained for 20 min in PBS plus 0.1% (v/v) Triton X-100 containing 50 µg/ml propidium iodide and 50 µg/ml RNase A. The DNA content of cells was determined using a FACSCalibur flow cytometer (BD Biosciences) and CellQuest software. Red fluorescence (585 ± 42 nm) was acquired on a linear scale, and pulse width analysis was used to exclude doublets. Cell cycle distribution was determined using FlowJo software (Tree Star Inc.). Proliferation Assays-Anchorage-independent colony assays were adapted from those described previously (20). Briefly, U87-MG cells were transiently transfected (FuGENE 6, Roche Applied Science) with pcDNA3.1+ alone, or PTEN expression constructs. 24 h after transfection, cells were suspended in 15% serum-containing media with 0.5 mg/ml G418 and 0.3% agar and layered in triplicate onto 0.6% agar medium in 6-well plates. Plates were then incubated for 3 weeks, with the addition of 0.5 ml of fresh medium after 10 days. To test anchorage-dependent growth, a method similar to that used by Furnari et al. (21) was employed. U87-MG cells were transfected and changed to fresh medium with 1 mg/ml G418 24 h post-transfection. Five days after transfection, nontransfected controls had very little viability. Cell numbers were determined at days 5, 7, and 9 using CellTitre96 reagent (Promega) according to the manufacturer's instructions.

RESULTS

As previously described, expression of PTEN in U87-MG cells caused a decrease in PtdInsP 3 levels. Greater expression of PTEN (far above the levels of endogenous expression in PTEN-positive cells) caused a significant decline in PtdIns(3,4)P 2 but not of PtdIns(3)P and PtdIns(3,5)P 2 (Fig. 1). We have attempted to express lower levels of wild type and mutant PTEN to study the cellular consequences of such protein expression. A comparison with extracts obtained from cells with normal PTEN status showed that levels of expression of GFP-PTEN were comparable with the range of PTEN expression found in a variety of cell types (Fig. 2). We were surprised to see that 1321N1 astrocytoma cells were either PTEN-null or expressed levels of protein below the detection limit of these assays, because this had not previously been reported and these cells have normally low basal levels of PtdInsP 3 and PKB phosphorylation.
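A minimal sketch of the quantitative readout of the 125 I-EGF internalization assay described above: the specific internalization rate constant is commonly taken as the slope of the internalized/surface ratio versus time. All numerical values are illustrative, not measured data:

import numpy as np

time_min = np.array([1, 2, 4, 6, 8, 10])                          # incubation at 37 C
internalized = np.array([120., 260., 530., 800., 1010., 1300.])   # cpm, NaOH lysate
surface = np.array([900., 950., 980., 1000., 980., 1010.])        # cpm, acid wash
nonspecific = 40.0   # cpm, binding with 200 ng/ml unlabeled EGF present

ratio = (internalized - nonspecific) / (surface - nonspecific)
k_e, intercept = np.polyfit(time_min, ratio, 1)   # slope ~ rate constant (1/min)
print(f"specific internalization rate constant k_e = {k_e:.3f} per min")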
Under the conditions used, levels of PtdIns(3)P, PtdIns(3,4)P 2 , and PtdIns(3,5)P 2 were not significantly decreased (Table I). Expression of similar levels of the catalytically dead mutant, PTEN C124S, and the lipid phosphatase-dead mutant, PTEN G129E (which retains phosphoprotein phosphatase activity), slightly raised PtdInsP 3 levels. It has been suggested that this is due to "substrate trapping," resulting in a stable lipid-enzyme complex, protecting the PtdInsP 3 from metabolism by other phosphatases (3). PTEN M-CBR3, PTEN G129E/M-CBR3, and EGFP did not alter inositol lipid levels (Table I). Expression of PTEN M-CBR3 in U87-MG cells was unable to alter the phosphorylation state of PKB. Expression of similar levels of wild type PTEN, which lowers PtdInsP 3 levels, caused a marked decrease in phospho-T 308 and S 473 -PKB, whereas PDGF receptor activation increased phosphorylation at both sites (Fig. 4). These results suggest that Ins(1,3,4,5,6)P 5 does not affect PKB phosphorylation directly or interfere with PtdInsP 3 -dependent phosphorylation of PKB. In this respect, PKB appears to have a higher selectivity toward inositol lipids than previously suggested (22,23). These results reiterate our recent findings on the substrate specificity of PTEN, in that short-chain inositol lipid analogues, or deacylated lipids, can yield misleading data with respect to ligand affinities, because such studies fail to replicate binding in the context of a complete biological membrane surface. Inositol phosphates, including Ins(1,3,4,5,6)P 5 , have been implicated in the inhibition of clathrin-mediated internalization, by means of preventing triskelion formation (24,25). EGFR internalization was monitored by incubating 125 I-EGF with U87-MG cells. Rates of internalization were identical whether PTEN M-CBR3 was expressed or not. Incubating U87-MG cells with 125 I-EGF at 4°C prevented any internalization (Fig. 5). Similarly, PD158780, an EGFR kinase inhibitor previously shown to block receptor phosphorylation, and hence internalization, was found to behave as expected, significantly inhibiting internalization (data not shown). Several complex cellular processes have been identified that rely to some degree on phosphoinositide 3-kinase signaling and can be inhibited by PTEN, including colony formation in soft agar (10,26) and cell spreading (12). The ability of PTEN M-CBR3 to inhibit anchorage-independent colony formation was compared with that of wild type PTEN. U87-MG cells were transfected with expression vectors for untagged PTEN proteins carrying a Neomycin/G418 resistance gene and, after 24 h, seeded into soft agar with G418 selection. Cells transfected with vector alone or phosphatase-dead PTEN formed large numbers of colonies within 3 weeks (Fig. 6a). PTEN greatly inhibited colony formation, but this was not mimicked by the PTEN C124S, PTEN G129E, or PTEN G129E/M-CBR3 mutants. PTEN M-CBR3 had a small, but significant, effect in reducing colony number (Fig. 6a), confirming its effects on anchorage-independent growth. The effects of PTEN on anchorage-independent growth and cellular proliferation appear to be mediated by its ability to cause G 1 arrest. The effects of PTEN M-CBR3 on cell-cycle distribution were also monitored. Cells treated with baculoviral expression vectors for 36 or 60 h were analyzed by FACS. Profiles of all other mutants tested showed no difference to GFP controls (Fig.
7), suggesting that Ins(1,3,4,5,6)P 5 is not required to pass through any particular part of the cell cycle. DISCUSSION The biological roles of PTEN have generally been attributed to its ability to metabolize the lipid second messenger, Pt-dInsP 3 . More recently, it has been shown that PtdIns(3,5)P 2 , PtdIns(3,4)P 2 , and PtdIns(3)P are also substrates in vitro (28,29). We show clearly that the primary substrate for PTEN is PtdInsP 3 and that the inositol lipid bisphosphates are not lowered following PTEN expression at close to physiological levels in U87-MG cells. Higher levels of expression can regulate PtdIns(3,4)P 2 but not PtdIns(3,5)P 2 . This may reflect the differences in rate of hydrolysis observed previously (4) or the relative rates of turnover of these molecules in vivo. Alternatively, this may merely reflect the product-precursor relationship existing between PtdInsP 3 and PtdIns(3,4)P 2 via 5-phosphatases, such as SHIP and SHIP2 (30), whereas PtdIns(3,5)P 2 synthesis is likely to be independent of PtdInsP 3 . It has been shown that PTEN can also dephosphorylate inositol phosphates in vitro, but their significance as physiological substrates has not been fully resolved. Ins(1,3,4,5)P 4 was shown to be a weaker substrate than PtdInsP 3 by a factor of between 10 2 and 10 4 depending upon the conditions of assay (4). Ins(1,3,4,5,6)P 5 and InsP 6 are considered to be the substrates of a distinct phosphatase, MIPP. Indeed, studies involving brain and liver extracts from MIPP knockout mice were unable to detect any significant Ins(1,3,4,5,6)P 5 phosphatase activity in preparations where PTEN should have been present (31), although these assays were performed in the absence of a reducing agent, required for optimal activity of PTEN. PTEN overexpression, however, did lower cellular Ins(1,3,4,5,6)P 5 levels (16). In agreement with this study, we found that PTEN expression lowered Ins(1,3,4,5,6)P 5 levels (Table II and Fig. 3), but because these effects were also observed following expression of the catalytically inactive PTEN C124S, and the lipid-phosphatase inactive PTEN G129E mutants, we conclude that this effect is not mediated by the phosphatase activity of PTEN. They are, however, related to expression of PTEN-like proteins, because the same decline in Ins(1,3,4,5,6)P 5 is not observed when EGFP alone is expressed. We have previously found that inositol lipid metabolism by PTEN requires the C2 domain and that interfering with this domain (as with the PTEN M-CBR3 mutant) severely impedes lipid phosphatase activity but enhances the activity observed using a soluble substrate (4,10). We now show that this artificial mutant can lower Ins(1,3,4,5,6)P 5 levels without similar effects on inositol lipids or other inositol phosphates. We also note that levels of Ins(1,3,4,5)P 4 , another substrate of PTEN M-CBR3, are not affected. This observation can be explained by the relative rates of turnover of each molecule and their product-precursor relationships. The turnover of Ins(1,3,4,5)P 4 by endogenous phosphatases is far more rapid than that of Ins(1,3,4,5,6)P 5 . The concomitant rise in Ins(1,3,4,6)P 4 levels is likely indicative of a compensatory increase in Ins(1,3,4,5,6)P 5 biosynthesis, because the former is considered to be the metabolic precursor of Ins(1,3,4,5,6)P 5 . 
These cellular effects are mediated by the lipid phosphatase-like activity of PTEN M-CBR3, because PTEN G129E/M-CBR3, which should retain its protein phosphatase activity while losing activity toward inositol phosphates, was without effect. This suggests that Ins(1,3,4,5,6)P 5 is a key factor in mammalian cells and that, under normal circumstances, its level is under tight control. These effects are present without any detectable effect on any known inositol lipid. PTEN M-CBR3 thus provides a valuable tool whereby Ins(1,3,4,5,6)P 5 can be selectively regulated. This has enabled us to evaluate some of the physiological roles that have been previously ascribed to Ins(1,3,4,5,6)P 5 . The pleckstrin homology (PH) domains of many proteins have been associated with the ability of these proteins to associate with membrane surfaces. Although the ability of a small number of these proteins to bind inositol phosphates has been studied (see Refs. 22 and 32), it is only recently that these interactions have been considered to be physiologically relevant. Competition between inositol phosphates and inositol lipids for PH domains is clearly observed in the case of phospholipase Cδ 1 (PLCδ 1 ). We were able to address directly whether Ins(1,3,4,5,6)P 5 has a role to play in PKB regulation. We showed that PDGF was able to further raise, and that PTEN expression was able to reduce, the phosphorylation status of PKB in these cells. Reduction of Ins(1,3,4,5,6)P 5 levels following expression of PTEN M-CBR3 was without significant effect on PKB phosphorylation, suggesting it is not a physiological regulator and is incapable either of activating directly or of competing effectively with the lipid activators of this protein kinase. We have also studied the effect of PTEN M-CBR3 expression on protein trafficking, because Ins(1,3,4,5,6)P 5 and InsP 6 have been proposed to attenuate the desensitization of substance P receptors (25). The ability of Ins(1,3,4,5,6)P 5 to inhibit triskelion formation in vitro has been questioned, due to a particularly low affinity for AP-3 (24). The internalization of low levels of EGF is mediated by clathrin-coated pits (see Ref. 19). Low levels of EGF caused rapid internalization that was sensitive to the EGF receptor kinase inhibitor, PD158780, and reduced temperature. The effects were not altered by lowering Ins(1,3,4,5,6)P 5 , suggesting that this inositol phosphate has no role to play in the trafficking of tyrosine kinase-coupled receptors. Our approach has been somewhat different to that of other studies, for example, in which InsP 6 is injected into oocytes or other cells and the consequences monitored (25). We have determined the consequences of PTEN M-CBR3 expression on endogenous inositol phosphate levels, whereas the injection of a particular inositol phosphate does not guarantee that it is not metabolized to generate other compounds with biological activity. It also remains possible that Ins(1,3,4,5,6)P 5 -mediated trafficking is strictly limited to serpentine receptors coupled to heterotrimeric G-proteins. The higher inositol phosphates, Ins(1,3,4,5,6)P 5 and InsP 6 , have also been implicated in cell proliferation. Overexpression of cytosolic MIPP, achieved by removal of the N-terminal endoplasmic reticulum-targeting sequence and the C-terminal endoplasmic reticulum-recycling signal (Ser Asp Glu Leu), has been shown previously to lower Ins(1,3,4,5,6)P 5 by 60%, and InsP 6 levels by 40%, and to cause a decrease in the rate of cell proliferation (31).
It has also been reported that the transition from proliferation to hypertrophy in chicken chondrocyte maturation is accompanied by the up-regulation of Band 17, subsequently identified as the chicken homologue of MIPP (33). These results suggest that a reduction in cell growth correlates with a reduction in the levels of Ins(1,3,4,5,6)P 5 . Expression of PTEN M-CBR3 has been shown to inhibit proliferation of U87-MG (10,15) and LNCaP cells (15). We show that these effects correlate with a decline in Ins(1,3,4,5,6)P 5 without affecting PtdInsP 3 levels (Fig. 6, this study). These results would suggest that Ins(1,3,4,5,6)P 5 alone can alter growth rates of cells and act as a proliferative agent itself. PKB phosphorylation is not altered by PTEN M-CBR3 overexpression in any of the studies described. Because PKB phosphorylation is highly sensitive to small changes in PtdInsP 3 concentration (34), this suggests that PtdInsP 3 pools are not affected by PTEN M-CBR3, strengthening the argument that Ins(1,3,4,5,6)P 5 is the causative agent in altering proliferation. The studies of anchorage-independent and anchorage-dependent proliferation (this study) yield much the same data as those described above. Overexpression of PTEN M-CBR3, which lowers Ins(1,3,4,5,6)P 5 levels, decreased colony number and proliferation rate, although not to the same extent as wild type PTEN. Cell cycle analysis further strengthens our argument that the effects observed using PTEN M-CBR3 are mediated by Ins(1,3,4,5,6)P 5 and not by PtdInsP 3 . PTEN M-CBR3-expressing cells, while growing more slowly than control cells, showed no change in their cell cycle profile (Fig. 7). In contrast, PTEN-expressing cells showed evidence of G 1 arrest.
FIG. 4. Only wild type PTEN alters PKB phosphorylation levels. U87-MG cells were transfected using virus as described in the legend to Table II. Supernatants were prepared from lysates, and these (20 µg of total protein) were analyzed by SDS-PAGE (Novex) and probed with anti-phospho-T 308 , phospho-S 473 , and total PKB antibodies.
FIG. 6. a, inhibition of anchorage-independent colony formation by PTEN proteins. U87-MG cells were transfected with expression vectors for untagged PTEN proteins also carrying antibiotic resistance genes. Cells were seeded in soft agar and selected in antibiotic for 3 weeks. Colonies formed were counted, and numbers directly compared with vector-transfected controls. Data points are presented as % control ± S.E. calculated from six dishes plated from three independent transfections. b, inhibition of cellular proliferation by PTEN proteins. U87-MG cells were transfected with expression vectors for untagged PTEN proteins. Cells were selected in G418 and plated into 96-well multiplates and assayed for viable cells after 5, 7, or 9 days. Data (representative of n = 2) are presented as mean ± S.E. absorbance at 490 nm per well from 6 wells.
In summary, we have characterized the effects of PTEN expression on inositol phosphate and inositol lipid levels and shown they are limited to inositol lipids, primarily PtdInsP 3 . Endogenous PTEN probably does not play a significant role in the metabolism of inositol phosphates in vivo. We have determined that the PTEN M-CBR3 mutant specifically lowers Ins(1,3,4,5,6)P 5 without affecting PtdInsP 3 . Using PTEN M-CBR3 as a selective tool, we find that Ins(1,3,4,5,6)P 5 plays no significant role in PKB phosphorylation or receptor trafficking, but plays a positive role in proliferation, albeit not as strongly as PtdInsP 3 .
PTEN M-CBR3 is a versatile and specific regulator of Ins(1,3,4,5,6)P 5 and can be used to determine the physiological roles played by this relatively abundant and widespread inositol phosphate. FIG. 7. Cell cycle effects of PTEN proteins. Flow cytometric analysis of U87-MG cells transfected using virus for 60 h. DNA was stained with propidium iodide, and cellular content was analyzed. The percentage of cells in G 1 , S, or G 2 /M phases were calculated using CellQuest software.
2018-04-03T00:00:36.909Z
2004-01-09T00:00:00.000
{ "year": 2004, "sha1": "501318aaf8560d9db7b207441a6d981221d0f783", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/279/2/1116.full.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "473f13e7384e873c128cdb1cf5e77a35d6cea813", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine", "Biology" ] }
260303163
pes2o/s2orc
v3-fos-license
Time-Lapse Observation of Crevice Corrosion in Grade 2205 Duplex Stainless Steel

The objective of this study was to investigate and visualize the initiation and propagation of crevice corrosion in grade 2205 duplex stainless steel by means of time-lapse imaging. A transparent poly-methyl-methacrylate (PMMA) washer and disk were coupled with the duplex stainless steel to create an artificial crevice, with electrochemical monitoring applied to obtain information about the nucleation and propagation characteristics. All nucleation sites and corroding areas inside the crevices were recorded in situ using a digital microscope set-up. Localized corrosion initiated close to the edge of the washer, where the crevice gap was very tight, with active corrosion sites then propagating underneath the disk into areas with wider gaps, towards the crevice mouth. The growth was associated with a rise in anodic current interlaced with sudden current drops, with parallel hydrogen gas evolution also observed within the crevice. The current drops were associated with a sudden change in growth direction, and once corrosion reached the crevice mouth, the propagation continued circumferentially and in depth. This allowed different corrosion regions to develop, showing selective dissolution of austenite, a region with dissolution of both phases, followed by a region where only ferrite dissolved. The effect of applied electrochemical potential, combined with time-lapse imaging, provides a powerful tool for in situ corrosion studies.

Introduction

Duplex stainless steels (DSS) have a dual-phase structure consisting of an austenitic (γ) phase present in an island-like pattern within the ferritic (α) matrix phase. Compared to standard austenitic stainless steels such as Type 304 and Type 316, DSS have a lower Ni content and a higher Cr and Mo content. In general, because of their chemical composition and dual-phase structure, DSS are superior to general-purpose austenitic or ferritic stainless steels both in mechanical properties, such as proof stress and tensile strength, and in resistance to localized corrosion, such as pitting corrosion, crevice corrosion, and stress corrosion cracking (SCC). Therefore, DSS are used in severely corrosive environments, such as high-chloride-concentration solutions [1-3] or critical oil and gas infrastructure. However, it has been reported that DSS can develop crevice corrosion when exposed to seawater [4-6]. The crevice corrosion behavior and its mechanism in DSS have not been fully understood yet, with a major challenge related to the corrosion of both phases present in DSS. Crevice corrosion can occur in relatively mildly corrosive environments, and in engineering structures consisting of a number of components, crevice sites are unavoidable. Once crevice corrosion is initiated, the rate of propagation can be quite fast [6,7], with increased crevice corrosion rates reported under stress [8,9]. It has also been reported, in general, that crevice and pitting corrosion are accelerated in the presence of mechanical stresses [10,11]. Accordingly, the occurrence of crevice corrosion is still a major concern for the application of duplex stainless steel in severe service environments, and it is of pragmatic importance to develop effective preventive measures based on the underlying corrosion mechanism. Yakuwa et al.
[5] distinguished three distinct regions within a crevice, from the fringe to the center, when DSS was exposed to natural seawater in the Red Sea and the Arabian Gulf for over one year. This result showed that preferential dissolution of either the α- or γ-phase can proceed within the crevice of a DSS. The author has systematically investigated [12,13] the influence of the potential on the preferential dissolution behavior of both the α- and γ-phases of DSS in high-chloride-concentration, low-pH solutions. In addition, several reports are available on the corrosion behavior of various DSSs in varying environments [14-25]. Crevice corrosion with preferential dissolution of duplex stainless steels has also been reported by J. N. Al-Khamis et al. [26] using the IR mechanism. However, that study was carried out in an acid-chloride solution environment and therefore analyzed the corrosion behavior after substantial growth of crevice corrosion. Recently, various in situ observation techniques have been used to obtain a more in-depth understanding of corrosion kinetics. For example, direct observation of atmospheric corrosion rates of duplex stainless steels by X-ray tomography over approximately two years has been reported, where localized corrosion and preferential dissolution of DSS were observed in situ [27]. The research reported here is centered on providing supporting evidence of the nucleation and propagation characteristics of crevice corrosion in 2205 DSS in a neutral chloride environment, with a focus on better understanding the dissolution behavior of each phase within the crevice. To elucidate this degradation mechanism, a thorough understanding of the spatial and temporal propagation of corrosion is necessary. The objective of this study was to investigate the crevice corrosion initiation and propagation behavior of grade 2205 duplex stainless steel by means of in situ imaging observations using a transparent washer-crevice set-up.

Materials and Methods

A Type 2205 (UNS S32205) DSS sheet with the chemical composition shown in Table 1 was used for this study. The test piece dimensions were 40 × 24 × 6 mm (L × W × T). A hole ø6 mm in diameter was drilled into the sample, with Figure 1 showing a schematic drawing of the rectangular test piece. The entire surface was wet polished with #600 SiC grinding paper to remove any contaminants and create a uniform and controlled surface condition for the subsequent crevice corrosion tests. The electrolyte used was a reagent-grade 0.6 M NaCl solution, with a constant test temperature of 50 ± 1 °C. For all electrochemical measurements, a Pt counter electrode and an Ag/AgCl reference electrode (KCl sat.) were used. In the following text, all potential values are expressed in V vs. Ag/AgCl. For in situ observation of both crevice corrosion initiation and propagation behavior, the DSS test piece was interfaced with transparent poly-methyl-methacrylate (PMMA) artificial crevice formers, made up of a washer and a disk. Transparent PMMA was chosen so that the local dissolution behavior on the test piece surface could be observed with an optical microscope. The observed dissolution behavior was then correlated with the recorded current trace over time to understand the development of the corroded crevice sites. Figure 2 shows the schematic configuration of the in situ observation system, which was held together with electrically isolated Ti bolts.
The artificial crevice-forming PMMA disk (ø20 mm) and PMMA washer (ø12 mm) were inserted between the test piece and the PMMA electrolytic cell wall, with in situ observations carried out through the outside sidewall of this PMMA cell set-up. The surface for the in situ observation was further wet polished with SiC up to a #1200 finish just before the crevice corrosion set-up was assembled. Thereafter, the PMMA disk and washer were set between the test piece and the PMMA electrolytic cell wall by tightening the Ti bolt and nut with a torque of 1.96 N·m, with the sample immersed in the test solution. This ensured that electrolyte was able to remain inside the crevice, here simulated between the disk and the sample surface. In this configuration, a constant pressure was applied to the crevice zone beneath the PMMA washer, while, with increasing radial distance from the center of the crevice, the crevice gap tended to increase due to the nature of the PMMA cell set-up [28]. The open circuit potential (OCP) was measured, and the sample was then potentiostatically polarized while the in situ observation test was carried out. The applied potentiostatic conditions were +0.01 V, +0.03 V, and +0.05 V vs. Ag/AgCl. Three potentiostatic tests were carried out, with a new sample used for each individual test. The chosen potentials are above the threshold potential for crevice corrosion initiation of the specimen at this exposure temperature, since the aim was to observe both nucleation and growth of a crevice. The critical pitting temperature (CPT) and the associated Pitting Resistance Equivalent Number (PREN), which give an indication of the temperature range for initiating crevice corrosion, can be estimated from the chemical composition [29,30]. The DSS composition here gives an estimated PREN of 38.4 and a CPT of 45 °C. This means that if the potential of the specimen is held at the chosen potentials, crevice corrosion can be expected to occur. In addition, in order to explore the influence of the holding potential on the crevice corrosion behavior, three potential conditions were used, varying by 0.02 V. The current response of the test piece was recorded over time. The corroding area inside the crevice was continuously monitored, and an image was taken with the microscope at 15 s intervals during the test. The total duration of each test was 120 min, resulting in 481 images taken during each test, including the reference condition at t = 0 s. The dissolved area over time was analyzed with ImageJ analysis software, using a time interval of 16 images, resulting in a snapshot every 4 min [31]. After importing the images into ImageJ and setting the scale, the edges of the corroded area, i.e., the corroded/base metal boundary, were traced with a polygon selection, and the corroded area was measured. The height profile of the tip of the corroded area after the completed test was measured using a wide-area 3D profilometer, the Keyence VR-3200 (Keyence, Tokyo, Japan). Furthermore, for the test at +0.03 V, the corroded area inside the crevice after the test was observed using a scanning electron microscope (SEM). In order to understand the preferential dissolution in the corroded area, energy-dispersive X-ray (EDX) analysis was carried out to identify the ferritic (α) and austenitic (γ) phases based on their chemical composition.
The elements used for discrimination were Cr, Ni, and Mo, which are effective elements for corrosion resistance and typically have characteristic concentrations in each phase, with Fe defining the matrix element. These four elements were added together and normalized to an approximated value of 100 mass% (Cr + Ni + Mo + Fe = 100 mass%) and used for the discrimination of each phase.

Results

The measured OCP of the duplex sample for all three tests ranged between −0.118 V and −0.127 V. The sample was then potentiostatically polarized, and the current versus time was recorded. In parallel, the earliest onset of corrosion nucleation sites was recorded by monitoring the current response and correlating it with the recorded images, allowing inspection and comparison of all the data in detail. Figure 3 shows the typical change of the anodic current measured during the potentiostatic hold at +0.03 V. The arrows in Figure 3 highlight the initiation of different corrosion sites under the crevice former, where corrosion was visually identified by analyzing the recorded images. At the early stage of the potentiostatic test, the current dropped steadily to 4 µA due to passivation of the test piece. The current started rising after the first 30 min and then continued to increase steadily over time, interlaced with a number of sudden current drops. These current drops were observed in the samples polarized to +0.03 V and +0.05 V, but not clearly in the sample polarized to +0.01 V. Based on analysis of the in situ observations at discrete intervals of 4 min, the initiation of the first crevice corrosion site in Figure 3 was visually observed after 24 min of exposure, with the second nucleation spot appearing 4 min later, followed by a third independent nucleation point after >60 min. All nucleation sites were located under the crevice former, right at the edge of the washer where the pressure keeping the assembly together was applied. This location at the interface between the washer and the crevice former may have provided the right crevice-gap geometry, a physical location for electrolyte redox coupling, and the conditions needed to develop the local chemistry for the crevice to initiate. As shown in Figure 4b, the first initiation site of crevice corrosion was observed 24 min after the start of the test, and all corroded areas grew concentrically towards the crevice mouth with time. Drops in current were observed in the +0.03 V polarized sample after approximately 90 and 110 min (Figure 3), which coincides with the times at which both crevices reached the crevice edge. A large current drop was also observed in the sample polarized to +0.05 V, which was associated with a stop of apparent dissolution at the front, with another part of the area starting to grow from the inside of the crevice to the mouth. These current drops might therefore be related to sudden changes in the growth direction, or to a short interval of dilution of the crevice environment, with the crevice essentially communicating with the environment outside before the severity of the crevice solution picks up the growth again. Interestingly, no significant current drop was observed in the +0.01 V polarized sample, possibly because the recorded anodic current was far lower compared with the +0.03 V polarized sample, resulting in a far smaller area of corrosion. Gas bubble evolution was also observed around the corrosion area, with all bubbles migrating toward the crevice mouth (Figures 4c and 5).
There are two possibilities for gas formation inside the crevice: either (i) oxygen evolution, if the local acting potential increases significantly, reaching transpassivity and decomposing the water with oxygen evolution, or (ii) the development of local cathodic sites coupling to and short-circuiting nearby anodic reactions. The first scenario would mean that the recorded charge is an overestimation of the dissolved crevice volume, since the measured anodic current would include both metal dissolution and oxygen evolution. The second scenario, with hydrogen evolution, would mean that the measured charge is an underestimation of the dissolved volume, since a portion of the anodic current is then taken up and compensated by these local cathodic reactions. This description is, however, a simplification, and the rate of gas evolution would certainly affect the redox reaction rate inside the crevice by providing anodic or cathodic redox-active species. After the crevice reached the crevice mouth, corrosion further propagated circumferentially along the edge of the crevice, growing in size, as shown in Figure 4d. Such results were observed under all test conditions. The crevice gap is very tight at the edge of the washer because of the contact pressure from the Ti bolt, with the crevice gap increasing gradually as it approaches the crevice mouth due to the PMMA disk being slightly warped towards the crevice mouth [24]. The in situ observation suggested that crevice corrosion readily initiates at sites with a tight crevice gap and then propagates toward the mouth, where the gap is wider. It is possible that the crevice gap influences the crevice corrosion behavior if it varies with the surface finish or surface morphology of the specimen. After the crevice corrosion test was terminated, each image was analyzed to identify the earliest onset of localized corrosion. The corroded area over time for each potentiostatic test condition is shown in Figure 6a. Under higher applied potentials, crevice corrosion initiated earlier, with an increase in corrosion area over time. The time to initiate corrosion from the start of the test was 68 min, 24 min, and 8 min for applied potentials of +0.01 V, +0.03 V, and +0.05 V, respectively. Even at the highest potential of +0.05 V, the corroded area continued to increase with time until the end of the test. The corroded area at the end of the test reached up to 95 mm², out of a maximum area under the crevice former of approximately 200 mm². A correlation of the measured charge with the corroded area over exposure time for all three potentiostatic test conditions is shown in Figure 6b. In the +0.05 V test, the total amount of charge was, as expected, the highest, with the measured charge scaling with the applied potential. Figure 7 shows a microscopic image of (a) the corroded area, with (b) a 3D height distribution image and (c) the height profile along the X-Y line of the corroded area around the edge of the crevice for the test held at +0.03 V. The height profile was obtained using the wide-area 3D profilometer (Keyence VR-3200). As shown in Figure 7c, the depth of crevice corrosion was typically very shallow, below 1 µm, with far deeper dissolution observed close to the edge of the crevice former. The deepest dissolution site was approximately 10 µm deep, located on the outer circle close to the edge.
This result indicates that corrosion likely propagated in a shallow manner until it reached the edge of the crevice former and then spread circumferentially along the edge of the crevice former while also growing in depth. The time to reach the edge from the start of the corrosion test was 108 min, 84 min, and 48 min for applied potentials of +0.01 V, +0.03 V, and +0.05 V, respectively. A higher applied potential leads to a higher current density and faster growth of the crevice, which clearly shows that the crevice does not grow at a cathodically limited rate in the applied potential regimes used here. The corroded area also grew in depth at the outer edge of the crevice, indicating faster dissolution at these sites compared with all the inner areas, which remained shallow over time. An SEM image of the corroded area inside the crevice for the test at +0.03 V is shown in Figure 8. The left side of this figure shows the outside of the crevice, with the right-hand side showing the center of the crevice. The chemical composition of each of the points shown in Figure 8, analyzed by EDX (mass%), is given in Table 2. The Cr- and Mo-rich phases are the ferritic (α) phases, and the Ni-rich phases are the austenitic (γ) phases. The area around the tip of the corroded site can readily be differentiated into four zones (I–IV) of corrosion behavior, with the ferritic (α) and austenitic (γ) phases in DSS easily distinguished by their chemical composition: based on the EDX phase analysis, the austenite has higher Ni contents and the ferrite has higher Cr and Mo contents. The area just ahead of the tip (zone I) was not corroded at all, indicating passive material behavior. The zones in order from the tip of the corroded area towards the center of the sample were labeled Zone II to Zone IV, with Zone II showing preferential dissolution of the γ-phase, Zone III indicating dissolution of both the γ- and α-phases, followed by a zone of preferential dissolution of the α-phase only (Zone IV). This observation indicates that the crevice corrosion mechanism evolves over time, dynamically changing how the microstructure is attacked, possibly related to differences in the acting potential for dissolution.

Discussion

An interesting and important finding from the in situ observation of the initiation of crevice corrosion is the fact that the corrosion initiation site was already present at the lowest point of the anodic current response curve in Figure 3, i.e., before any significant current increase was measured. This means that crevice corrosion initiated at the lowest passive current of the test piece, and it is not clear whether a precursor already existed at this site. The visual observation method employed was limited to a simple optical inspection. The corrosion initiation sites underneath the edge of the PMMA washer were all very shallow, possibly supported by the very tight crevice gap underneath the edge of the washer, but it would also appear that the current density at the initiation site was large enough to grow a stable crevice.
Because of the small area, the current value is not large and lies within the observed passive background current response of the specimen. The local current density must be comparable to active dissolution, but the very small nucleation site meant that nucleation could not be observed directly in the electrochemical response. In other words, a small amount of dissolution generates a stable environment for crevice corrosion to grow, as described below. Crevice corrosion propagated towards the crevice mouth, where the crevice gap was wider, resulting in an increasing anodic current response (Figure 3). It is generally understood that the galvanic cell spanning from the anodic site inside the crevice to the cathodic region outside the crevice results in a large ohmic potential drop (IR drop), owing to the large solution resistance induced by the tight shape of the crevice [32,33]. In our in situ observation, gas bubble evolution was observed around the corrosion area. As already discussed, these bubbles can be associated either with oxygen formation at high anodic potentials or with hydrogen evolution in very low potential regions. Here, at the back of the crevice, the potential is far lower than the external holding potential due to the IR drop [34], and the generation of oxygen cannot take place at such a low potential. It is therefore reasonable to assume that hydrogen gas is produced by the local reduction of protons. Pickering et al. also confirmed the generation of bubbles from localized corrosion sites and identified them as hydrogen gas by mass spectrometric analysis [35]. Metal ions dissolved at the initiation site of crevice corrosion generate protons by hydrolysis and therefore decrease the solution pH. Electrons are produced by the anodic metal dissolution reaction, in addition to these protons. A part of these electrons is then consumed directly on the test piece surface inside the crevice, with the local cathodic reaction producing hydrogen gas. This observation suggested that the solution pH in the vicinity of the crevice corrosion area had decreased and that a local cathodic reaction reducing protons to hydrogen gas therefore occurred inside the crevice. In principle, it is not possible to directly measure the current corresponding to the electrons consumed by the reaction at the local cathode inside the crevice, since the generated cathodic charge is taken up by local anodic reactions. The effects of cathodic reactions inside crevices should therefore be considered, in addition to the cathodic reaction on the outside, when evaluating crevice corrosion rates. To support this assumption, a comparison was made between the measured volume of material dissolved in the sample polarized to +0.03 V and the volume determined via Faraday's law from the recorded current vs. time plot. The metal dissolution was assumed to have an average metal cation charge of n = 2.19, an atomic weight of M = 55.79 g/mol, and a density of ρ = 7.87 g/cm³, with a Faraday constant of F = 96,485 C/mol. The volume of dissolved material was approximated by dividing each corroded site into two discrete areas, with area (1) defined by a rectangular cross-section of the crevice (20 µm width and 10 µm depth) spanning along the circumference, while the other corroded areas inside the crevice were assumed to be 0.5 µm deep.
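As a check on the arithmetic behind this comparison (the resulting numbers are given in the next paragraph), the Faraday's-law conversion can be laid out explicitly. This is a minimal sketch using only the constants quoted above; it is not the authors' analysis script.

```python
# Faraday's-law conversion between anodic charge and dissolved volume,
# using the constants quoted in the text for the +0.03 V test.
F   = 96485.0   # Faraday constant, C/mol
n   = 2.19      # average metal cation charge
M   = 55.79     # atomic weight, g/mol
rho = 7.87      # density, g/cm^3

def volume_from_charge(q_coulomb):
    """Dissolved volume (mm^3) corresponding to a measured anodic charge (C)."""
    mass_g = q_coulomb * M / (n * F)   # Faraday's law
    return mass_g / rho * 1000.0       # cm^3 -> mm^3

def charge_from_volume(v_mm3):
    """Anodic charge (C) corresponding to a dissolved volume (mm^3)."""
    return (v_mm3 / 1000.0) * rho * n * F / M

# Values reported in the text:
v_optical = 20.9e-3   # mm^3, from image/profilometry analysis
v_charge  = 15.3e-3   # mm^3, derived from the measured charge

print(charge_from_volume(v_charge))   # ~0.46 C of measured anodic charge
print(v_optical / v_charge)           # ~1.37, reported as approximately 1.35
```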
The estimated volume of the corroded area was 20.9 × 10⁻³ mm³, while the volume of the corroded area obtained from the measured charge was 15.3 × 10⁻³ mm³. This means that the actual volume of the corroded area was approximately 1.35 times larger than the volume calculated from the measured anodic charge, which strongly suggests a contribution of the local cathodic reaction inside the crevice to the actual corrosion rate. In the correlation between the total measured charge and the corroded area for each potentiostatic test condition, the corrosion area increased linearly with increasing charge at each holding potential. Although the corrosion area continued to increase with time until the end of the test, the 3D height distribution analysis revealed that crevice corrosion grew in depth only along the outer edge of the crevice. The relationship between the amount of charge and the corroded area from visual observation is thought to deviate from proportionality because crevice corrosion propagated in depth at higher holding potentials. In order for crevice corrosion to continue to grow stably, a high chloride concentration and a low-pH solution must be maintained inside the crevice. This means that the hydrolysis reaction of dissolved metal ions and the entry of chloride ions to maintain charge balance also need to be sustained, which would support the observation of the deeper sites. Therefore, free water, which is consumed in metal dissolution and in the hydrolysis of the dissolved metal ions, needs to be supplied to maintain the circumferential spread of crevice corrosion along the crevice. Chloride ions and free water from the outside are likely to be more readily available at the edge than far inside the crevice. For this reason, crevice corrosion that initiated inside the crevice was expected to propagate towards the edge of the crevice and then to propagate circumferentially along the crevice edge. After the crevice corrosion reaches the edge of the crevice, most of the IR drop occurs at the crevice mouth due to the extremely narrow crevice gap there. Applying the IR theory from the study by J. N. Al-Khamis et al. [26], the drop from the potential of the passive zone outside the crevice to the potential of the active peak in the polarization curve is considered to occur within a small distance from the crevice mouth. On this basis, crevice corrosion was able to continue to grow in the depth direction at the edge of the crevice. The SEM observations after the test showed that the corrosion behavior around the tip of the corroded area was different from that of the tail towards the specimen center. It has been reported previously that growing crevice corrosion in DSS maintains four different corrosion morphologies during its propagation stage [12]. The potential dependence of the preferential dissolution behavior of DSS in a crevice corrosion environment has also been reported [13]. As already mentioned, during crevice corrosion propagation, the outside around the crevice is believed to be cathodic and the inside chiefly anodic. As mass transfer into and out of the crevice is restricted by the tight gap geometry, the macro-cell between the inside and outside of the crevice involves IR drops due to the large solution resistance. The solution resistance increases with distance from the outer edge of the crevice towards the center, which means that a potential gradient is formed from the outer edge to the center of the crevice.
Thus, the potential is noble at the outer edge of the crevice and becomes less noble towards the center of the crevice. This potential gradient is thought to have caused preferential dissolution of the γ-phase in the corroded periphery, followed by dissolution of the α-phase in this zone as the corroded area expanded, and then a change to preferential dissolution of the α-phase as the crevice corrosion propagated. As a result, corrosion grew towards the outer edge of the crevice while maintaining the corrosion morphology of all four regions. The preferential dissolution behavior of the corroded area in the present crevice corrosion test was in good agreement with the findings of other reports (e.g., [36]).

Conclusions

The crevice corrosion initiation and propagation behavior of grade 2205 duplex stainless steel was observed by means of in situ observation and electrochemical monitoring. The crevice corrosion behavior occurring in neutral chloride environments, such as seawater, was visualized and corresponded to the electrochemical response. Corrosion initiated underneath the edge of the washer, where the crevice gap was very tight, and then grew concentrically toward the crevice mouth. Under higher applied potentials, crevice corrosion initiated earlier, with an increase in corrosion area over time. A higher applied potential leads to a higher current density and faster growth of the crevice. After corrosion reached the edge of the crevice, it propagated circumferentially along the edge and also in depth. In situ observation with image analysis revealed that crevice corrosion nucleates while only very little anodic current flows, i.e., before the anodic current starts to increase. This means that crevice corrosion initiated while the current was still at the passive level of the stainless steel surface. Gas evolution was observed around the inner dissolved corrosion area. The effects of cathodic reactions within the crevice should be considered in the evaluation of crevice corrosion propagation rates. The measured volume of the corroded area of the sample was estimated at 20.9 × 10⁻³ mm³, while the volume of the corroded area obtained from the measured charge was 15.3 × 10⁻³ mm³. This means that the actual volume of the corroded area was approximately 1.35 times larger than the volume calculated from the measured anodic charge, strongly suggesting a contribution of the local cathodic reaction inside the crevice to the actual corrosion rate.
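As a worked illustration of the image-based area quantification described in Materials and Methods, the sketch below extracts a corroded-area-versus-time curve from an image series. Note that the study itself traced the corroded/base-metal boundary manually with a polygon selection in ImageJ; the simple intensity threshold, file naming and pixel scale used here are assumptions for illustration only.

```python
# Minimal sketch: corroded area vs. time from the in situ image series.
# The paper used manual polygon tracing in ImageJ; here a plain intensity
# threshold stands in for that step. File names, the mm-per-pixel scale and
# the threshold value are assumed, not taken from the paper.
import numpy as np
from PIL import Image

MM_PER_PX = 0.02   # assumed image scale (mm per pixel)
THRESH    = 80     # assumed grey level below which metal counts as corroded
FRAME_DT  = 15     # one image every 15 s (from the paper)
STEP      = 16     # every 16th image -> one data point every 4 min

frame_ids = range(0, 481, STEP)                    # 481 images per 120 min test
areas_mm2 = []
for i in frame_ids:
    img = np.asarray(Image.open(f"frame_{i:04d}.tif").convert("L"))
    corroded_px = np.count_nonzero(img < THRESH)   # dark pixels = corroded area
    areas_mm2.append(corroded_px * MM_PER_PX**2)   # pixel count -> mm^2

for i, a in zip(frame_ids, areas_mm2):
    print(f"{i * FRAME_DT / 60:5.1f} min  {a:6.2f} mm^2")
```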
Odontogenic tumors: A collaborative study of 218 cases diagnosed over 12 years and comprehensive review of the literature

Objectives: The objective of this study was to analyze the frequency and distribution of odontogenic tumors (OTs) in the Cappadocia region of Turkey and to compare the findings with those reported in the literature. Study Design: The records of the Oral and Maxillofacial Surgery and Pathology Departments at Erciyes University with a histologic diagnosis of odontogenic tumor (based on the World Health Organization classification, 2005) over a 12-year period were analyzed. The relative frequency of the different types of tumors was also analyzed and compared with the literature. Results: OTs in the present study constituted 2.74% of all 7,942 registered biopsies. A total of 218 cases of OTs were collected and reviewed. Of these, 94.04% were benign and 5.96% were malignant. The mandible was the most commonly affected anatomic location, with 170 cases (77.9%). Ameloblastoma, with a predilection for the posterior mandible, was the most frequent odontogenic tumor (30.28%), followed by keratocystic odontogenic tumor (19.5%), odontoma (13.4%), and odontogenic myxoma (8.5%). Conclusions: OTs are rare neoplasms and appear to show geographic variations across the world. In Cappadocia, Turkey, they are more common in the mandible, with ameloblastoma followed by keratocystic odontogenic tumor; the incidences observed in the present study are similar to those of previous studies from Asia and Africa, and in contrast to those reported from American countries. Key words: Odontogenic tumors, WHO classification, prevalence, jaws.

Introduction

Odontogenic tumors (OTs) constitute a heterogeneous group of lesions arising from the tooth-producing tissues or their remnants (1). From a biological point of view, some of these lesions represent hamartomas with varying degrees of differentiation, while the rest are benign or malignant neoplasms with variable aggressiveness and potential to develop metastasis (2). OTs are rare lesions of the mandible and maxilla that must be considered as part of the differential diagnosis of lesions occurring in the jaws (3). In humans, tumors of the odontogenic tissues are comparatively rare, comprising about 1% of all jaw tumors (4). The first internationally accepted classification system for OTs was published in 1971 by the World Health Organization (WHO) and was reviewed and updated in 1992 and 2005 (2). Knowledge of their epidemiology and clinical presentation is essential, and retrospective studies have been carried out in Asia (5–16), Africa (17–30), Europe (31–33), North America (3,34–40), and South America (41–49) to describe these lesions. The geographic distribution of these lesions is variable, mainly because of high genetic and cultural diversity (41). Their etiology is unknown, and the majority develop without an apparent cause (30). In the management of OTs, it is very important to establish a set of criteria such as sex, age, and location of the lesion. Epidemiological studies are crucial because they allow us to establish more precisely the occurrence of OTs in different populations, which in turn helps in making a provisional diagnosis and in planning the biopsy based on the clinical and radiographic features. It also aids in patient counseling and the scheduling of treatment (14).
Furthermore, there is no information available in the English-language literature on the relative frequency of OTs in Turkey or, particularly, in the Cappadocia region according to the 2005 WHO classification. The purpose of the present study was to determine the relative prevalence of the different types of OTs in this region and to compare it with the relative incidence of OTs in the world population through analysis of published studies and statistics.

Material and Methods

In the present study, the surgical histopathology records of the Departments of Oral Pathology, Faculty of Medicine, and Oral and Maxillofacial Surgery, Faculty of Dentistry, Erciyes University were reviewed retrospectively from August 2001 to January 2013. They were tabulated and systematically analyzed to assess the frequency of occurrence based on age, sex, anatomical site, and type. Hematoxylin and eosin stained sections were reviewed to confirm or correct a previous histological diagnosis according to the criteria of the 2005 WHO classification. The independent opinions of two examiners were compared to reach the final diagnosis and, in cases of doubt, another expert oral pathologist was consulted to obtain a diagnosis by consensus. A total of 218 cases of OTs were collected and reviewed. The literature was retrieved using PubMed, in English only. Recurrent tumors were considered as a single case. With regard to site distribution, the maxilla was divided into three anatomic regions: anterior, premolar, and molar; the mandible was likewise divided into three anatomic regions: anterior, premolar, and molar/ramus. Data were analyzed using SPSS software (version 11.5; SPSS, Inc., Chicago, IL). Tests were considered statistically significant when the p-value was <0.05.

Results

From a total of 7,942 oral and maxillofacial biopsies registered during the period from 1998 to 2013, 218 cases of odontogenic tumors were found. The most frequent lesion was ameloblastoma (AME) (30.28%), followed by keratocystic odontogenic tumor (KCOT) (26.15%) (Table 1). The proportion of benign to malignant lesions was 15.8:1. Taken together, AME, KCOT, odontoma (OD), and calcifying epithelial odontogenic tumor (CEOT) corresponded to nearly 78% of the cases. There were no statistically significant differences between the ameloblastoma and keratocystic odontogenic tumor groups. In male patients, KCOT, followed by AME and CEOT, were the most common lesions; in female patients, AME, followed by KCOT and OD, were the most frequent OTs (data shown in Table 1). Examples of odontogenic tumors diagnosed by histopathological examination are shown in Figure 1, and examples of cropped panoramic radiographs of patients with odontogenic tumors are shown in Figure 2. The other tumors comprised less than 6% of the series. An almost equal gender distribution was observed, with a slight predominance of males (the sample comprised 110 (50.5%) males and 108 (49.5%) females). Statistical analysis revealed no significant difference in the distribution of OTs in relation to gender. AME was the only benign tumor found in patients older than 80 years of age. The age of the patients ranged from 10 to 84 years, with a mean age of 34.52 years. The majority of cases were distributed between the ages of 20 and 49 years, with a peak incidence in the fourth decade of life (Table 2).
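The absence of a significant gender difference can be illustrated with the published counts (110 males, 108 females). The following is a minimal sketch using a chi-square goodness-of-fit test against an equal split; the exact procedure used in SPSS is not specified in the text, so this is an illustration rather than a reproduction of the original analysis.

```python
# Chi-square goodness-of-fit test of the observed gender split
# (110 males, 108 females) against an expected 50:50 distribution.
# Illustrative only; the exact SPSS test used is not stated in the paper.
from scipy.stats import chisquare

observed = [110, 108]
expected = [109, 109]   # equal split of the 218 cases

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")   # p far above 0.05: no significant difference
```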
The anatomical sites of all cases are also presented in Table 1. In general, the mandible was the most frequently affected site, corresponding to 77.9% of the cases, while the maxilla was affected in 22.1% of the cases.

(Table 1: Frequency, gender, and site distribution of 218 odontogenic tumors of patients at two centers; n: number of cases.)

The most frequently affected area was the mandibular molar/ramus segment, mainly by AME. The youngest patient who presented with a lesion was 10 years old and the oldest was 84 years old. The single case of ameloblastic fibroma affected a 14-year-old male patient; the lesion was located in the anterior mandible. There were 2 cases of dentinogenic ghost cell tumor (DGCT), which is a new entity according to the 2005 WHO classification. The malignant OTs, namely ameloblastic carcinoma primary type (A Ca, p), primary intraosseous squamous cell carcinoma (PIO SCCa, S), and clear cell odontogenic carcinoma (CCO Ca), were found more frequently in the maxilla. In the upper jaw, PIO SCCa, S was the most common lesion, mainly observed in the molar region, followed by CCO Ca, mainly in the anterior region. Most malignant OTs also predominantly occurred in patients older than 40 years (Table 2).

Discussion

The fact that most OTs remain painless throughout the course of the disease is the main reason that patients do not present until the tumors have reached enormous sizes (17). Knowing the frequency and basic clinical features of OTs is important because this allows us to establish more precisely the expression of these lesions in diverse populations, which in turn helps to identify the groups at risk and possible factors associated with their development, as well as to develop more precise differential diagnoses (2). In the present study, the relative frequency of odontogenic tumors was 2.74% of the total biopsied specimens recorded between August 1998 and January 2013. This incidence is similar to what has been reported in other studies, as OTs represent less than 3% of oral and maxillofacial specimens in North American (35,39) and other (8,9,20) series, although an Iranian series (7) had a frequency of 1.9%. This study confirms that benign tumors (94.4%) are the most frequently seen OTs; malignant OTs represented 5.6% of the present series. This frequency of malignant tumors is similar only to those reported in China (8,12) and is higher than those published in most other series (23,39,40,42,44–48). In studies using the new 2005 WHO classification, the most frequent OTs follow the sequence ameloblastoma (30.28%), KCOT (26.15%), and odontoma (16.06%) (Table 3). Studies that employed the 1992 classification usually reported ameloblastomas as the prevalent OT, followed by odontomas and odontogenic myxomas (Table 4). This regional difference has been attributed to the asymptomatic nature of many odontomas and the consequent lack of professional management, rather than to genetic or environmental differences among these populations (19,20,35,44). The present study found ameloblastoma to be the most frequent odontogenic tumor, accounting for 30.28%, followed by KCOT (26.15%), odontoma (16.06%), and CEOT (11.01%). There were no statistically significant differences among the ameloblastoma, KCOT, and odontoma groups. These results are comparable with the corresponding data reported by Jing et al. (10), Tawfik et al. (17) and Osterne et al. (46).
Ameloblastoma is reported to be the most frequent lesion in Chinese, Egyptian and Brazilian populations, followed by KCOT and odontoma. The high frequency of AME and low frequency of odontoma are consistent with data from Tanzania (21), Nigeria (20,25,44), and Sri Lanka (11), whereas studies from the USA (35), Canada (22), Chennai, India (14) and Estonia (9) stated that odontoma occurs more frequently. These discrepancies, with fewer odontomas in some populations than in others, are probably the result of geographic variation, but it should be mentioned that the incidence of odontoma in some countries was probably underestimated due to the unique clinical features of this tumor and insufficient hospital management (32). Most of these tumors exhibit self-limited growth and do not cause clinical symptoms. Many patients do not think it necessary to consult a general dentist or even an oral and maxillofacial surgeon, and in many cases treatment was performed in the office without the cases being recorded or sent for histopathological confirmation (9,17). The present study showed that AME was the most frequent OT, occurring mainly in the posterior region of the mandible. This is similar to other studies reported from Japan (14), Iran (7), India (14,15), Sri Lanka (16), Africa (17,18,20,23,25,30), Turkey (33), Hong Kong (13), and China (10,12), but in contrast to those reported from Canada (40), Chile (44), the USA (35), Chennai (14) and Mexico (34,39), where odontoma is reported as the most common odontogenic tumor. This also strengthens the belief that ameloblastomas are more common in Asians and Africans compared with Caucasians. A study from Brazil reported that ameloblastoma exhibits no gender predilection at diagnosis (43). Reichart et al. (49), in an extensive review of all of the cases reported in the literature, reported the average age at initial diagnosis in industrialized countries to be 39.1 years, compared with 27.7 years in developing countries. Sriram et al. (9) reported that almost 95% of ameloblastomas were located in the mandible, with a very high mandible-to-maxilla ratio (18.1:1). This is very high compared with the ratios reported by earlier studies; in the present study, this ratio was found to be 5:1. Reichart et al. (49) also reported that ameloblastomas are seen more frequently in the anterior region among Blacks (21.6%) compared with Caucasians (12.6%) and Asians (11.9%). In the present series, the second most common odontogenic tumor was KCOT (26.15%) and, in accordance with other series (50–52), it was responsible for nearly a quarter of the evaluated OTs. This incidence was somewhat higher than in some series (14,17,18,30) and somewhat lower than in others (8,10,15,34,41,46,48). This study also demonstrated that KCOT is rare in early childhood and has a strikingly higher prevalence during adolescence, when it was the most common OT. Odontomas are abnormal masses of calcified dental tissue, usually representing a developmental abnormality. Female patients were more affected than male patients in the present study, which is in agreement with reports from China (10), whereas Ladeinde et al. (20) in Nigeria reported no sex predilection in their study. In the present study, most odontomas were found in the posterior regions of both jaws. This finding is in accordance with many other reports from Mexico, Chile, Brazil, and Estonia (33,39,42,44).
Reports from African and Chinese populations generally present the highest frequencies of malignant OTs (8,10,12,20,25), while studies from North and South America have reported rates of 1.6% and lower (34,36,39,40,42,44), except for tertiary reference centers in the United States (37) and Mexico (38) and the present study. In the present study, malignant OTs were found in 13 cases (4.59%). Published reports also state that malignant odontogenic tumors are rare, representing 0.1–6.1% of all OTs (Tables 3 and 4). The mean age of 64 years for malignant tumors in the present study is higher than in other studies from southern Asia (mean 46 y) and eastern Asia (mean 41 y) (8–11,53). PIOC was the most common malignant entity encountered in this analysis, representing 4.58% (10 cases) of odontogenic tumors. It was found to occur more often in female patients and in the maxilla. The female predilection is in contrast to other reports (19,20). The site distribution of odontogenic tumors in large studies reported from different countries and regions is shown in Table 5. The majority of studies confirm the mandible as the anatomic site most frequently affected by OTs, especially by ameloblastoma and KCOT, which agrees with our findings (5,17,39,46). The mandibular preference in this study, 3.52:1, lies between the values reported by several studies from Nigeria and other African countries (19,20,25), which provide ratios of 2.9:1 to 5.7:1, and the lower values observed in American (39,41,44) and European (11,32) studies, with ratios of 1:1 to 2:1. These values can be explained by the prevalence of AME being far greater in African countries. In the present study, almost 83% of ameloblastomas were located in the mandible, with a very high mandible-to-maxilla ratio of 5:1. This is similar to the findings of Reichart et al. (49), who, in an extensive review of all the cases reported in the literature, found the ratio to be around 5.4:1. In the present study, ameloblastomas were frequently encountered in the molar-ramus region in the mandible and the molar region in the maxilla. In relation to sociodemographic data, a higher proportion of males were affected by OTs, and the average age at diagnosis was 35 years (48). The gender distribution of odontogenic tumors in large studies reported from different countries and regions is shown in Table 5. Avelar et al. (41) reported that male patients were more affected than female patients, agreeing with several studies from China (10,12), Nigeria (19,20), Egypt (17), India (9), and Canada (35). However, a preponderance of females was reported in Sri Lanka (11), Brazil (42,45), Mexico (39), Chile (44), Nigeria (25), and Estonia (32). In the present study, an almost equal gender distribution was observed, with a slight predominance of males. The literature states that patients with OTs are usually diagnosed in the second to fifth decades of life (8,45), but the frequency of different lesions varies with the age of the patient. In this study, odontogenic tumors showed a peak incidence in the fourth decade of life, which was probably related to the high prevalence of AME and KCOT in this age group; there was a prevalence of odontomas in the second decade, while other studies have described a high frequency of ameloblastomas and KCOT at that age. In older patients, there is a predominance of ameloblastomas and KCOT (8,10). Some studies reported that various types of OT, including adenomatoid odontogenic tumor (AOT), odontoma, and calcifying cystic odontogenic tumor (CCOT), were more frequent in the second decade of life (8,12,17).
In conclusion, the present study reflects not only differences in the distribution of odontogenic tumors but also similarities among the various population samples assessed both in Asia and around the world. These data are important to assess geographic differences in the incidence of lesions and to allow clinicians to make realistic judgments in counseling patients before biopsy about the probability of diagnosis and risks associated with nonspecific clinical or radiographic lesions. The incidences of OTs observed in the present study are similar to those in previous studies from Asia and Africa and in contrast to those reported from American and European countries.
Monitoring of mitochondrial oxygenation during perioperative blood loss

One of the challenges in the management of acute blood loss is to differentiate whether blood transfusion is required or not. The sole use of haemoglobin values might lead to unnecessary transfusion in individual cases. The suggestion is that mitochondrial oxygen tension can be used as an additional monitoring technique to determine when blood transfusion is required. In this case report, we report mitochondrial oxygen measurements in a patient with perioperative blood loss requiring blood transfusion.

BACKGROUND

Goal-directed management of perioperative blood loss remains a major challenge for clinicians. Acute anaemia resulting in inadequate oxygen supply should be avoided at all times, but currently no specific endpoint for personalised transfusion medicine is available. Perioperative insufficient oxygen delivery to tissues is an important determinant of postoperative complications such as stroke,¹ declined cognitive function,² kidney injury³ and cardiac ischaemia.⁴ Tissue oxygenation relies on adequate oxygen delivery, which is predominantly maintained by the arterial oxygen saturation, haemoglobin concentration and cardiac output. Transfusion of allogeneic erythrocyte concentrates plays an important role in treating acute anaemia for the prevention of tissue hypoxia. However, allogeneic blood transfusion itself is not without risks and is an independent risk factor for increased mortality and morbidity.⁵ One of the challenges in the management of anaemia is to determine whether blood transfusion is required or not. Current transfusion guidelines recommend haemoglobin levels as a trigger for red blood cell transfusion.⁶ ⁷ These guidelines are based on mean data and thus incorporate safety margins, which might lead to unnecessary transfusion in individual cases. Additionally, more physiologically based transfusion triggers may enable optimisation and personalised treatment during the management of acute blood loss and may help in preventing transfusion-related complications like haemolytic reactions, transfusion-related acute lung injury, infections and transfusion-associated circulatory overload. In a recent study in haemodiluted pigs, mitochondrial oxygen tension (mitoPO2) was suggested as a useful parameter to distinguish whether blood transfusion is necessary or not.⁸ Given that the mitochondrion is the final destination of oxygen, it seems logical to use mitoPO2 as a measure of transfusion need. The mitoPO2 can be measured by the COMET (an acronym for Cellular Oxygen METabolism) measuring system (Photonics Healthcare, Utrecht, the Netherlands).⁹ The measurement is based on the principle of oxygen-dependent quenching of the delayed fluorescence of protoporphyrin IX (PpIX).¹⁰ ¹¹ Application of the porphyrin precursor 5-aminolevulinic acid (5-ALA) on the skin induces PpIX in the mitochondria, where it acts as a mitochondrially located oxygen-sensitive dye.¹² After photoexcitation with a pulse of green light, PpIX emits delayed fluorescence, the lifetime of which is inversely related to the amount of oxygen. The technique is non-invasive and can be safely used in humans.¹³ ¹⁴ In this case report, we report the results of mitochondrial oxygen measurements in a patient with major intraoperative blood loss requiring blood transfusion.
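The lifetime-to-oxygen conversion underlying this technique follows the Stern-Volmer relation for oxygen-dependent quenching, 1/τ = 1/τ₀ + kq·PO2. The sketch below illustrates the conversion; the values of τ₀ and kq are illustrative placeholders, not the calibrated constants of the COMET device.

```python
# Stern-Volmer conversion from PpIX delayed-fluorescence lifetime to mitoPO2:
#   1/tau = 1/tau0 + kq * PO2  =>  PO2 = (1/tau - 1/tau0) / kq
# tau0 (lifetime at zero oxygen) and kq (quenching rate constant) below are
# assumed values for illustration; the COMET applies its own calibration.

TAU0 = 0.8e-3   # s, assumed lifetime in the absence of oxygen
KQ   = 800.0    # 1/(mmHg*s), assumed quenching rate constant

def mito_po2(tau_s):
    """Mitochondrial PO2 (mmHg) from a measured delayed-fluorescence lifetime (s)."""
    return max(0.0, (1.0 / tau_s - 1.0 / TAU0) / KQ)

# Shorter lifetimes correspond to higher oxygen tensions:
for tau_us in (20, 50, 200):
    print(f"tau = {tau_us:3d} us -> mitoPO2 = {mito_po2(tau_us * 1e-6):5.1f} mmHg")
```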
CASE PRESENTATION

A 69-year-old man with a history of diabetes mellitus (type II), hypertension and dyslipidaemia and a recent diagnosis of metastatic ascending colon carcinoma (cT4N2M1) required extensive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC). His medication included metformin, gliclazide, enalapril and simvastatin. Preoperative abdominal, respiratory and cardiac examinations were unremarkable. The preoperative haemoglobin level was 93.5 g/L. Prior to induction of anaesthesia, an epidural catheter was placed and epidural analgesia (ropivacaine/sufentanil) was given. For induction of anaesthesia, an intravenous bolus dose of propofol 110 mg followed by rocuronium 50 mg was administered, together with a continuous infusion of remifentanil 9 µg/kg/hour. Anaesthesia was maintained using sevoflurane. Directly after induction of anaesthesia, a continuous infusion of noradrenaline (0.40-0.60 µg/kg/min) was necessary to maintain normal blood pressure values (mean arterial pressure (MAP) >65 mm Hg). During surgery, extensive peritoneal carcinomatosis was observed, with a Peritoneal Carcinomatosis Index of 16.¹⁵ The surgeons performed a low anterior resection and an omentectomy. Thereafter, warm mitomycin C (a chemotherapeutic agent) was rinsed abdominally for 1.5 hours via three inflow and two outflow catheters. Finally, a previously constructed ileostomy was reversed and a terminal colostomy was placed. After 1 hour of surgery, during the low anterior resection, an acute rapid blood loss of 2500 mL occurred. Resuscitation of the blood loss was initially done using crystalloids and colloids (figure 1). Haemodynamic parameters were kept stable without the need for extra vasoactive medication. Blood transfusion was started after haemoglobin levels dropped below 88.6 g/L.

INVESTIGATIONS

In addition to the standard intraoperative monitoring consisting of invasive blood pressure measurements, peripheral oxygen saturation, 5-lead electrocardiography and temperature measurements, two additional monitoring systems were used during this operation: the COMET for monitoring mitochondrial oxygenation, and the oxygen to see (O2C) device (LEA Medizintechnik, Giessen, Germany) for monitoring microcirculatory blood flow velocity, tissue oxygen saturation (StO2) and the relative haemoglobin amount (rHb) in the skin.¹⁶ For the COMET measurements, a 5-ALA patch was applied the evening before surgery. Directly after the induction of anaesthesia, the 5-ALA patch was removed and the skin sensor of the COMET was fixated to the chest. COMET measurements were performed during surgery at an interval of one measurement per minute. The O2C probe was placed on the skin of the sternum next to the COMET measurement probe, providing semicontinuous readings.

OUTCOME AND FOLLOW-UP

The mitoPO2 value started around 70 mm Hg and slowly declined in the first hour, parallel to StO2, to reach values around 50-60 mm Hg. After approximately 1 hour of operation time, a sudden blood loss of 2.5 L occurred. Initially, adequate haemodynamic status was ensured by infusion of crystalloids and colloids, resulting in haemodilution and acute anaemia. Due to the anaemia, oxygen delivery to the tissues decreased, accompanied by declining microvascular and mitochondrial parameters, while other parameters such as MAP, StO2, rHb and lactate levels did not change during the ongoing blood loss. Heart rate and capillary blood flow did show a response to the bleeding, but at a later stage than mitoPO2.
Directly after resuscitation with red blood cells, a rapid increase of mitoPO2 was observed, with mitoPO2 values restored from below 10 mm Hg to up to 40 mm Hg (figure 1). After surgery, the patient was transferred to the intensive care unit (ICU), and discharge from the ICU to a surgical ward was possible after 2 days. The stay in the surgical ward was complicated by a paracolic fluid collection, which was surgically drained. After a further trouble-free recuperation, the patient was released from the hospital 15 days after surgery. Eight months later, a CT scan showed extensive recurrence of disease, eliminating curative treatment options. Unfortunately, the patient died 1 year later from the consequences of his illness.

DISCUSSION

In this case report, we show the use of mitoPO2 during acute perioperative blood loss and propose mitoPO2 as an additional monitoring parameter for perioperative transfusion management. In the current literature, there is controversy regarding the transfusion of blood components in oncological surgery. As early as the 1980s, an effect of blood transfusion on the malignancy recurrence rate was noted.¹⁷ These findings were based on simple bivariate correlation without adjustment for confounders. The correlation between blood transfusion and malignant recurrence rate became less obvious with the introduction of new statistical techniques whereby confounding factors were included in the analysis.¹⁸ ¹⁹ Although the negative effect of blood components does not appear to apply to all cancer types, this is not the case in patients with peritoneal colorectal carcinomatosis undergoing cytoreductive surgery and HIPEC: two recently published articles both showed an independent relationship between perioperative blood transfusion and survival rate, especially in patients with high-grade mucinous neoplasms.¹⁸ ¹⁹ These findings underline the importance of blood-sparing protocols during cytoreductive surgery and HIPEC. This case report shows the potential value of perioperative monitoring of mitochondrial oxygenation in the decision as to whether or not to transfuse a patient. The mitochondria are the largest oxygen consumers within the cell. Therefore, mitoPO2 reflects the balance between oxygen supply and oxygen demand.²⁰ Oxygen supply depends not only on the amount of haemoglobin but also on microvascular blood flow, the haemoglobin dissociation characteristics, the level of oxygen saturation of haemoglobin and the diffusion barriers between red blood cells and the tissue cells.²¹ Because so many factors are involved in maintaining adequate tissue oxygenation, it does not seem wise to use only haemoglobin levels in the decision to transfuse red blood cells. Therefore, we suggest using mitoPO2 and microvascular flow measurements in combination with point-of-care haemoglobin and standard operative measurements, such as blood pressure, heart rate and pulse oximetry, in the decision-making process for blood transfusion. The main goal is to reduce the number of blood transfusions required during oncological surgery, particularly during cytoreductive surgery and HIPEC, and thereby improve outcomes.

(Figure 1: Haemodynamics and transfusion during surgery. HR, heart rate; MAP, mean arterial pressure; mitoPO2, mitochondrial oxygen tension; rHb, relative haemoglobin amount; StO2, tissue oxygen saturation.)

Learning points

► Transfusion is not without risk; a more individualised threshold for determining blood transfusion is needed.
► Monitoring of oxygenation at the mitochondrial level is clinically possible by using the delayed fluorescence of protoporphyrin IX.
► Mitochondrial oxygenation monitoring provides a new tool for research in resuscitation and transfusion medicine.
Reversible regulation of conjugation of Bacillus subtilis plasmid pLS20 by the quorum sensing peptide responsive anti-repressor RappLS20

Abstract

Quorum sensing plays crucial roles in bacterial communication, including in the process of conjugation, which has large economic and health-related impacts by spreading antibiotic resistance. The conjugative Bacillus subtilis plasmid pLS20 uses quorum sensing to determine when to activate the conjugation genes. The main conjugation promoter, Pc, is by default repressed by a regulator, RcopLS20, involving DNA looping. A plasmid-encoded signalling peptide, Phr*pLS20, inactivates the anti-repressor of RcopLS20, named RappLS20, which belongs to the large RRNPP family of regulatory proteins. Here we show that DNA looping occurs through interactions between two RcopLS20 tetramers, each bound to an operator site. We determined the relative promoter strengths for all the promoters involved in synthesizing the regulatory proteins of the conjugation genes, and constructed an in vivo system uncoupling these regulatory genes to show that RappLS20 is sufficient for activating conjugation in vivo. We also show that RappLS20 actively detaches RcopLS20 from DNA by preferentially acting on the RcopLS20 molecules involved in DNA looping, resulting in sequestration but not inactivation of RcopLS20. Finally, the results presented here, in combination with our previous results, show that activation of conjugation inhibits competence and that competence development inhibits conjugation, indicating that both processes are mutually exclusive.

INTRODUCTION

Bacterial communication through a process named quorum sensing, in which signalling peptides are secreted and sensed (1,2), allows bacterial communities to adapt and coordinate their survival strategy when encountering adverse conditions by changing their expression profile. In Gram-positive (G+) bacteria, the signalling molecules, also known as pheromones, are small extracellular peptides often ranging between 5 and 10 residues. They can interact with sensor kinases embedded in the bacterial membrane that form part of two-component systems, or be (re)imported into the cell, where they then interact with cytosolic receptor molecules (3-6). A large number of cytoplasmic signal-peptide receptor proteins belong to the so-called RRNPP family of proteins, named after its prototypical members Rap, Rgg, NprR, PlcR and PrgX (for reviews see 7-10). Most of the genes encoding the RRNPP proteins in the phylum Firmicutes are co-transcribed with the gene encoding the signalling peptide. The signalling peptides are synthesized as a pre-propeptide, which is cleaved after being secreted to become the mature peptide. The mature peptide generally corresponds to the C-terminal region of the pre-proprotein and can be imported into the cell by the oligopeptide permease system (3,6). The RRNPP proteins are characterized by a two-domain structure composed of a large signal-peptide-binding C-terminal domain containing multiple tetratricopeptide repeats (TPR) and a smaller, three α-helical N-terminal effector domain. Binding of the signalling peptide to the C-terminal TPR domain modulates the interaction or activity of the effector domain with its ligand, resulting in downstream regulatory effects.
The effector domains can be classified into three groups: many adopt a helix-turn-helix (HTH) conformation allowing them to bind DNA, thereby repressing gene expression; some interact with a target protein, for example a transcriptional regulator, and modulate gene expression directly or indirectly; and some have phosphatase activity, which allows them to interfere with the phosphorylation relay involved in the phosphorylation-mediated activation of Spo0A, the master regulator of sporulation in Bacillus subtilis. RRNPP-mediated quorum sensing mechanisms are involved in various cellular processes such as the regulation of differentiation pathways like sporulation and competence, the activation of virulence genes, and the alteration of surface characteristics (9-11). Interestingly, some RRNPP proteins play crucial roles in horizontal gene transfer events, for example by determining whether a phage enters the lytic or lysogenic cycle (12), or by regulating the expression of conjugation genes present on a conjugative element. Conjugation is the process by which a conjugative DNA element is transferred from a donor to a recipient cell through a sophisticated pore that connects both cells. Conjugative elements can be present on a plasmid, in which case they are named conjugative plasmids, or they can be embedded within a bacterial genome and are then named integrative and conjugative elements (ICEs) (13; for review see 14). Conjugation is the main horizontal gene transfer route responsible for the spread of antibiotic resistance and virulence genes and therefore poses a serious worldwide problem (15-18). In addition to genes carried on the plasmid itself, conjugative plasmids can also mobilize the transfer of co-resident rolling-circle-type plasmids, many of which contain antibiotic-resistance or virulence genes (19). For example, the B. subtilis conjugative plasmid pLS20 itself encodes a putative VanZ protein that would confer resistance to the antibiotic teicoplanin (our unpublished results; 20). In addition, it can disseminate antibiotic-resistance genes carried by several rolling-circle plasmids like pUB110, pBC16, pMV158 and pTB913 by mobilizing them (21-23). Examples of RRNPP proteins that regulate the transfer of conjugative elements are RapI of the B. subtilis ICE element ICEBs1, PrgX of the Enterococcus faecalis plasmid pCF10, which harbours a tetracycline resistance gene (24), and RappLS20 of the B. subtilis plasmid pLS20 (25-27). Curiously, the RRNPP proteins RapI, PrgX and RappLS20 regulate expression of the conjugation genes in very different ways. The plasmid pCF10-encoded PrgX is a DNA-binding protein that can interact with two signal peptides exerting opposing effects on DNA binding. Interaction of the plasmid-encoded iCF10 with PrgX favours a conformation in which PrgX binds to DNA, resulting in repression of the conjugation genes, while binding of the recipient-cell-encoded cCF10 alters the conformation of PrgX and relieves PrgX-mediated repression (8). In the case of ICEBs1, the conjugation genes are repressed by a repressor named ImmR. Inactivation of ImmR, which results in activation of the conjugation genes and hence conjugative transfer of the ICE, can occur in two ways: first, as a consequence of the RecA-dependent SOS response to DNA damage, or second, when RapI stimulates the ICE-encoded protease ImmA to degrade ImmR (28). In the case of plasmid pLS20, repression of the conjugation genes by the plasmid-encoded transcriptional regulator RcopLS20 is relieved by RappLS20 (27).
As is the case for RapI of ICEBs1, RappLS20 activates conjugation in the absence of, or in the presence of low concentrations of, its cognate mature signalling peptide Phr*pLS20, while at higher levels Phr*pLS20 inactivates RappLS20. Phr*pLS20 concentrations will be relatively high or low when donor cells are predominantly surrounded by donor or recipient cells, respectively. Hence, conjugation will become activated only under conditions in which recipient cells are potentially present. The pLS20 conjugation genes are located in a single large operon that is under the control of the strong conjugation promoter Pc. On its left side, the conjugation operon is flanked by the divergently oriented regulatory gene rcopLS20 and the weak Pr promoter controlling rcopLS20 expression, which overlaps with the Pc promoter. The intergenic region encompassing the Pc and Pr promoters contains two RcopLS20 operators, OI and OII, separated by 75 bp. Binding of RcopLS20 to both operators results in DNA looping and causes tight repression of the conjugation promoter Pc. Simultaneously, RcopLS20 regulates its own expression: at low and high RcopLS20 concentrations the Pr promoter is activated and repressed, respectively. Phr*pLS20-unbound RappLS20 activates conjugation by relieving RcopLS20-mediated Pc repression, and binding of the peptide antagonizes the anti-repressive function of RappLS20, reverting the system to its default state (13,27). This multi-layered, DNA-looped genetic switch tightly blocks expression of the conjugation genes under conditions unfavourable for conjugation, while remaining sensitive enough to activate the conjugation genes accurately when appropriate conditions occur. Here, we have studied various unaddressed aspects of this regulatory circuit. We demonstrate that phrpLS20 expression is controlled by two promoters, and we have determined the relative strengths of promoter Pc and of the promoters of the genes regulating its activity. We found that the multi-layered regulation of Pc results in population-scale on/off switching. We show that RappLS20 is sufficient to relieve RcopLS20-mediated repression of the Pc promoter in vivo, by interacting directly with RcopLS20. We also show that each RcopLS20 operator is bound by one RcopLS20 tetramer and that DNA looping is achieved through interactions between the two operator-bound RcopLS20 tetramers, contrary to what has been proposed before. Interestingly, RappLS20 preferentially acts to interrupt DNA looping. Finally, Phr*pLS20 shares high similarity with the host-encoded PhrF, and PhrF and derivatives can bind and inactivate RappLS20, suggesting that pLS20 conjugation may be influenced by the host RapF/PhrF signalling pathway.

Bacterial strains, plasmids and media

Bacterial cultures were grown in LB broth or on LB agar at 37 °C, except for BTH101, which was grown at 30 °C. Where appropriate, the following antibiotics were added to the media: ampicillin (100 µg/ml), erythromycin (1 and 150 µg/ml for B. subtilis and Escherichia coli, respectively), chloramphenicol (5 µg/ml), spectinomycin (100 µg/ml), and kanamycin (10 µg/ml). E. coli BTH101 was used as the reporter strain for the BACTH system. For the BACTH assays, minimal medium M63 supplemented with maltose was used for growth (29,30). The strains and plasmids used are listed in Supplemental Table S1. All B. subtilis strains are isogenic to B. subtilis strain 168 (Bacillus Genetic Stock Centre code 1A700). The oligonucleotides used (Isogen Life Sciences, The Netherlands) are listed in Supplemental Table S2.
See supplemental material for the construction of plasmids and strains. Phr* pLS20 , PhrF*, PhrF*-R2K and PhrF*-I5Y peptides were synthesized by the Protein Chemistry facility of the CIB Institute.

Transformation

Escherichia coli cells were transformed using standard methods as previously described (31). For standard B. subtilis transformations, competent cells were prepared as described by Bron (32). For making knockout versions of pLS20cat, the high-competency protocol described by Zhang and Zhang (33) was used. For co-transformation of plasmids for BACTH assays, electro-competent E. coli cells were prepared as described earlier (31).

Conjugation assays

Unless specified otherwise, conjugation was carried out in liquid medium as described earlier (27). Thus, for standard conjugation experiments, overnight cultures of donor and recipient cells, grown in the presence of the appropriate antibiotics, were diluted 50-fold in fresh, pre-warmed (37°C) LB medium without antibiotics and grown in a shaking (180 rpm) water bath until an OD 600 between 0.9 and 1 was reached. Next, 200 µl each of donor and recipient cells were mixed in a 2.5 ml Eppendorf tube and incubated for 15 min at 37°C without shaking to permit conjugation. Finally, appropriate dilutions were plated on LB agar plates supplemented with the proper antibiotics to select either for transconjugants or for donor cells. When conjugation efficiencies were determined as a function of growth, overnight cultures were diluted to an OD 600 of 0.01. Donor and recipient cells were then grown separately (180 rpm), and 200 µl of the donor and recipient cultures were withdrawn at different times and processed as described above. Growth was followed by measuring the OD 600 at regular intervals. To study the effect on conjugation of overexpression of a given gene placed under the control of the inducible P spank promoter, IPTG was added to the pre-warmed LB medium used for inoculation of the overnight-grown cultures; unless mentioned otherwise, IPTG was added to a final concentration of 1 mM. All conjugation experiments were repeated at least three times. The entry into stationary growth (t = 0) was determined retrospectively based on the growth curve. Consequently, the time points at which samples were taken fluctuate slightly between experiments. Values for specific time points extrapolated from the curves of repeated experiments differed by <10%.
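Conjugation efficiencies of this kind are conventionally reported as transconjugants per donor, back-calculated from colony counts, dilution factors and plated volumes. The following is a minimal sketch of that bookkeeping; the helper functions and the numbers in the example are hypothetical illustrations, not the authors' code or data:

```python
# Hypothetical helpers for computing conjugation efficiency from plate counts.
# A minimal sketch assuming efficiency is expressed as transconjugants per
# donor cell, with CFU/ml back-calculated from countable plates.

def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """Back-calculate CFU/ml from a single countable plate."""
    return colonies * dilution_factor / plated_volume_ml

def conjugation_efficiency(transconjugant_cfu: float, donor_cfu: float) -> float:
    """Conjugation efficiency as transconjugants per donor cell."""
    return transconjugant_cfu / donor_cfu

# Invented example: 42 transconjugant colonies from a 10^3-fold dilution and
# 156 donor colonies from a 10^6-fold dilution, 0.1 ml plated in both cases.
tc = cfu_per_ml(42, 1e3, 0.1)
dn = cfu_per_ml(156, 1e6, 0.1)
print(f"conjugation efficiency = {conjugation_efficiency(tc, dn):.2e} transconjugants/donor")
```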
Flow cytometry

Overnight-grown cultures were diluted 100-fold in pre-warmed LB medium. Two millilitres of the culture were centrifuged (1 min, 14 000 g) when the OD 600 was between 0.8 and 1.0. After a washing step (2 ml of 0.2 µm-filtered 1× PBS), the pellet was resuspended in 1 ml of 0.2 µm-filtered 1× PBS. Cells were then measured directly on a FACSCalibur cytometer (Becton Dickinson, United States) equipped with an argon laser (488 nm). The fluorescence of at least 100 000 cells was analysed using a 530/30 nm band-pass filter and expressed in arbitrary units (AU). Sample data were collected using CellQuest Pro software (Becton Dickinson, United States) and analysed afterwards using FlowJo 6.4.1 for Mac (TreeStar, United States). B. subtilis strain 168 was included in each flow cytometry experiment as a negative control. Values shown in graphs correspond to the geometric mean (geomean) estimated by FlowJo.

Fluorescence microscopy

Cells grown in LB medium with or without chloramphenicol or spectinomycin were placed on agarose pads as described previously (27). Images were acquired using a Nikon Eclipse Ti-U inverted epifluorescence microscope and a QImaging Rolera EM-C2 EM-CCD camera through a 100× phase-contrast oil objective, and were processed using MetaMorph software. TIFF images were further processed in Inkscape.

Rap pLS20 -His6 purification

An overnight culture of E. coli BL21 (DE3) carrying plasmid pEST10B was used to inoculate (100-fold dilution) 1 l of fresh LB medium containing 30 µg/ml kanamycin, and the culture was incubated at 37°C with shaking. At an OD 600 of 0.5, expression of rap pLS20 -His6 was induced with 1 mM IPTG and growth was continued at 37°C for 2 h. Cells were collected by centrifugation and washed in 1/10 volume of buffer A (250 mM NaCl, 10 mM MgCl 2 , 20 mM Tris-HCl pH 8, 7% glycerol, 10 mM imidazole, 1 mM β-mercaptoethanol). Next, cells were centrifuged, resuspended in 1/3 volume of buffer A and lysed by sonication, followed by DNase I treatment for 30 min at 4°C. The lysate was then centrifuged twice (15k, 30 min) and the supernatant was collected and mixed with 1 ml of nickel-NTA agarose beads equilibrated with buffer A. The mixture was incubated end-over-end for 1 h at 4°C and then packed into a column. The column was washed extensively (>50 column volumes) with buffer A containing increasing concentrations (10, 20, 30, 50 and 100 mM) of imidazole. Next, the Rap pLS20 -His6 protein was eluted in eight 1 ml fractions of buffer A containing 500 mM imidazole. All fractions were analysed by SDS-PAGE and only fractions with >95% purity were pooled, dialysed against buffer B (20 mM Tris-HCl pH 8.0, 1 mM EDTA, 250 mM NaCl, 10 mM MgCl 2 , 7 mM β-mercaptoethanol, 20% v/v glycerol) and stored in aliquots at −80°C. Protein concentrations were determined by Bradford assay.

EMSA and Southern blotting

Gel retardation assays were carried out as described earlier (34). Thus, different fragments of the intergenic region between gene 28 and rco pLS20 were amplified by PCR using pLS20cat as template. The resulting PCR fragments were purified, and equal concentrations (300 nM) were incubated on ice in binding buffer [20 mM Tris-HCl pH 8, 1 mM EDTA, 5 mM MgCl 2 , 0.5 mM DTT, 100 mM KCl, 10% (v/v) glycerol, 0.05 mg/ml BSA] without or with increasing amounts of purified Rco pLS20 -His6 or Rap pLS20 -His6 in a total volume of 16 µl. After careful mixing, samples were incubated for 20 min at 30°C, placed back on ice for 10 min, and then loaded onto a 2% agarose gel in 0.5× TBE. Electrophoresis was carried out in 0.5× TBE at 50 V at 4°C. Finally, the gel was stained with ethidium bromide, de-stained in 0.5× TBE and photographed under UV illumination. The fragments F-A and F-B used in EMSA and subsequent Southern blot experiments were generated by PCR using plasmid pGR49A as template and primer sets [oGR154-oGR155] and [oGR153-oGR161], respectively. The probes specific for fragments F-A and F-B were also generated by PCR with pGR49A as template, in combination with primer sets [oGR155-oGR163] and [oGR156-oGR162], respectively. The DNA probes were labelled with horseradish peroxidase (HRP) using the glutaraldehyde-based ECL Direct Nucleic Acid Labelling and Detection kit (Amersham Biosciences). The conditions for EMSA were equal to those described above.
After electrophoresis, the gel was first submerged in a depurination solution (250 mM HCl) until the bromophenol blue dye had turned completely yellow (10 min), then in a denaturation solution of 1 M NaCl and 0.5 M NaOH until the bromophenol dye regained its blue colour (30 min), and finally for 30 min in a neutralization solution of 1.5 M NaCl and 0.5 M Tris-HCl pH 7.5. Next, the DNA was transferred to a positively charged nylon membrane (Amersham Hybond-N+) by capillary blotting (31). After transfer, the DNA was fixed to the membrane by UV crosslinking using a Stratagene UV crosslinker. For detection, the membrane was prehybridized for 1 h in hybridization buffer [5× SSC, 2% (w/v) blocking reagent, 0.1% (w/v) N-lauroylsarcosine, 7% (w/v) SDS, 50 mM sodium phosphate buffer (pH 7.0), 50% (v/v) formamide] at 42°C. Hybridization was carried out overnight at 42°C in hybridization buffer containing the denatured probe. After hybridization, the membrane was washed twice in primary wash buffer (0.5× SSC, 6 M urea and 0.4% SDS) at 42°C for 20 min each, and then twice in secondary wash buffer (2× SSC) at room temperature for 5 min each. Hybridized probes were detected following the manufacturer's guidelines.

BACTH experiments

The bacterial adenylate cyclase-based two-hybrid (BACTH) system (Agilent Technologies) was used to identify homomeric and heteromeric interactions between Rco pLS20 and Rap pLS20 . To perform these experiments, the genes encoding the Rco pLS20 and Rap pLS20 proteins were cloned in frame with the DNA regions encoding the C- and N-termini of the T18 and T25 domains of the Cya protein of Bordetella pertussis, in all possible combinations, as explained in Supplemental Figure S7. The T18 and T25 fragments were present on two different plasmids, pUT18 and pKT25, which carry different antibiotic resistance markers (ampicillin and kanamycin, respectively). Different combinations of the final plasmids were co-transformed into BTH101 competent cells to test all possible interactions between and within Rco pLS20 and Rap pLS20 .

Sedimentation velocity assays (SV)

Protein and DNA samples in 20 mM Tris, 250 mM NaCl, 10 mM MgCl 2 , 1 mM EDTA, 0.1 mM β-mercaptoethanol and 1% glycerol, pH 7.4, were loaded (320 µl) into 12 mm Epon-charcoal standard double-sector centrepieces and centrifuged in an XL-I analytical ultracentrifuge (Beckman-Coulter Inc.) equipped with both UV-VIS absorbance and Rayleigh interference detection systems, using an An-50Ti rotor. SV assays were performed at 48 000 rpm (167 700 g) for proteins and at 43 000 rpm (134 600 g) for DNA and protein-DNA complexes, and sedimentation profiles were recorded by absorbance at 280, 260 or 230 nm. Differential sedimentation coefficient distributions were calculated by least-squares boundary modelling of the sedimentation velocity data using the continuous distribution c(s) Lamm equation model as implemented in SEDFIT (35). The experimental s values were corrected to standard conditions using the program SEDNTERP (36) to obtain the corresponding standard s values (s 20,w ).
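For reference, the standard solvent correction applied by tools such as SEDNTERP converts an experimental sedimentation coefficient measured in buffer b at temperature T to standard conditions (water, 20°C) using the buffer density ρ, viscosity η and the partial specific volume v̄ of the macromolecule; a sketch of this standard relation (the notation is ours, not taken from the paper):

```latex
% Standard solvent correction of sedimentation coefficients, as implemented
% by tools such as SEDNTERP; s_{T,b} is the experimental value in buffer b
% at temperature T, and \bar{v} the macromolecule's partial specific volume.
\[
  s_{20,w} \;=\; s_{T,b}\,
  \frac{\eta_{T,b}}{\eta_{20,w}}\,
  \frac{1-\bar{v}\,\rho_{20,w}}{1-\bar{v}\,\rho_{T,b}}
\]
```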
Protein-protein and protein-DNA interactions were analysed by multi-signal sedimentation velocity (MSSV). Data were globally analysed with the SEDPHAT software (37) using the 'multi-wavelength discrete/continuous distribution analysis' model to determine the spectral and diffusion-deconvoluted sedimentation coefficient distributions, c k (s), from which the number and stoichiometry of Rap pLS20 versus Rco pLS20 , or Rco pLS20 versus DNA, molecules can be derived (38). The extinction coefficient of DNA fragment FIII, taking duplex hypochromism at 260 nm into account, was predicted by means of the Microsoft Excel application developed by Tataurov et al. (39).

Sedimentation equilibrium assays (SE)

Short-column (95 µl) SE experiments of Rap pLS20 were carried out at speeds ranging from 7000 to 10 000 rpm (3900-7900 g), with data collected at 280 nm, using the same experimental conditions and instrument as in the SV experiments. A final high-speed run at 48 000 rpm (167 700 g) was done to deplete protein from the meniscus region and obtain the corresponding baseline offsets. Weight-average buoyant molecular weights of Rap pLS20 , alone or in the presence of the pentapeptides, were obtained by fitting a single-species model to the experimental data using the HeteroAnalysis program (40), after correction for temperature and solvent composition with the program SEDNTERP (36). Equilibrium binding isotherms of Rap pLS20 with the different pentapeptides were built using a fixed Rap pLS20 concentration of 6 µM titrated with increasing concentrations of each pentapeptide (0.3-30 µM). The oligomerization state of Rap pLS20 was determined from the experimental apparent buoyant mass increments, using 0.7363 as the partial specific volume, calculated from its amino acid composition by SEDNTERP. The data were modelled with a three-parameter Hill function, as implemented in SigmaPlot 11.0.

Computer-assisted analysis

ClustalW was used to align B. subtilis- and pLS20-encoded Phr proteins. All graphics work was done using Inkscape (https://inkscape.org/). NIS-Elements AR Analysis software (Nikon Instruments; https://www.microscope.healthcare.nikon.com/products/software/nis-elements/nis-elements-advanced-research) was used to analyse time-lapse videos of conjugating cells. Graphs were plotted using Excel or SigmaPlot.

RESULTS

Rap pLS20 alone is sufficient to relieve Rco pLS20 -mediated repression of the P c promoter in vivo

We have previously shown that the transcriptional regulator Rco pLS20 represses the main pLS20 conjugation promoter P c , and that Rap pLS20 activates conjugation by relieving Rco pLS20 -mediated repression of the conjugation genes (27,34). However, it was not clear whether pLS20-encoded protein(s) other than Rap pLS20 are required to activate the P c promoter, or how Rap pLS20 relieves Rco pLS20 -mediated repression of the P c promoter. As a first approach to address these questions, we constructed an in vivo B. subtilis system in which the rco pLS20 and rap pLS20 genes were uncoupled from their native setting and placed under different inducible promoters, combined with a P c -lacZ reporter gene. Thus, we constructed strain PKS25 (amyE::P spank -rco pLS20 , lacA::P xyl -rap pLS20 , thrC::P c -lacZ; P spank and P xyl are IPTG- and xylose-inducible promoters, respectively). PKS25 cells were plated on Xgal-containing LB agar plates with or without the addition of one or both inducers, and the colour of the overnight-grown colonies was used as an indicator of P c promoter activity.
Figure 1. Evidence that the circuit regulating activity of the main conjugation promoter P c is composed of Rco pLS20 , Rap pLS20 and Phr* pLS20 . (A) Schematic genetic map of the conjugation operon and the upstream genes rap pLS20 (green arrow, rap), phr pLS20 (purple arrow, phr) and rco pLS20 (red arrow, rco). Rco pLS20 is a transcriptional regulator: it represses the P c promoter and activates its own promoter P r . Rap pLS20 is an antirepressor of Rco pLS20 . The Phr* pLS20 signalling peptide inactivates Rap pLS20 . See text for further details. Positions and directions of promoters are indicated with bent arrows. Transcriptional terminators are indicated with violet lollipop symbols. Proteins Rap pLS20 and Rco pLS20 are indicated above their corresponding genes using the same colour code. The mature Phr* pLS20 pentapeptide is indicated by purple stars. (B) Regulation of the P c promoter in an uncoupled in vivo system. PKS25 cells (amyE::P spank -rco pLS20 , lacA::P xyl -rap pLS20 , thrC::P c -lacZ) were plated on Xgal-containing plates that were supplemented or not with IPTG (10 µM), xylose (1%) or synthetic Phr* pLS20 peptide (10 µM), and screened after overnight incubation at 37°C.

An overview of the results is presented in Figure 1; representative images of colony colours are shown in Supplemental Figure S1. In the absence of either inducer the P c promoter was active and hence colonies turned blue, but colonies were white in the presence of only IPTG, in agreement with our previous results (34) showing that induction of rco pLS20 results in repression of the P c promoter. Colonies regained the blue colour when both rco pLS20 and rap pLS20 were expressed (plates containing both IPTG and xylose). This shows that Rap pLS20 alone is sufficient to relieve Rco pLS20 -mediated repression of the P c promoter. A control experiment showed that expression of Rap pLS20 alone did not affect the activity of the P c promoter (see Supplemental Figure S2). The activity of other known Rap proteins is regulated by a five- or six-residue peptide encoded by a small phr gene, mostly located directly downstream of the rap gene, whose primary product is subject to a secretion-processing-import pathway (3). A phr gene, phr pLS20 , is located downstream of rap pLS20 , and addition of the mature five-residue peptide Phr* pLS20 to cultures inhibited conjugation (27). Whereas this indicated that Phr* pLS20 inactivates Rap pLS20 , it did not exclude the possibility that inactivation of Rap pLS20 requires, besides Phr* pLS20 , other pLS20-encoded protein(s). To address this issue, we plated PKS25 cells onto plates containing, besides Xgal, IPTG and xylose, also the mature Phr* pLS20 peptide. As shown in Figure 1 and Supplemental Figure S1, colonies grown on these plates were white, demonstrating that Phr* pLS20 is required and sufficient to inactivate Rap pLS20 .

The B. subtilis-encoded PhrF* signalling peptide interacts with Rap pLS20 in vitro and is able to inactivate Rap pLS20 in vivo

The B. subtilis genome encodes 11 rap genes, eight of which are directly followed by a Phr*-encoding gene (3,41). When the full-length pre-protein Phr sequence encoded by pLS20 was aligned with those encoded by the B. subtilis genome (Figure 2A), it was clear that the sequence of the mature PhrF* pentapeptide is very similar to that of Phr* pLS20 : the residues at positions 1, 3 and 4 are identical; position 2 concerns a conservative substitution of lysine to arginine, and position 5 a change from tyrosine to isoleucine.
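A position-by-position comparison of this kind is trivial to automate; the sketch below is illustrative only, and the sequences used are hypothetical placeholders (the text specifies only that positions 1, 3 and 4 are identical, position 2 is a K/R pair and position 5 a Y/I pair):

```python
# Illustrative pairwise comparison of two pentapeptides. The sequences are
# placeholders, NOT the real Phr*pLS20/PhrF* sequences: only the K/R
# difference at position 2 and the Y/I difference at position 5 are from
# the text; the remaining residues are invented.
phr_pls20 = "AKAAY"  # hypothetical: K at position 2, Y at position 5
phrf      = "ARAAI"  # hypothetical: R at position 2, I at position 5

for pos, (a, b) in enumerate(zip(phr_pls20, phrf), start=1):
    status = "identical" if a == b else f"substitution {a}->{b}"
    print(f"position {pos}: {status}")
```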
The high level of similarity between PhrF* and Phr* pLS20 was surprising, and we wondered whether there might be crosstalk between PhrF* and Rap pLS20 and, if so, whether PhrF* might affect pLS20 conjugation. Besides PhrF*, we also tested two synthetic variants, PhrF*-I5Y and PhrF*-R2K, which can be considered intermediates between PhrF* and Phr* pLS20 because each contains only one difference (see Figure 2B). Recently, we showed that binding of Phr* pLS20 inactivates Rap pLS20 by altering its oligomerization state from dimer to tetramer (42). We therefore used sedimentation velocity (SV) analytical ultracentrifugation (AUC) experiments to test whether PhrF* and its two variants PhrF*-I5Y and PhrF*-R2K could inactivate Rap pLS20 as Phr* pLS20 does, by determining the oligomerization state of Rap pLS20 in the presence of these peptides. As shown in Figure 2C, when added in a 10-fold excess all the peptides tested caused tetramerization of Rap pLS20 , indicating that they all interact with Rap pLS20 . To determine the possible effects of the amino acid differences between PhrF* and Phr* pLS20 on the affinity of these peptides for Rap pLS20 , we performed a series of sedimentation equilibrium (SE) assays. In these experiments, the pentapeptide variants PhrF*-I5Y and PhrF*-R2K were also included to determine the possible differential effects of either of the two residues. A fixed Rap pLS20 concentration of 6 µM was titrated with increasing peptide concentrations (0.3-30 µM). Figure 2D shows the binding isotherms built from the experimental buoyant mass increments obtained at low speed and 280 nm through an empirical three-parameter Hill function (equation 1):

y = a · x^b / (K_d^b + x^b)    (1)

where y stands for the increase in buoyant mass, a denotes the maximum buoyant mass increase at saturation, x is the total concentration of peptide, K_d is the peptide concentration at half-maximal buoyant mass increase, and b is an empirical cooperativity parameter. Taking into account that the maximal buoyant mass increase obtained at the highest peptide concentration corresponds to the Rap pLS20 tetramer, as determined experimentally in the previous SE assays, a tetramerization model can explain the experimental binding isotherm obtained with Phr* pLS20 , with a macroscopic K d of 2.1 ± 0.1 µM. Analogously, for PhrF* a tetramerization model can account for the binding isotherm with a macroscopic K d of 5.3 ± 0.1 µM, evidencing the down-modulating effect of the two substitutions in its amino acid sequence. Both PhrF*-I5Y and PhrF*-R2K induced Rap pLS20 tetramerization with a macroscopic K d of 4.2 ± 0.1 µM, as shown in Figure 2D, indicating that the residues at positions 2 and 5 of Phr* pLS20 are both important and that they contribute similarly to the specificity of the peptide-Rap pLS20 interaction.
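A fit of equation (1) to such an isotherm can be sketched as follows. This is a minimal illustration with simulated data, not the authors' SigmaPlot analysis; the "true" parameter values used to generate the fake data are invented:

```python
# Minimal sketch: fitting the three-parameter Hill function of equation (1)
# to a binding isotherm. Simulated noisy data stand in for the experimental
# buoyant mass increments; parameter values below are invented.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, a, kd, b):
    """y = a * x^b / (Kd^b + x^b): a = max increase, Kd = half-saturation, b = cooperativity."""
    return a * x**b / (kd**b + x**b)

x = np.array([0.3, 0.6, 1.2, 2.5, 5.0, 10.0, 20.0, 30.0])  # peptide, µM
rng = np.random.default_rng(0)
y = hill(x, a=1.0, kd=2.1, b=1.5) + rng.normal(0, 0.02, x.size)  # fake data

popt, pcov = curve_fit(hill, x, y, p0=[1.0, 5.0, 1.0])
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["a", "Kd (µM)", "b"], popt, perr):
    print(f"{name} = {val:.2f} ± {err:.2f}")
```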
Next, we tested whether native PhrF* and its variants PhrF*-R2K and PhrF*-I5Y have an effect in vivo. For this, we plated PKS25 cells onto LB agar plates containing Xgal, 10 µM IPTG and 1% xylose, supplemented with different concentrations of Phr* pLS20 , PhrF*, PhrF*-R2K or PhrF*-I5Y. The results obtained are shown in Supplementary Figure S3, and a summary is given in Figure 2E. As expected, colonies were blue in the absence of peptide or in the presence of very low peptide concentrations, indicating that Rap pLS20 relieved Rco pLS20 -mediated repression of the P c promoter. Importantly, as for Phr* pLS20 , colonies were white in the presence of each of the other three peptides, strongly indicating that they also inhibit the activity of Rap pLS20 in vivo. In line with the in vitro AUC results presented above, different concentrations of the peptides were required to inactivate Rap pLS20 : while 10 µM of Phr* pLS20 was sufficient to obtain white colonies, 60 µM of PhrF*-R2K or PhrF*-I5Y and 120 µM of PhrF* were required to obtain the same result. Most likely, this reflects the different affinities of PhrF* and its variants for Rap pLS20 observed in vitro.

Figure 2. (B) Sequences of the mature Phr* pLS20 and PhrF* peptides, and the peptide variants PhrF*-R2K and PhrF*-I5Y. The deviant residues at positions 2 and 5 are indicated in red in Phr* pLS20 and blue in PhrF*. The same red/blue colour code is used in the peptide variants where their residue corresponds to that present in Phr* pLS20 or PhrF*. (C) PhrF* and its peptide variants PhrF*-R2K and PhrF*-I5Y can induce Rap pLS20 tetramerization. Sedimentation coefficient distributions, c(s), corresponding to 4.5 µM Rap pLS20 alone (black trace), or with 45 µM of Phr* pLS20 (blue trace), PhrF* (red trace), PhrF*-I5Y (green trace) or PhrF*-R2K (cyan trace). (D) Binding isotherms for the interaction of Rap pLS20 with Phr* pLS20 (black circles), PhrF* (yellow circles), PhrF*-I5Y (blue triangles) and PhrF*-R2K (red triangles). The solid curves represent the best fit of the three-parameter Hill equation to the SE experimental data. (E) The mature PhrF* and its variants PhrF*-R2K and PhrF*-I5Y can inhibit Rap pLS20 in vivo. Cells of B. subtilis strain PKS25 (thrC::P c -lacZ, amyE::P spank -rco pLS20 , lacA::P xyl -rap pLS20 ) were plated onto plates containing Xgal, IPTG (10 µM) and xylose (1%), and supplemented with the indicated concentrations of Phr* pLS20 , PhrF*, PhrF*-R2K or PhrF*-I5Y. Plates were screened for colour after overnight incubation at 37°C (see Supplemental Figure S3 for the original colonies).

Relative strengths of promoters involved in regulating conjugation

In a previous study, using transcriptional lacZ fusions, we demonstrated that the main conjugation promoter P c is a strong promoter, and that the divergently oriented and overlapping P r promoter driving expression of rco pLS20 is a very weak promoter whose activity was not detected without expression of its activator rco pLS20 (34). More recently, we constructed a promoter-screening system based on fusions with a gfp reporter gene, which is more sensitive and versatile than the lacZ-based system and allows promoter activity to be determined in individual cells (43). In that study, we confirmed that P c is a strong promoter (strain AND2A). To obtain a more comprehensive understanding of the relative strengths of the different promoters controlling the players involved in regulation of the conjugation genes, we used this gfp-based promoter-screening system to construct strains containing transcriptional gfp fusions with the P r and P rap promoters. Based on the following reasoning, we also tested the possibility that phr pLS20 is preceded by an additional promoter. rap pLS20 and phr pLS20 are transcriptionally coupled (the stop codon of rap pLS20 overlaps with the phr pLS20 start codon) and hence both are under the control of the promoter P rap . After transcription and translation, the synthesized Rap pLS20 remains inside the cell, whereas the small Phr pLS20 is secreted, and its concentration will therefore drop.
For proper functioning of the quorum-sensing system, one might expect the expression level of phr pLS20 to be higher than that of rap pLS20 . This could be achieved if phr pLS20 were expressed, besides from P rap , from an additional promoter. To test this possibility, we constructed strain AL21, in which the region upstream of phr pLS20 was cloned in front of the gfp gene. FACS analysis under standard conditions (see Materials and Methods) was used to determine the fluorescence levels of AL21 cells, as well as of control strains containing the gfp gene fused to the relatively strong and very strong IPTG-inducible promoters P spank and P hyperspank , respectively, grown in the presence of 1 mM IPTG. Of the pLS20 promoters tested, P c was the strongest (see Figure 3). In line with our previous results, its strength was similar to that of the P hyperspank promoter induced with 1 mM IPTG (see Figure 3 and reference 43). The fluorescence level dropped ∼20-fold in the presence of pLS20spec (strain AND2A P) due to repression of P c by the plasmid-encoded Rco pLS20 (see below). A very low promoter activity, barely above background levels, was observed for promoter P r when tested in the absence of pLS20. In the presence of pLS20spec, clear fluorescence was detected but the promoter activity was still very low, confirming that P r is a weak promoter even when activated by Rco pLS20 . Analysis of strain GR152 showed that P rap , controlling expression of rap pLS20 and phr pLS20 , is also a weak promoter. Interestingly, promoter activity with a strength almost double that of promoter P rap was observed for strain AL21. This demonstrates that phr pLS20 is controlled by an additional promoter, i.e. expression of phr pLS20 is controlled by the promoters P rap and P phr .

Figure 3. Flow cytometry analyses to determine relative promoter strengths and the activity of promoter P c at the population level. (A) Relative promoter strengths determined by flow cytometry using strains containing promoter P c , P r , P rap or P phr transcriptionally fused to gfp. A negative control strain and positive control strains containing gfp fused to the IPTG-inducible promoter P spank or P hyperspank were included. Samples withdrawn from late-exponentially growing cultures (OD 600 between 0.8 and 1) were analysed by FACS. At least 100 000 cells were analysed for each sample. Colour codes: grey, negative control strain 168; black, control strains containing the IPTG-inducible P spank (strain CG35) or P hyperspank (strain CG36) fused to the gfp gene (grown in the presence of 1 mM IPTG); blue, red, green and brown, strains containing gfp fused to promoters P c , P r , P rap and P phr , respectively. Light and dark coloured bars reflect strains lacking and containing pLS20spec, respectively. Names of the strains are given below the graphic. For each strain, the mean of the geomean determinations of at least three independent FACS analyses is given together with the standard deviation. (B, C) Homogeneous expression of P c -gfp in strains containing or lacking pLS20. (B) Samples of cultures of AND2A (amyE::P c -gfp, blue pattern), AND2A P (amyE::P c -gfp, pLS20spec, red pattern) or PKS89 (amyE::promoterless gfp, grey pattern) cells, collected at OD 600 = 1, were subjected to flow cytometry analysis. (C) An overnight-grown culture of B. subtilis strain 168 harbouring pLS20gfp28 (strain PKS182) was diluted 100-fold in fresh pre-warmed LB medium; samples were taken at the indicated times and analysed by flow cytometry.
Finally, and contrary to P c and P r , similar activities of promoters P rap and P phr were observed regardless of whether the strains contained pLS20spec (Figure 3), indicating that Rco pLS20 does not regulate the activity of these two promoters. Computer-assisted and manual analyses of the cloned DNA regions preceding rap pLS20 and phr pLS20 were performed to identify the putative promoters P rap and P phr . This resulted in the identification of sequences sharing similarity with the consensus sequence of σA-dependent promoters (5′-TTGACA-17/18 bp-TATAAT-3′). In the case of rap pLS20 and phr pLS20 , these sequences correspond to 5′-ttcgtTTGAtA-gacattagtattttaata-TATttT-tcctg-3′ and 5′-atgccTTGACt-gaggccttggatcatggc-TATgAT-aagcc-3′ (putative −35 and −10 hexamer sequences in upper case), respectively. The following data provide evidence that the identified sequences correspond to promoters P rap and P phr . We previously published a heatmap expression profile of pLS20cat based on RNA-seq of pLS20cat-containing cells harvested at the end of the exponential growth phase, which is also the state of highest conjugation (27). We reassessed these data and, instead of a heatmap, plotted the expression levels along the plasmid genome for the region spanning rap pLS20 and phr pLS20 . The plot in Supplemental Figure S4 shows that phr pLS20 is indeed expressed at higher levels than rap pLS20 . In addition, the positions of the putative P rap and P phr promoters identified on the basis of similarity with the σA consensus sequence correspond with the approximate start positions of expression upstream of rap pLS20 , and of the increased expression levels starting upstream of phr pLS20 , observed by RNA-seq.
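As an illustration of this kind of scan, the sketch below searches a sequence for −35/−10 hexamer pairs resembling the σA consensus with a 16-19 bp spacer; the mismatch-based scoring is our own simplification, not the authors' analysis pipeline, and the example region is the putative P rap sequence quoted above:

```python
# Illustrative sigma-A promoter scan: find -35 (TTGACA-like) and -10
# (TATAAT-like) hexamers separated by a 16-19 bp spacer, allowing up to
# two mismatches per hexamer. Simplified sketch, not the authors' pipeline.
def mismatches(word: str, consensus: str) -> int:
    return sum(a != b for a, b in zip(word.upper(), consensus))

def scan_sigma_a(seq: str, max_mm: int = 2):
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - 6):
        if mismatches(seq[i:i + 6], "TTGACA") <= max_mm:
            for spacer in range(16, 20):  # 16-19 bp spacer
                j = i + 6 + spacer
                if j + 6 <= len(seq) and mismatches(seq[j:j + 6], "TATAAT") <= max_mm:
                    hits.append((i, spacer, seq[i:i + 6], seq[j:j + 6]))
    return hits

# Putative P_rap region from the text (hexamers upper-cased in the original).
region = "ttcgtTTGAtAgacattagtattttaataTATttTtcctg"
for pos, spacer, m35, m10 in scan_sigma_a(region):
    print(f"-35 {m35} at {pos}, spacer {spacer} bp, -10 {m10}")
```

Run on the P rap region, this reports the TTGAtA/TATttT pair with an 18 bp spacer, matching the assignment made in the text.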
The P c promoter is homogeneously expressed

GFP-based transcriptional fusions allow quantification of relative promoter activity at the single-cell level. In addition, heterogeneous or bimodal expression of any particular gene in the population is easily visualized. We used this approach to study whether the multi-layered regulation of the P c promoter, including the Rap/Phr-based quorum-sensing mechanism (34), results in heterogeneous activity of promoter P c . Thus, samples taken at OD 600 = 1, when conjugation efficiencies are at their maximum (27), from cultures of AND2A cells (amyE::P c -gfp), AND2A P cells (amyE::P c -gfp, pLS20spec) and the control strain PKS89 (amyE::promoterless-gfp) were analysed by flow cytometry. The results presented in Figure 3B show a homogeneous activity of P c irrespective of the presence or absence of pLS20spec. The lower fluorescence levels in AND2A P are the consequence of the P c promoter being active for a shorter time compared with the constitutively active P c promoter in AND2A. In this set-up, the activity of the P c promoter was analysed using an ectopic copy of the promoter located on the bacterial chromosome, whereas the proteins regulating its activity are encoded by the resident plasmid, which itself also contains a copy of the P c promoter. Several factors might affect proper regulation of the uncoupled, ectopically located P c promoter, such as differences in local supercoiling or spatial location, or the absence of coupled transcription and translation. We therefore constructed a derivative of pLS20cat, pLS20gfp28, in which a copy of the gfp gene was placed behind the first gene of the conjugation operon (gene 28). Strain PKS182, harbouring pLS20gfp28, was then used to determine the fluorescence distribution pattern at the single-cell level in the population as a function of growth. Thus, on the one hand, samples taken at different times from a growing culture were analysed by flow cytometry and, on the other hand, time-lapse microscopy was used to visualize the fluorescence distribution in a growing microcolony. The results shown in Figure 3C and Supplemental Figure S5 revealed a homogeneous pattern of P c promoter activity in this set-up as well. Both approaches showed that most cells displayed a rather uniform level of fluorescence whose intensity first increased over time and, at later stages, declined in a similarly uniform manner. Together, these results provide compelling evidence that the different layers of regulation acting on the P c promoter result in a sensitive genetic switch that transiently activates the P c promoter in a coordinated manner in most or all pLS20-containing cells.

Rap pLS20 is not a DNA-binding protein and does not activate the P c promoter by competing with Rco pLS20 for binding to the Rco pLS20 operator sites

The results presented above show that Rap pLS20 is sufficient to relieve Rco pLS20 -mediated repression of the P c promoter. Rap pLS20 belongs to the RRNPP family of proteins, many members of which regulate transcription by binding to DNA (see Introduction). It was therefore not unlikely that Rap pLS20 is a DNA-binding protein and that it exerts its anti-repressive activity by competing with Rco pLS20 for binding to the same DNA motif. We tested this possibility by electrophoretic mobility shift assays (EMSA) using a purified C-terminally His-tagged version (referred to here as Rap pLS20 for simplicity), which, contrary to an N-terminally His-tagged version, was functional in vivo (see Materials and Methods and Supplemental Table S3). However, purified Rap pLS20 was not able to bind a 186 bp DNA fragment encompassing an Rco pLS20 binding site (operator O I ) (see Supplemental Figure S6). This was contrary to Rco pLS20 (see below), which did bind this DNA fragment. These results indicate that Rap pLS20 is not a DNA-binding protein and that it is therefore unlikely to exert its anti-repressive effect by competing with Rco pLS20 for the same DNA binding site.

Evidence for homomeric and heteromeric interactions between Rap pLS20 and Rco pLS20 in vivo

Another way in which Rap pLS20 might relieve Rco pLS20 -mediated repression of P c is through direct interaction with Rco pLS20 . Possible interaction between Rap pLS20 and Rco pLS20 was therefore tested in vivo and in vitro. For the in vivo approach we used the bacterial two-hybrid system (B2HS). For this, rap pLS20 and rco pLS20 were fused in frame at their 5′ or 3′ ends to the regions encoding the T18 or T25 fragments of adenylate cyclase, and combinations of the resulting plasmids were co-transformed into E. coli BTH101 and plated onto M63 agar plates supplemented with maltose, Xgal and IPTG. A schematic presentation of the constructed fusion genes is shown in Supplemental Figure S7, and the relevant crosses are presented in Figure 4A. As expected, the negative controls did not give colonies, whereas the positive controls resulted in the appearance of blue colonies. Interestingly, blue colonies were also obtained for two crosses, T25rap/rcoT18 and T18rap/T25rco, indicating that Rap pLS20 and Rco pLS20 interact in vivo.
Another cross, rapT25/T18rco, did not show a positive interaction, possibly because the linkers and the positions of the fusions prevented the interaction. Because these experiments were performed in the heterologous host E. coli, the results imply that the interaction between Rap pLS20 and Rco pLS20 does not require other pLS20- or B. subtilis-encoded proteins. Taking advantage of the vectors constructed, we also used the B2HS to test possible self-interactions of Rap pLS20 and Rco pLS20 . As shown in Figure 4A, different crosses of the T18 and T25 genes fused to either rap pLS20 or rco pLS20 also resulted in the appearance of blue colonies, indicating that both Rap pLS20 and Rco pLS20 self-interact. These in vivo results corroborate our previously published analytical ultracentrifugation results demonstrating that Rco pLS20 forms tetramers (34) and Rap pLS20 forms dimers (42) in solution.

Rap pLS20 and Rco pLS20 interact in vitro

Possible interaction between Rap pLS20 dimers and Rco pLS20 tetramers was studied by sedimentation velocity (SV). Rap pLS20 at 4.5 µM was titrated with Rco pLS20 concentrations ranging from 0.6 to 27.0 µM. Analysis of the mixtures revealed three new peaks at higher sedimentation coefficients than the Rap pLS20 dimer and the Rco pLS20 tetramer alone, corresponding to Rap pLS20 -Rco pLS20 complexes. As shown in Figure 4B, Rap pLS20 dimers interacted directly with Rco pLS20 tetramers in vitro to form a species at 7.1S that, once corrected to standard conditions (s 20,w = 7.7S), is compatible with the theoretical mass of a nearly globular (f/f 0 = 1.36) complex made of one Rap pLS20 dimer and one Rco pLS20 tetramer. Besides this predominant complex, minor amounts of species with higher sedimentation coefficients of 11.3S and 14.2S were observed, corresponding to undefined higher-order complexes. To extract the maximum information contained in the SV data, beyond the hydrodynamic separation of the complexes, we took advantage of the simultaneous absorbance data acquisition at 250 and 280 nm and globally analysed the data with SEDPHAT to obtain the diffusion-deconvoluted sedimentation coefficient distributions with spectral deconvolution of the absorbance signals, c k (s) (Figure 4C). Further improvement of the molar-ratio resolution was achieved by using both a mass-conservation constraint and a multi-segmented model restriction, based on our prior knowledge that, once mixed, Rap pLS20 at 4.5 µM reacts fully with Rco pLS20 at 25 µM and no free Rap pLS20 sediments in the low-s region from 0.1S to 6S. The MSSV analysis of the Rco pLS20 -Rap pLS20 complex sedimenting at 7.1S indicated that the areas under the peak corresponded to a stoichiometry of 2.1 mol of Rco pLS20 per mol of Rap pLS20 . This result is in tune with the above-mentioned putative complex composition of one Rco pLS20 tetramer bound to one Rap pLS20 dimer, deduced from the hydrodynamic behaviour observed in the previous SV assay.

Binding of Rco pLS20 to a DNA fragment encompassing operators O I and O II

The intergenic region between rco pLS20 and gene 28, which contains the Rco pLS20 operators O I and O II , is intrinsically bent (34).

Figure 4. In vivo bacterial two-hybrid system analyses to study homomeric and heteromeric interactions between Rap pLS20 and Rco pLS20 .
In-frame translational fusions were constructed with the N-terminal (T25) and C-terminal (T18) regions of the catalytic domain of the Bordetella pertussis adenylate cyclase (cya) gene (see Materials and Methods), resulting in vectors pEST1 to pEST8. Combinations of these vectors (crosses) were used to transform competent E. coli BTH101, and dilutions were subsequently spotted onto M63 plates supplemented with maltose, IPTG and Xgal. Functional complementation of the T25 and T18 fragments can occur when the proteins fused to these fragments interact with each other, resulting in indirect activation of the lac and mal operons, which allows growth of the E. coli cells and the appearance of blue colonies on M63 plates supplemented with maltose, IPTG and Xgal. In other words, the appearance of blue colonies indicates interaction of the protein moieties fused to the T25 and T18 fragments. Other possible crosses gave negative results (not shown). Relevant crosses are indicated and the names of the fusion proteins are shown. The panels show crosses to study interactions between Rap pLS20 and Rco pLS20 , self-interactions of Rco pLS20 , self-interactions of Rap pLS20 , and positive and negative controls.

Binding of Rco pLS20 to its operators O I and O II results in looping of the 75 bp spacer region, and this looped configuration is required for proper regulation of the P c and P r promoters (34; see also Introduction). Previous EMSAs showed that binding of Rco pLS20 to a 392 bp DNA fragment encompassing operators O I and O II , named fragment FV (see Figure 5A), resulted in the appearance of up to four retarded species, depending on the concentration of Rco pLS20 (34). The results presented above show that Rap pLS20 interacts with Rco pLS20 . However, it was not clear whether Rap pLS20 can interact with Rco pLS20 when the latter is bound to DNA, nor how Rap pLS20 inhibits the transcriptional regulatory activities of Rco pLS20 . We used AUC (described here) and biochemical approaches (described below) to gain insight into the mechanism by which Rap pLS20 relieves Rco pLS20 -mediated repression of the P c promoter. In a first experiment, we used fragment FV, encompassing Rco pLS20 operators O I and O II (see Figure 5A), to test possible effects of Rap pLS20 on the DNA-binding activity of Rco pLS20 . Samples of DNA fragment FV in the absence or presence of different concentrations of Rco pLS20 were analysed by SV at 260 nm to track the DNA. Increasing Rco pLS20 concentrations resulted in highly polydisperse sedimentation coefficient distributions, suggesting that, as observed in gel retardation assays, multiple nucleoprotein complexes were formed (see Supplemental Figure S8). The polydispersity of the complexes made it extremely difficult to analyse them in further detail. A striking result, however, was that in the presence of Rap pLS20 the species with the highest sedimentation coefficients (ranging from 20S to 35S), probably corresponding to looped DNA-Rco pLS20 complexes, disappeared. Control SV experiments showed no binding of Rap pLS20 to DNA fragment FV (not shown), in agreement with the EMSA result described above (Supplemental Figure S6). Moreover, the addition of Rap pLS20 did not result in the formation of DNA-protein complexes with increased sedimentation coefficients compared with those observed in the presence of only Rco pLS20 (Supplemental Figure S8). This strongly indicates that Rap pLS20 did not form stable DNA-Rco pLS20 -Rap pLS20 complexes.
Rather, it suggests that Rap pLS20 affects the DNA-binding activity of Rco pLS20 .

Rco pLS20 bridges two DNA fragments containing operator O II

The results presented above show that Rap pLS20 preferentially acted on high-molecular-weight Rco pLS20 -DNA complexes, suggesting that the nature of these complexes is fundamentally different from that of the lower-molecular-weight Rco pLS20 -DNA complexes.

Figure 5 (legend, partial). (B) Right panel: two different ways in which Rco pLS20 tetramers may bind to the DNA fragment. In binding mode 'A', one DNA fragment can bind a maximum of two Rco pLS20 tetramers; one and two Rco pLS20 tetramers would be bound in species RI and RII, respectively. In binding mode 'B', only one Rco pLS20 tetramer can bind to a DNA molecule, corresponding to retarded species RI; retarded species RII would correspond to a sandwiched configuration in which two DNA molecules are bridged through interactions between Rco pLS20 tetramers bound to either DNA molecule. (C) Schematic representation of the DNA fragments used for gel retardation and subsequent Southern blotting. Both DNA fragments contain Rco pLS20 operator O II (black rectangle) but differ in size and have unique sequences located at the 5′ side (small DNA fragment [556 bp], indicated with a red line) or the 3′ side of the O II operator (large DNA fragment [1109 bp], indicated with a green line). The approximate DNA sequences used for generating probes specific for these unique flanking sequences are indicated with tooth-like and flag symbols. (D) EMSA and Southern blot results of the individual fragments F-A and F-B, and of fragments F-A and F-B together. DNA fragments were run on an agarose gel either without or in the presence of 3.4 or 6.8 µM Rco pLS20 . After running, the gel was stained with ethidium bromide and photographed. Migration positions of free DNA and of the retarded species (RI and RII) are indicated. In addition, in the Southern blots the retarded RII species of the small and large DNA fragments are indicated with a red and a green asterisk, respectively. The red-and-green asterisk indicates the additional retarded species that is observed only when the reaction mixture contained both the large and the small DNA fragment. This retarded species, which migrated between the positions of the retarded RII species of the small and the large DNA fragment, hybridized with probes specific for both of these fragments. A duplicate gel was used for a Southern blot that was hybridized first with a probe specific for the small fragment; after stripping, the same blot was hybridized with a probe specific for the large DNA fragment. The horizontal lines in the lower part indicate which panels correspond to the stained gels (gel) and Southern blots (Sb, blue line), which fragment was used (F-A, F-B or [F-A + F-B]), and which probe was used: red and green tooth-like symbols for the small and large DNA fragment, respectively.

One attractive possibility is that the high-molecular-weight Rco pLS20 -DNA complexes correspond to looped DNA molecules. However, in this set-up it is hard to determine the nature of these high-molecular-weight complexes due to the presence of multiple Rco pLS20 -DNA species. Hence, we searched for a simpler experimental set-up involving Rco pLS20 -mediated loop formation. In previous work, we showed that gel retardation experiments gave very similar results for the ∼180 bp fragments containing only Rco pLS20 operator O I or operator O II .
In both cases, a maximum of two retarded species was observed, depending on the concentration of Rco pLS20 . At very low Rco pLS20 concentrations only one retarded species (named retarded species I, RI) was observed, but an additional, slower-migrating species (named retarded species II, RII) appeared at increasing Rco pLS20 concentrations and became the predominant retarded species at high Rco pLS20 concentrations (34). At the time, we postulated that these results could reflect cooperative binding of two Rco pLS20 tetramers to one DNA molecule containing an Rco pLS20 operator (see Figure 5B for a schematic view). If Rco pLS20 bound DNA in this mode, the helix-turn-helix domains of two of the four Rco pLS20 monomers of each Rco pLS20 tetramer would not be bound to the DNA molecule and hence would be available to bind other DNA molecule(s), which would result in the generation of more than two retarded species (Figure 5B, binding mode 'A'). An alternative mode of DNA binding that explains a maximum of only two retarded species would be that only one Rco pLS20 tetramer can bind to a DNA molecule containing a single operator, resulting in the faster-migrating retarded species RI. Retarded species RII would then be the result of two DNA molecules bridged through interactions of the Rco pLS20 tetramers bound to each DNA molecule (Figure 5B, binding mode 'B'). This situation would be similar to that of a DNA-looped configuration and, if correct, would provide an ideal system to test whether Rap pLS20 preferentially acts on DNA-looped structures. We therefore used the approach schematically presented in Figure 5C to test whether the retarded RII species observed in EMSA corresponds to two DNA molecules bridged by Rco pLS20 . In short, two DNA fragments were generated that share an overlapping region containing Rco pLS20 operator O II . The fragments differed in size, and the regions flanking the operator were unique to each fragment, allowing the fragments to be distinguished in Southern blotting experiments using fragment-specific probes. When analysed separately in gel retardation experiments in the presence of Rco pLS20 , each DNA fragment was expected to give two retarded species, albeit at distinct migration positions due to the different fragment sizes. When using samples containing both DNA fragments, the retarded species were expected to migrate to the same positions as observed when each DNA fragment was analysed alone. However, if the RII species corresponded to two DNA molecules bridged by two Rco pLS20 tetramers, an additional retarded species, corresponding to a complex of a large and a small DNA molecule, would be expected. This additional retarded species would migrate between the positions observed for the retarded RII species formed by two small or two large DNA molecules. If such an additional species were present, Southern blotting using probes specific for each DNA fragment could demonstrate the presence of both the small and the large DNA fragment in this retarded species. The results of this experiment, presented in Figure 5D, indeed show the presence of an additional retarded species that migrated between the positions of the retarded RII species formed by the two small and the two large DNA fragments, and that hybridized with probes specific for both the small and the large DNA fragment, consistent with two DNA molecules being bridged by Rco pLS20 tetramers bound to either DNA molecule.
To confirm these data by an independent approach, we took advantage of the stoichiometry determination of DNA-protein complexes by multi-signal sedimentation velocity (MSSV) (44). Thus, SV experiments were performed using samples containing the 219 bp DNA fragment FIII (encompassing Rco pLS20 operator O II ) alone or with increasing Rco pLS20 concentrations. Absorbance data at 260 and 280 nm were collected simultaneously and globally analysed with SEDPHAT to obtain the diffusion-deconvoluted sedimentation coefficient distributions with spectral deconvolution of the absorbance signals, c k (s), in addition to the hydrodynamic separation of the complexes. Sedimentation velocity titration of fragment FIII at 140 nM with Rco pLS20 (1-15 µM) showed the presence of two species with higher sedimentation coefficients than the DNA or protein alone, pointing to the formation of two different Rco pLS20 -DNA complexes, in line with the results obtained by gel retardation. At the lowest Rco pLS20 concentration assayed (1 µM), only a species sedimenting at 11.8S (s 20,w = 12.9S) was observed, whereas from 2.5 µM onwards a second species at 14.9S (s 20,w = 16.3S) appeared, and the amounts of both complexes gradually increased (Figure 6A). The MSSV analysis of the Rco pLS20 -fragment FIII complexes indicated that the areas under the peaks at 11.8S and 14.9S corresponded to stoichiometries of 3.9 and 4.3 mol of Rco pLS20 bound per mol of DNA fragment FIII, respectively (Figure 6B). These stoichiometries perfectly match the composition deduced by EMSA, consisting of one Rco pLS20 tetramer bound to one DNA molecule (RI) and two DNA molecules bridged by two Rco pLS20 tetramers (RII), respectively.
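The idea behind extracting stoichiometries from two-wavelength data can be reduced to a small linear system: each co-sedimenting peak contributes signal at 260 and 280 nm in proportion to the molar extinction coefficients of protein and DNA. The sketch below is a toy illustration of that principle with invented extinction coefficients and signal amplitudes; it is not the SEDPHAT c k (s) analysis itself:

```python
# Toy illustration of multi-signal deconvolution: given the 260 nm and
# 280 nm signal amplitudes of one co-sedimenting peak, solve for the molar
# concentrations of protein and DNA from their extinction coefficients.
# All numbers are invented for the example.
import numpy as np

# Rows: wavelengths (260, 280 nm); columns: species (protein, DNA fragment).
eps = np.array([
    [15_000.0, 2_400_000.0],   # epsilon_260 (M^-1 cm^-1), invented values
    [28_000.0, 1_300_000.0],   # epsilon_280 (M^-1 cm^-1), invented values
])
signal = np.array([0.615, 0.353])  # peak amplitudes at 260 and 280 nm (AU)

conc_protein, conc_dna = np.linalg.solve(eps, signal)  # path length folded into AU
print(f"protein/DNA molar ratio in the peak: {conc_protein / conc_dna:.1f}")
```

With these invented numbers the script reports a ratio of 4.0, echoing the roughly four Rco pLS20 monomers per DNA molecule derived for species RI and RII above.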
Rap pLS20 preferentially acts on Rco pLS20 oligomers involved in DNA looping

The results presented above demonstrate that two Rco pLS20 -DNA complexes are formed upon interaction of Rco pLS20 with a DNA fragment comprising only one Rco pLS20 operator, and that the retarded species RII observed in gel retardation experiments, or the peak at 14.9S observed by SV, corresponds to two DNA molecules bridged by two Rco pLS20 tetramers. We therefore used this experimental set-up, applying the DNA fragment containing operator O II (DNA fragment FIII, 219 bp), to assess the effects of Rap pLS20 on the two different Rco pLS20 -DNA complexes by two independent techniques, AUC and gel retardation, whose different underlying principles make them interesting complementary approaches. Using EMSA, we were able to establish conditions in which a given concentration of Rco pLS20 (0.25 µM) resulted in the appearance of only retarded species RI, whereas a four-fold higher concentration of Rco pLS20 resulted in the appearance of retarded species RI and RII (see Supplemental Figure S9). These conditions were used in the gel retardation experiments shown in Figure 7A. Addition of low concentrations of Rap pLS20 to pre-incubated mixtures of Rco pLS20 and DNA, which in the absence of Rap pLS20 formed the two types of Rco pLS20 -DNA complexes (i.e. retarded species RI and RII), resulted in the specific removal of species RII in a concentration-dependent manner (Figure 7A) without affecting retarded species RI (right panel). A similar effect was observed in SV assays at 260 nm, in which a pre-incubated mixture of DNA fragment FIII (140 nM) and 2.5 µM Rco pLS20 was titrated with increasing concentrations of Rap pLS20 (5-15 µM). The addition of increasing concentrations of Rap pLS20 down-modulated the formation of the two Rco pLS20 -DNA complexes at 11.8S and 14.9S observed in the absence of Rap pLS20 , and hence the interaction of Rco pLS20 with DNA fragment FIII (Figure 7B). These results indicate that Rap pLS20 is able to interact with Rco pLS20 bound to DNA and that this interaction results in the release of Rco pLS20 from the DNA, as demonstrated by the gradual increase of free DNA at 5.0S with increasing Rap pLS20 concentration. Interestingly, when the SV assay was performed at 230 nm to enhance the absorbance signal from the proteins, addition of 25 µM Rap pLS20 to the pre-incubated DNA-Rco pLS20 mixture showed the removal of Rco pLS20 from the Rco pLS20 -DNA complexes to form free DNA and the 7.1S Rap pLS20 -Rco pLS20 complex (Figure 7B). This shows that Rap pLS20 is not only able to detach Rco pLS20 from DNA fragment FIII but also binds Rco pLS20 to form a stable protein complex. Furthermore, particularly at 25 µM, Rap pLS20 acted preferentially on the Rco pLS20 -DNA complexes having the highest sedimentation coefficient, in tune with the preferential interaction of Rap pLS20 with retarded species RII observed in EMSA.

Figure 7. Rap pLS20 preferentially disrupts retarded species RII. (A) Effect of Rap pLS20 on Rco pLS20 -DNA and Rco pLS20 -sandwiched DNA studied by EMSA. Gel retardations were performed using a DNA fragment encompassing Rco pLS20 operator O II (fragment FIII, 219 bp). The DNA fragment was pre-incubated in the absence (−) or presence of either 0.25 µM (blue '+' symbols) or 1 µM (purple '+' symbols) Rco pLS20 . Next, no or increasing concentrations of Rap pLS20 were added to the mixtures and, after 10 min incubation, samples were loaded and run on an agarose gel. After running, the gel was stained with EtBr and photographed. Positions of unbound DNA (free DNA) and of the retarded species RI and RII are indicated. Increasing concentrations of Rap pLS20 were prepared using a two-fold dilution series and ranged from 0.14 to 1.1 µM. (B) Effect of Rap pLS20 on Rco pLS20 -DNA complexes studied by AUC sedimentation velocity. Sedimentation coefficient distributions at 260 nm, c(s), corresponding to DNA fragment FIII alone (dashed trace), Rco pLS20 -DNA complexes without Rap pLS20 (black trace), or with increasing Rap pLS20 concentrations: 5 µM (green trace), 10 µM (red trace) and 15 µM (blue trace). The dotted trace shows the sedimentation coefficient distribution at 230 nm corresponding to an Rco pLS20 -DNA mixture pre-incubated with 25 µM Rap pLS20 , showing the emergence of free DNA and a Rap pLS20 -Rco pLS20 complex at 7.1S. The inset zooms in on the s-range encompassing the Rco pLS20 -DNA complexes to facilitate comparison of the peak proportions.

DISCUSSION

The family of signal-peptide-regulated RRNPP proteins contains many members. They all share a similar two-domain structure consisting of a large, signal-peptide-binding C-terminal TPR domain and a smaller N-terminal effector domain. In all RRNPP members studied so far, the direct or indirect transcriptional effects exerted by RRNPP proteins are due to interaction of the N-terminal domain with a target molecule. Binding of the peptide induces allosteric changes in the protein that affect the function of the N-terminal effector domain (7).
Despite these simple basic features, there is extraordinary plasticity in their mechanisms of action, as illustrated by the three RRNPP members that play crucial roles in the regulation of conjugation: PrgX of the enterococcal plasmid pCF10, RapI of B. subtilis ICEBs1, and Rap pLS20 of B. subtilis plasmid pLS20 (25-27). The effector domain of PrgX forms a DNA-binding helix-turn-helix domain; binding of one of the two competing signal peptides affects the DNA-binding activity of PrgX, which is coupled to changes in the oligomerization state of the protein (45,46). RapI activates conjugation of ICEBs1 by relieving ImmR-mediated repression of the excision and conjugation genes (25,47). ICEBs1 encodes a protease, ImmA, which degrades the ImmR repressor, and overexpression of rapI results in excision of a deletion derivative of ICEBs1 containing only four genes: int, xis, immA and immR (48). However, overexpression of rapI did not activate the conjugation genes in the absence of the protease-encoding immA gene, indicating that RapI stimulates ImmA to degrade ImmR (28); the exact underlying mechanism is unknown. In the case of pLS20, Rap pLS20 activates conjugation by relieving Rco pLS20 -mediated repression of the conjugation genes (27). Here, we have made progress in better understanding the circuitry responsible for regulation of the pLS20 conjugation genes, particularly Rco pLS20 -mediated repression of the P c promoter and the in vivo and in vitro roles of Rap pLS20 . In the first place, we demonstrate that the mode of action of the pLS20-encoded RRNPP protein Rap pLS20 is fundamentally different from those of PrgX and RapI. While PrgX regulates expression of the conjugation genes by binding to DNA, our results show that Rap pLS20 does not bind DNA. This excludes the possibility that Rap pLS20 activates the P c promoter by competing with Rco pLS20 for DNA binding. RapI activates conjugative transfer of ICEBs1 by stimulating the ICEBs1-encoded protease ImmA to degrade ImmR, the repressor of the conjugation genes. Plasmid pLS20 does not encode a protease required for Rco pLS20 degradation, as expression of Rap pLS20 was sufficient to relieve Rco pLS20 -mediated repression of the P c promoter in the minimal in vivo regulatory circuitry of the conjugation genes present in strain PKS25 (amyE::P spank -rco pLS20 , lacA::P xyl -rap pLS20 , thrC::P c -lacZ). The presence or absence of a protease dedicated to degrading the repressor may have intriguing consequences for the conjugation pathway. The ICEBs1-encoded ImmR not only represses the conjugation promoter but also activates its own promoter; very low ImmR promoter activity was observed in the absence of ImmR (47). Hence, degradation of ImmR will result in activation of the conjugation genes and will simultaneously inhibit de novo ImmR synthesis, suggesting that conjugation is an irreversible process. The pLS20 conjugation pathway, by contrast, may be a reversible process, or at least more flexible than the ICEBs1 system, for the following reason. Like ImmR, Rco pLS20 also represses its conjugation promoter and activates its own expression (34; this work). However, activation of the pLS20 conjugation promoter is not due to degradation of the conjugation repressor but is instead the consequence of sequestration of Rco pLS20 by Rap pLS20 . Inactivation of Rap pLS20 by Phr* pLS20 would result in release of Rco pLS20 from the complex, allowing it to bind its operators again and resume its transcriptional role.
Evidence supporting this has recently been obtained (42). In the second place, we provide evidence that there is crosstalk between the conjugation and competence pathways. Competence is the state in which B. subtilis cells are able to bind and stably incorporate extracellular DNA into their genome via homologous recombination (for reviews see 49,50). During competence, genes are expressed encoding proteins involved in two functionally separated processes: a membrane-associated DNA translocation machinery that binds exogenous DNA and actively imports ssDNA, and proteins involved in homologous recombination acting on the adsorbed ssDNA. ssDNA is also generated during conjugation and is likewise transported through a membrane-embedded ssDNA translocation machinery, but in the direction opposite to that of the competence machinery. Various similarities exist between the competence- and conjugation-related ssDNA transfer machines (for review see 51). However, conjugation and competence development may not be compatible with each other. For example, simultaneous expression and assembly of the competence- and conjugation-related ssDNA translocation machineries might interfere with each other and/or compete for the same cellular position. In addition, the recombination enzymes synthesized during competence may act on ssDNA of the conjugative element. Importantly, the conjugation operon of pLS20 encodes a protein, Rok pLS20 (pLS20cat gene 64), that represses competence. Thus, activation of the conjugation genes simultaneously inhibits competence development (52). Here, we presented additional evidence showing that conjugation and competence are incompatible processes. In addition to the similarity between Phr* pLS20 and PhrF*, we showed that the mature PhrF* peptide can interact with Rap pLS20 in vitro, provoking Rap pLS20 tetramerization as observed for Phr* pLS20 (42), and that the calculated macroscopic Kd of PhrF* is only about 2.5-fold higher than that of Phr* pLS20 (5.3 and 2.1 µM, respectively). Analysis of two synthetic variants of PhrF* containing only one residue difference with Phr* pLS20 revealed that they had a very similar intermediate macroscopic Kd of 4.2 µM, showing that the non-identical residues at positions 2 and 5 contribute in similar proportions to the decreased affinity of PhrF* for Rap pLS20. Importantly, we show that PhrF* was also able to inactivate Rap pLS20 in vivo, raising the possibility that it may inhibit conjugation under natural conditions. PhrF* is the cognate peptide of the chromosomally encoded RRNPP protein RapF, which functions as an inhibitor of the competence pathway by interacting with ComA, which stimulates transcription of competence genes (53). Thus, on the one hand PhrF* stimulates competence by inhibiting RapF, and on the other hand we provide evidence here that PhrF* can inhibit conjugation. In summary, competence and conjugation appear to be mutually exclusive processes: activation of the pLS20 conjugation pathway results in the production of Rok pLS20, which inhibits competence development, and activation of the competence pathway by PhrF* probably aids in repressing pLS20 conjugation. Interestingly, a σH-dependent promoter, whose activity increases when cells grown on minimal or sporulation medium enter the stationary phase, controls the expression of phrF (41,54,55).
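For clarity, the fold difference quoted above follows directly from the two macroscopic dissociation constants:

\[
\frac{K_d(\mathrm{PhrF^*})}{K_d(\mathrm{Phr^*_{pLS20}})} = \frac{5.3\ \mu\mathrm{M}}{2.1\ \mu\mathrm{M}} \approx 2.5 .
\]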
Here, we have shown that expression of Phr* pLS20 is controlled by two σA-dependent promoters, P rap and P phr, whose activities are highest during exponential growth, and under standard conditions pLS20 conjugation reaches its maximum at the end of the exponential growth phase when cells are growing in rich medium (27). We have also improved our knowledge regarding transcriptional control of the regulators of the conjugation process. Using transcriptional lacZ fusions we have previously shown that P c is a strong and P r a weak promoter (34). Using the more sensitive gfp reporter gene, we have now confirmed that P c and P r are a strong and a weak promoter, respectively. Furthermore, we show that the promoter upstream of rap pLS20, P rap, is a weak promoter and that phr pLS20 is under the control of a second promoter, P phr, which is about twice as strong as the P rap promoter. Six out of the seven B. subtilis chromosomally located phr genes are also known to be controlled by an additional promoter (41). Upon secretion, the Phr peptides diffuse into the surrounding environment. Enhanced production due to the presence of a second phr-upstream promoter may be important to compensate for the diffusion-related decrease in concentration. In addition, the signal peptide concentration may be boosted under specific conditions when the phr gene is under the control of an alternative σ factor, as is the case for chromosome-encoded phr genes (41). Activation of several differentiation processes, including sporulation, competence and motility, depends on stochastic variability in expression of a master regulator and is linked to heterogeneity in the behavior of genetically identical cells within a culture (56-58). The heterogeneity may lead to so-called bet-hedging strategies, resulting in the presence of a subpopulation of differentiated cells even in the absence of conditions favouring the differentiation process, which is beneficial for the community at the population level against possible sudden adverse future conditions. Another evolutionary benefit of heterogeneity is division of labor, in which only a subpopulation of cells produces products for the benefit of the entire community. However, conjugation is an energy-consuming process with major impacts on cell surface and membrane components, requiring tight repression at times when conditions for successful DNA transfer are not apt. Therefore, heterogeneity-derived mechanisms will not be suitable for controlling conjugation. Indeed, the efficiency of pLS20 transfer is below the detection limit when cells grow under conditions that are antithetical to conjugation (>6 orders of magnitude lower than that observed under optimal conjugation conditions, 27). Notwithstanding, tight repression of the conjugation genes most of the time must be compatible with rapidly switching on the conjugation process when favourable conditions occur. This is achieved by a multi-factor regulatory circuit controlling the conjugation genes. Thus, the strong P c promoter permits high-level expression of the conjugation genes under conjugation-favourable conditions. The relatively strong P phr promoter assures the synthesis of rather high levels of the Phr* pLS20 signalling peptide, required to compensate for the diffusion effect on concentration and to accurately return conjugation to its default repressed state when conditions for conjugation are no longer apt.
The weak P r and P rap promoters generate low levels of Rco pLS20 and Rap pLS20, respectively. This, combined with DNA looping and the autoregulatory effects of Rco pLS20 on its own synthesis, is crucial for proper regulation of the conjugation genes. Low levels of Rco pLS20 permit accurate activation of the conjugation genes when appropriate conditions occur. However, low repressor levels will inherently increase fluctuations within and between cells that can affect the tight control. DNA looping, in particular, counteracts this. Due to the enhanced local concentration of the regulator, DNA looping simultaneously increases specificity and affinity, and at the same time controls the stochasticity of cellular processes (59). Consequently, this particular constellation involving multiple players and levels ensures that the conjugation genes are strictly repressed at most times, but permits accurate activation of the conjugation process when appropriate conditions occur. Using transcriptional gfp fusions as reporters to determine promoter activities in individual cells, we show that the P c promoter became activated rather homogeneously in all or most cells in the population, regardless of whether the P c-gfp fusion was placed ectopically on the chromosome or the gfp gene was placed behind the first conjugation gene, gene 28, on the plasmid. However, several considerations have to be taken into account. First, activation of the P c promoter does not automatically imply that it will result in conjugative plasmid transfer. For instance, checkpoints may be present downstream of the P c promoter. In addition, even when all conjugation genes are expressed, successful transfer may be impeded at several levels, e.g. unsuccessful mating pair formation or failure of establishment in the host. Moreover, environmental fluctuations at macro- and microscale occurring under natural conditions will affect individual cells or subpopulations, which will probably impede the population-scale activation of the P c promoter observed under our laboratory conditions. Finally, our work furthered our understanding of Rco pLS20 DNA binding and looping, and of the anti-repressor mechanism of Rap pLS20. We provided compelling evidence that an Rco pLS20 tetramer binds one operator, and that DNA looping occurs due to interactions between two Rco pLS20 tetramers bound to both of its operators. Both B2H and AUC results indicated that Rap pLS20 and Rco pLS20 interact with each other both in vivo and in vitro. These results are corroborated by our recent SAXS and size-exclusion chromatography (SEC) results (42). The AUC SV results, and particularly the multi-signal sedimentation velocity (MSSV) analysis, demonstrated that the majority of the Rap pLS20/Rco pLS20 complexes formed corresponded to one Rap pLS20 dimer interacting with one Rco pLS20 tetramer. Importantly, AUC and EMSA results demonstrated that Rap pLS20 was also able to interact with Rco pLS20 when bound to DNA. This interaction did not result in the generation of higher-order nucleoprotein complexes in which Rap pLS20 would merely alter the mode of Rco pLS20 DNA binding. Instead, both the AUC and the EMSA approach demonstrated that the addition of Rap pLS20 to preformed Rco pLS20-DNA complexes resulted in the release of Rco pLS20 from the DNA.
In addition, AUC results showed that the release of Rco pLS20 from DNA was accompanied by the concomitant appearance of the Rap pLS20/Rco pLS20 complex, demonstrating that Rap pLS20 activates the P c promoter by actively removing Rco pLS20 from its operators through the formation of stable heterocomplexes. To fulfil its anti-repressive role under natural conditions, Rap pLS20 has to act on Rco pLS20 complexes involved in DNA looping. The EMSA results were interesting in this respect, since they indicated that Rap pLS20 indeed acted preferentially on the Rco pLS20 protomers involved in DNA looping. Rap pLS20-mediated detachment of Rco pLS20 from DNA might be achieved and/or accompanied by an alteration in the oligomerization state of Rco pLS20. This is not an unlikely scenario, because RRNPP-mediated alteration of the oligomerization state has been observed before: RapF causes dissociation of ComA dimers, which are the transcriptionally functional form (60-62). However, AUC results showed that the molecular weight of the Rap pLS20-Rco pLS20 complexes corresponded to a stoichiometry of one Rap pLS20 dimer to one Rco pLS20 tetramer, strongly arguing that the interaction with Rap pLS20 does not affect the oligomerization state of Rco pLS20, and hence that Rco pLS20 might resume its regulatory role after it is released from the complex in the presence of Phr* pLS20. This view is indeed supported by our recent SAXS and SEC results, showing that the addition of the Phr* pLS20 peptide converts the large Rco pLS20-Rap pLS20 complex into complexes of smaller sizes that are similar in shape, size and elution volume to the individual Rco pLS20 and Rap pLS20 complexes (42). Together these results indicate that Rap pLS20 temporarily inactivates the regulatory functions of Rco pLS20 through sequestration, and that Phr* pLS20-mediated release of Rco pLS20 returns the system to its default, conjugation-repressed state. This regulation is fundamentally different from the RapI-mediated activation of the ICEBs1 element, in which RapI does not sequester the repressor but instead causes its degradation by activating the protease ImmA.
A Qualitative Study of Barriers and Enablers of Physical Activity among Female Emirati University Students

Interventions to promote physical activity participation should reflect the social and cultural influences relevant to the target demographic. The aim of this study was to explore perceptions of barriers to and enablers of physical activity participation among female Emirati university students. Five semi-structured focus groups were conducted (n = 25). Participants were asked open-ended questions about benefits, barriers and enablers of physical activity, and recommendations to promote participation. Emergent themes were identified using Nvivo software. Commonly identified benefits included improved health, weight management, improved mood, and stress reduction. The main barriers were low family support, competing time demands from domestic and academic activities, lack of convenient access to women-only facilities, and hot weather. The main enablers and recommendations related to social support from family and friends, accessible and low-cost women-only facilities, and structured supervised sessions. Findings suggest that there are specific sociocultural influences on physical activity among female Emirati university students. Approaches to promote participation could include identifying benefits consistent with family and cultural values, using social media for education, support and modelling, on-campus supervised physical activity sessions integrated with the academic timetable, low-cost women-only opportunities in the local residential area, and support for home-based activities.

Introduction

Physical activity is associated with a reduced risk of all-cause mortality, cardiovascular disease, type-2 diabetes, hypertension, breast cancer, colon cancer, gestational diabetes, ischemic heart disease, and ischemic stroke [1]. Physical activity is also associated with a range of psychological health benefits, including reduced risk of depression, anxiety and stress, as well as improved mood [2,3]. For general health benefits, the World Health Organization (WHO) recommends that adults do at least 150 min per week of moderate-intensity aerobic physical activity, or at least 75 min of vigorous-intensity aerobic physical activity throughout the week, or an equivalent combination of moderate- and vigorous-intensity activities [4].

Physical activity participation appears to decrease during the transition from adolescence to young adulthood [5,6], a period that includes the years spent at university. University students typically report low levels of physical activity, with participation ranging between 30 and 50%, and lower rates among women than men [7,8]. Research with university students in the Gulf Cooperation Council countries identified levels of physical activity ranging from 25 to 47% [9-14], slightly lower than among university students in Western countries [8]. Evidence on the barriers to and enablers of physical activity within specific demographic groups, such as university students, can inform interventions to promote participation. The number of perceived barriers to physical activity increases significantly from high school to university [15], and physically inactive students report a significantly higher number of barriers to physical activity participation than active students [10]. University students typically report lack of time and cost as key barriers [16-18].
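Because the WHO recommendation combines two intensities, a small illustrative check can make the equivalence explicit. This is a minimal sketch, not part of the study, and it assumes the common WHO convention that one vigorous minute counts as two moderate minutes:

```python
def meets_who_guideline(moderate_min: float, vigorous_min: float) -> bool:
    """WHO adult guideline: >=150 min/week of moderate-intensity activity,
    >=75 min/week of vigorous-intensity activity, or an equivalent mix
    (1 vigorous minute counted as 2 moderate minutes)."""
    return moderate_min + 2.0 * vigorous_min >= 150.0

# Example: 60 min of moderate walking plus 50 min of vigorous aerobics per week
print(meets_who_guideline(60, 50))  # True, since 60 + 2*50 = 160 >= 150
```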
Lack of social support from parents and/or friends has also frequently been identified [19-21]. As a corollary, social support has been commonly reported as an enabler of physical activity participation in university students [22,23]. Other commonly identified enablers are the availability of lessons and facilities at a reasonable cost [19], on-campus programs [24], and free time [19]. Such evidence from physical activity studies with university students in Western cultures may not be directly applicable to university students in Arab countries, due to distinct sociocultural differences, in particular for women. For example, less autonomy in decision making has previously been reported as a key barrier to physical activity among female Kuwaiti university students [25]. A recent review of studies in the Arab region found lower levels of physical activity among women than men in the United Arab Emirates, and cited evidence from a range of studies to attribute this to a convergence of general and gender norms, including parents favouring educational and spiritual activities, conservative dress that is unsuitable for physical activity, the need for women to be chaperoned in public spaces, a lack of gender-segregated facilities, a cultural value of comfort and avoiding physical exertion, and the view that public spaces are not appropriate for physical activity [26]. More evidence is needed from culturally and linguistically diverse groups to tailor the development of socially and culturally sensitive interventions. The aim of the current study, therefore, was to examine perceptions of barriers to and enablers of physical activity participation among a group of female Emirati university students.

Materials and Method

This study was awarded ethical approval by the Human Ethics Research Committee at The University of Queensland (#2017000013).

Participants and Procedure

All participants were recruited from a Higher Education Institute in the Middle East. The university has separate campuses for men and women. Participants were required to be women aged over 18 years, United Arab Emirates (UAE) nationals, and native Arabic speakers. A separate member of the research team who was known at the Education Institute attended classes at the female student campus, discussed the study with students, and verbally invited participation. Students were given a written information sheet, and those interested were asked to complete a written consent form. After providing consent, a date, time and location for the focus group discussions (FGDs) were agreed upon with the participants. Five FGDs were conducted in a private meeting room on the university campus, each led by the same member of the research team. The FGDs lasted 40-60 min. The FGDs were audiotaped and written notes were taken. All FGDs were conducted in English, the teaching language of the university. After the discussion groups, participants completed a short demographic questionnaire online.

Materials

The FGD guide is presented in Table 1. It was developed by the researchers, with questions on benefits of, barriers to, enablers of, and recommendations to improve physical activity participation for female Emirati university students. Participants were initially asked to discuss benefits of physical activity as an ice-breaker activity.
Clarification probes were used to explore participants' responses about intrapersonal (i.e., relating to the individual), interpersonal (i.e., relating to other people) and contextual (i.e., relating to the academic, physical and cultural setting) factors.

Table 1. The focus group discussion guide.
1. Do you think that there are benefits of participating in physical activity? What are these?
2. What makes it difficult to do physical activity? (Probed for: intrapersonal, interpersonal, social, contextual factors, e.g., "How might other people make physical activity difficult?")
3. What helps/would help you to participate in physical activity? (Probed for: intrapersonal, interpersonal, social, contextual factors, e.g., "What is it about where physical activity is done that could make it easier to do?")
4. If we wanted to have a physical activity program for female university students here, what would you suggest?
5. Are there any other points you would like to go back on or talk about?

The online demographic questionnaire included items about age (years), height and weight (used to derive body mass index (BMI)), general health (excellent, very good, good, fair, poor), ability to manage on available income (easy, not too bad, difficult some of the time, difficult most of the time, difficult all of the time), and life satisfaction (rated from 1-10; 10 being high). Some data were categorized for descriptive purposes, e.g., age was categorised as 18-20, 21-24, 25+ years; life satisfaction was categorised as 1-3 = low, 4-6 = moderate, 7-10 = high.

Data Management and Analysis

Data from the audiotape were transcribed verbatim and entered into Nvivo. The data transcripts were read and analysed by the separate member of the research team. The data were coded and organized into major and minor themes, according to frequency, which were then discussed with the senior author (NWB) to reach consensus. Emergent themes were labelled using constructs from the Theoretical Domains Framework [27], and are presented under the headings: intrapersonal, interpersonal, and contextual (academic, cultural/environmental).

Results

After the fifth FGD, it was decided that a saturation point had been reached, as no new themes had emerged. A total of 25 female university students completed the FGDs. The mean age of participants was 20.4 years (SD 2.6) and 40% were categorised as overweight/obese (BMI > 30). The majority of the participants reported high life satisfaction and excellent/very good health. Additional characteristics of the participants are presented in Table 2 (notes to Table 2: BMI was based on self-reported height and weight).

Benefits of Physical Activity Participation

Major themes identified for benefits of physical activity participation were improved mood, improved health, weight management, disease prevention, and stress reduction. Minor themes identified were disease management, reducing "negative energy" and improving self-confidence, meeting new people, social interactions, improving focus and concentration for academic work, and reducing academic-related stress.

Barriers to Physical Activity

All the identified barrier themes, with example participant statements, are summarised in Table 3. Strong interpersonal and contextual themes emerged. Commonly reported factors were low support from family and competing time demands from household/family responsibilities. Contextual barriers included lack of time because of a busy academic schedule, lack of suitable accessible facilities, and hot weather.
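Since BMI is derived from the self-reported height and weight items above, a brief sketch of the derivation and categorisation may help. The cut-offs below are the standard WHO adult ones, an assumption on our part since the study does not list its own thresholds:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """Standard WHO adult BMI categories (assumed; the study's cut-offs may differ)."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "healthy weight"
    if value < 30.0:
        return "overweight"
    return "obese"

print(round(bmi(70, 1.60), 1), bmi_category(bmi(70, 1.60)))  # 27.3 overweight
```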
Table 3. Barriers to physical activity among female Emirati university student participants (n = 25).

Intrapersonal
- Intentions: "A lot of them would just sit and use their laptops instead of going outside and doing exercises"
- Emotions (positive/negative affect): "If you have good mood you will do anything. If you are sad you will not do sports"; "Some ladies get bored quickly. If they get bored at the activity today they will not go tomorrow"; "Some ladies don't like to go to the gym-they only think for shopping and going out"
- Beliefs about Capabilities (self-efficacy): "People don't have enough courage to go and do exercise-they are not confident"
- Reinforcement (punishment): "If they make any exercise they will get tired and their body will hurt"
- Knowledge: "People don't know about the benefits of physical activities"

Interpersonal
- Social Influences (low social support, family): "Maybe they don't have the support from their parents"; "Parents don't like the girls to go out and do exercise"
- Social Influences (social norms): "Home responsibilities-taking care of the children, and for other family like our mothers"
- Social Influences (social pressure, friends): "Sometimes my friends will stop me from doing activities"

Contextual: Academic
- Environmental Context and Resources (environmental stressors): "The schedule is really busy for us and it's not convenient"; "We have more exams and a lot of projects in the course-all the time we are busy"

Contextual: Cultural/Environmental
- Environmental Context and Resources (material resources): "There is no gyms here near us to exercise"
- Social Influences (group norms): "In other cultures, they accept this muscle for women. In Emirati female, it is not so acceptable"; "Most of the places are mixed for ladies and men so we can't go there"
- Environmental Context and Resources (environmental stressors): "In the summer we cannot do exercise as it is very hot"
- Environmental Context and Resources (resources): "We are only students-for the majority the cost is the problem"

At the intrapersonal level, participants described that some women preferred more passive leisure activities, in particular using social media. "A lot of them would just sit and use their laptops instead of going outside and doing exercises."

At the interpersonal level, low support from family members was one of the most commonly cited barriers to physical activity. It was said that parents and other family members may not encourage women to do physical activity if physical activity was perceived as primarily for weight loss. "Families think differently to us and maybe sometimes when you are slim they ask why are you going to exercise-they don't want you to be more slim so they prevent you from doing anything. Like my friend is not allowed to do exercise because she is slim-her family will not allow her."

Another reason for low family support related to sociocultural norms regarding appropriate lifestyle activities for women. Participants described needing family permission to engage in activities, or support to enable participation (e.g., transport, money). "Some people have a culture where the family don't let them go to gyms and do exercise. Like in UAE, the family-we are more private in our life and they don't like the girls to go out and do exercise. Parents don't like it." "Doing exercise disagrees with the Emirati culture-the parents are not used to letting the girls go out at 6pm and 8pm to do walking. And there is no time in the day to do exercise."
"Some parents think that you are a girl and you should learn the kitchen work and you don't do exercise-this is from the culture". Many women described that family and domestic duties created a competing time demand against physical activity. The women would attend university in the day and then return home to care for the household, which resulted in little discretionary time. "Because most Emirati women are working mothers they don't have time from their work and they have to take responsibility for their children so they don't have time for exercise". "Not enough time, home responsibilities-taking care of the children, responsibility for other family like our mothers". At the contextual level, many of the students described that the intense academic study and assessment schedule left little time for physical activities. Additionally, it was noted that many of the physical activities held at the campus sports complex were at times that clashed with classes, and therefore prevented participation. "We have many college work and exams and projects, and we don't have time for gym and exercise". "The schedule is really busy for us and it's not convenient with the schedule of the sports complex. We can't manage it with our studies". Lack of suitable, accessible and affordable physical activity facilities was commonly identified as a barrier. Because of sociocultural reasons, women-only physical activity facilities were required, but it was reported that many of these were expensive to join and located far away from the home. "And as we are women, we don't have places to do exercises. Maybe two or three places we have-we don't have enough clubs for ladies only, and even if there is, it is expensive. Before was cheaper but now double the price. We have ladies' clubs but it's expensive". "It is not only the gym that we want to go to-there are also parks and there are some for ladies but there are not enough of them and they are a long distance away. Maybe I live in Sharjah but the place is in Abu Dhabi and I have no way to get there". The climate of the UAE was also identified as a significant barrier to physical activity. Many of the women said they would like to do outdoor activities, but the hot weather made this uncomfortable. Some women wore traditional clothing in public, and this heightened the discomfort for specific activities such as walking. "Because it is too hot, we cannot go outside and do exercise especially because we are wearing abaya and shayla, it is very hot for all weather. It is a very big problem to us because sometimes when we go to a walk they tell us that you have to wear the sport dress and come and walk so we can't walk in these areas as we have to wear our abaya and shayla" Enablers of Physical Activity All enablers themes are presented in Table 4. Strong interpersonal, contextual and academic themes emerged, which often mirrored the barriers reported previously. Commonly reported enablers were low-cost activities, accessible women-only facilities, friend and family support (including via social media) and physical activity classes integrated with the academic schedule. Table 4. Enablers of physical activity among female Emirati university student participants (n = 25). 
Intrapersonal
- Behavioural regulation: "The solution is to organise our schedule-it's just one hour for we will do the activities so we can manage it"
- Knowledge: "The knowledge of the importance of sports"

Interpersonal
- Social Influences (social support, friends): "They need motivation and to go with groups of friends. They will encourage"
- Social Influences (social support, family): "The parents can give them money to join centers"
- Social Influences (social modelling): "I watch videos on health care and that will motivate me"
- Social Influences (social support): "More coaches or trainers coming to our homes because many parents don't allow us to go to the gym"

Contextual: Academic
- Environmental Context and Resources (resources): "They should make time in our schedule to allow us to do exercise"
- Social Influences: "Maybe they can do workshops in the school and invite the parents to help the children to do exercise"
- Reinforcement: "The college to give course credit to people who participate in the gym programs"

Contextual: Cultural/Environmental
- Environmental Context and Resources (resources): "I think make the clubs near to the home so that people can walk to them and then transport won't be a problem"; "More private places for girls will help but not just gym-like parks for girls"; "Make the sports complex nearer to the college to make the student go"
- Reinforcement: "Making offers will help-if they do many exercise then they have to pay less to use the gym"
- Environmental Context and Resources (resources): "Having the equipment will help. I have a small gym in my house and that makes me excited to do some exercise"; "The gym should provide kindergarten so the lady will feel comfortable"

At the interpersonal level, social support was commonly reported as a key enabler of physical activity. Sources of social support included friends, family, and social media (e.g., Instagram influencers). Friends were reported as a preferred source of support, primarily for providing motivation. "First of all, I think its support from friends-if we go to the sport together and encourage each other to exercise it will help for me especially."

Family was also an important source of emotional and material support, including encouragement, transport, and costs for membership fees. "Encouragement from parents is important when the parents give their children help to be more healthy and do activities and sometimes it's about cost-the parents can give them money to join centres."

Social media was identified as a source of knowledge and modelling. "Social media maybe-for awareness and when you see some videos that advise you how to do the activities and what's the benefits for it. Some applications like Instagram when you see the posts-there are some people who post about the health and the fitness. They put daily tips about the fitness and how you will be a fit person."

At the contextual level, exercise classes as part of the college curriculum were identified as enabling students to engage in physical activity. "Before last year there was one subject where the girls go to the gym but now they have cancelled. If this came back it would be good for the girls to take this subject and go to the gym because some girls do not have enough time but if it is in the schedule then it is easier."

Also at the contextual level, one of the most commonly cited enablers was women-only facilities that were close to the residential area (i.e., within walking distance) and low cost.
"Now they say in each urban area there must be a club close. Now in our area they do it for each area for ladies only. From government, they approve it and each area must have a closed club to go and do exercise and also have activities". Recommendations to Improve Physical Activity Participation Participants identified the importance of engaging with female students to understand their physical activity interests. "Ask the females what they prefer to do and ask their opinions on what they like" Suggested types of physical activity for Emirati female university students included jogging, cardio/aerobic classes, cycling, swimming, and Zumba/dance classes. Participants described that activities should be fun and led by an instructor, who should have both fun, motivational, and educational qualities. "The program should be fun-not only this is activity and do it" "I think the people (the instructors) have to show that they care about us-they are friendly and funny and confident and motivate us. And tell us the benefit that we can get from each exercise". Structured activities were also important, with participants interested in physical activities that were well planned. "Not just take the ball and go play basketball-it needs more structure-not just trying to show that we are doing sport". The majority of participants preferred that physical activities be held at the university sports complex or at classrooms on the campus, and many agreed that this would encourage them to participate during breaks in their academic schedule. "On the campus-not outside-like they can allocate some classrooms (for activities) on the campus and we can start. If it is closer to the campus then we can come in the break in our schedule". Discussion This study offers insight into the barriers and enablers of physical activity among female Emirati university students. Commonly cited barriers reflected sociocultural norms for women and included low family support, competing time demands due to domestic responsibilities, and lack of women-only facilities near to home. Other common barriers were competing time demands from academic schedules and discomfort associated with the hot weather. Commonly cited enablers were social support from friends and family, the availability of low-cost women-only facilities and opportunities in close proximity to home, and organised physical activity sessions integrated with the academic schedule. Participants described a preference for activities that were fun, structured, led by a coach, and held on campus. Jogging and cardio/aerobics were commonly cited as preferred types of activity. Distinct sociocultural factors were described as influencing physical activity participation. In particular, lack of convenient access to women-only facilities was constantly highlighted. This is consistent with a previous review which indicated the paucity of gender-segregated fitness facilities contributed to low levels of activity among women in Arab countries [26]. Accordingly, having women-only physical activity facilities and clubs in local residential areas was identified as enabling participation. The cost of these facilities is important, with comments that clubs need to be more affordable. However, providing free or subsidized women-only facilities in each residential area would be costly. 
It may be beneficial to conduct environmental analyses to identify underserved areas that could benefit from cost-reduced physical activity facilities; past research has shown that provision of free access to leisure facilities can increase participation in underrepresented groups [28,29]. There could also be opportunities to provide facilities and activity sessions in group residential buildings. This is important, as other participants noted that they were able to do physical activity within, but not outside, the home. This is consistent with previous research with Qatari women, who commented on restrictions on participating in activities outside of the home [30]. Home-based resources, such as instructional materials, equipment and digital applications, may enable physical activity participation for these young women.

Some participants indicated that, within their culture, there was a lack of awareness of the range of benefits of physical activity participation for women other than weight loss. Weight loss as a motivation for exercise is more commonly identified among women than men [31]. However, weight loss may not be salient to all women, and an understanding of the broader range of benefits may motivate participation. The main benefits of physical activity identified by the young women in this study were improved mood, improved health, disease prevention, and stress reduction. Other benefits were improving self-confidence, meeting new people, social interactions, improving focus and concentration for academic work, and reducing academic-related stress. More research is needed to identify which physical activity benefits align with cultural and family values, so that participation is seen as advantageous for those young women for whom weight loss is not a concern. Previous research has demonstrated that health literacy is a consistent predictor of physical activity participation [32], including among female university students in the United Arab Emirates [33], and it may also affect family support for physical activity.

As in previous studies with university students [16-18], lack of time was a commonly identified physical activity barrier. Competing time demands included academic work and family responsibilities, as well as social media use. Academic and family commitments have been commonly identified in previous research [10,19], and our study participants suggested on-campus activities, and physical activity sessions incorporated into the academic schedule, as potential enablers of participation. Time spent using social media may reflect the high availability, accessibility and affordability of smart devices and internet use, as well as the range of popular social media applications, such as Facebook, Instagram and Twitter. Digital media statistics estimate that 99% of the population of the United Arab Emirates are active social media users (UAE Social Media Statistics, 2020). Previous quantitative research with American college students has, however, indicated no association between social media use and physical activity [34]. It may be, therefore, that it is not the time spent on social media, but rather the preference for this as a leisure activity, that constrains physical activity. It is interesting to note that in the current study, use of social media as a source of education and modelling was also identified as a potential enabler of physical activity participation.
Research on physical activity interventions using social media for these purposes among American university students has shown mixed results. One study reported that a social media intervention did not produce greater awareness of physical activity than an education intervention [35]. Another study found that social comparisons via social media were more effective for increasing physical activity than social support [36]. Other research with women provides some promising results when social media is combined with other intervention components. One study with female college freshmen showed that a Facebook social support group improved results from a walking and pedometer self-monitoring intervention [37]. Another study with African American women demonstrated that a Facebook and text message intervention decreased sedentary behaviour and increased self-regulation for activity and light-to-moderate activity [38]. Therefore, more research is needed to understand how social media could be used among Arabic-speaking women to support physical activity participation.

As in other studies [10,39], the hot climate was commonly cited as a barrier to participation; this was particularly salient for outdoor physical activity such as walking, and for women who wore traditional clothing. The temperature in the UAE can reach upwards of 40 degrees Celsius (104 Fahrenheit) in the summer. Past research with women in Qatar demonstrated a significant decrease in physical activity participation (measured by pedometers) in the summer months [40]. However, some participants in the current study expressed an interest in participating in outdoor activities. This interest could be considered when planning activity opportunities, for example, by holding indoor activities in the hotter months and scheduling outdoor activities for early morning or for cooler months of the year.

Participants reported that physical activities should be fun. Given the stressful nature of university life [41,42], students may prefer physical activities that are not evaluative or results-oriented. This may be particularly salient if there is low confidence/competence for physical activity, which is a key (inverse) predictor of participation among women [43]. Fun activities may also be seen as less effortful, competitive, aggressive and skills-based, characteristics which are often linked to the traditional male stereotype. Activities perceived as inconsistent with feminine stereotypes can risk negative judgements among young women [44]. The reported preference for scheduled activities may reflect time constraints associated with academic demands and family responsibilities. Participants also preferred instructor-led activities, which was consistent with the preferred types of physical activity identified (e.g., Zumba, cardio).

A limitation of the current study is that participants were recruited through convenience sampling, and the summary demographics indicate that a high proportion of the participants had excellent/very good health and high life satisfaction, and 60% were categorised within the healthy weight range. Results may have differed if the sample had comprised more women with poor health, low life satisfaction/mood, or high body mass index (BMI), as these concerns are associated with specific barriers to physical activity [45-47]. We did not assess the physical activity levels of participants, so we cannot comment on their experience with physical activity.
We used self-reported weight and height data, which are often associated with underestimation of BMI, in particular among those with high body weight [48]; however, BMI was not a focus of this study. Focus groups were conducted in the English language, and even though the students attended an English-speaking university, this may have led to constrained communication, as English was not the native language of the participants. Focus groups were led by a male researcher, which may have constrained the female participants' disclosure of more sensitive information. The main strength of this study is that it provides a descriptive insight into factors which can constrain or enable physical activity among female university students in an Arabic-speaking country. It builds on previous research with university students, which has also identified enablers and barriers related to social support, convenient facilities, costs, academic time pressures, competing time demands from family/domestic responsibilities, and hot weather. Our research contextualises these factors for this specific demographic group and highlights the importance of sociocultural processes. This evidence can be used to generate hypotheses about behaviour and inform other studies. One imperative for future research is assessment with families to understand how physical activity among young adult women can be valued in the culture. A recent review of published physical activity interventions in the Arabic-speaking region concluded that culture is critical to success [49], and our research suggests some key sociocultural components, such as aligning physical activity benefits and participation with cultural norms and values; use of social media for education and modelling; providing support for home-based exercise; and creating local, affordable, gender-segregated opportunities for physical activity.

Conclusions

The findings of this qualitative study suggest that there are specific sociocultural factors associated with physical activity participation among female Emirati university students. It is important to note that the potential impact of such factors may be moderated by the strength of sociocultural norms, which will differ across people. This evidence can be used to understand patterns of behaviour and inform the development of culturally sensitive interventions to promote physical activity participation. Family support for young women to engage in physical activities will be important across a range of intervention strategies. At the university level, integrating instructor-led physical activity classes, which include social and fun aspects, into the academic curriculum could be considered. Upstream approaches could focus on providing low-cost women-only physical activity opportunities in local residential areas and support for home-based activities. If successful, such strategies could make a significant contribution to the physical and mental health of Emirati women.

Data Availability Statement: Data from the audiotape were transcribed verbatim and entered into Nvivo. The data transcripts were read and analysed by the separate member of the research team. The data were coded and organized into major and minor themes, according to frequency, which were then discussed with the senior author (N.W.B.) to reach consensus.
Correlated flares in models of a magnetized "canopy"

A model of the Lu-Hamilton kind is applied to the study of critical behavior of the magnetized solar atmosphere. The main novelty is that its driving is done via sources undergoing a diffusion. This mimics the effect of a virtual turbulent substrate forcing the system. The system exhibits power-law statistics not only in the size of the flares, but also in the distribution of the waiting times.

I. INTRODUCTION

One of the most interesting properties of spatially extended dynamical systems in nature is that they can exhibit critical behavior. That terminology loosely derives from the theory of phase transitions in equilibrium statistical mechanics, but has come to mean more generally that the system manifests power-law statistics in its characteristic space-time distributions. There may be long-range or long-term correlations, and the structure of fluctuations can have very nonlocal features. There is so far no general theory dealing with critical behavior in general nonequilibrium systems. At present, various schemes are being tried, even ones that seem unrealistic in some respects, in the hope that they retain some general validity when confronted with a wider circle of phenomena. One of the attempts to describe nonequilibrium and dynamical power laws in a more unified way has been called self-organized criticality (SOC) [1,2,3,4]. Normally, SOC may be expected in slowly loaded extended systems with local instabilities evolving according to a threshold dynamics, namely being active only when some level of "stress" exceeds a threshold. Local instabilities may trigger further ones upon relaxing, generating avalanches of relaxation that bring the system from one metastable state to another. The occurrence of scale-free avalanches has been well documented in cellular automata, often named sandpiles, which provide a surprisingly simple way of simulating SOC [1,2,3,4]. The scale-invariant distribution of avalanches in sandpiles is the hallmark of SOC, and it is a robust feature, suggesting that avalanching processes can be valid explanations of several natural scale-free phenomena [2]. For example, one can argue that the power-law distribution of the energies released by earthquakes or by solar emissions is due to the avalanching nature of these processes.

The present paper revisits some earlier attempts to model the critical state of the solar atmosphere via SOC models. The atmosphere of the Sun is very complex and inhomogeneous. Even though there is a growing amount of data concerning solar flare activity, e.g. in [5,6], we still lack detailed information about statistical-topological aspects. The spatial and temporal resolution of the observations is too "rough" for the detection of the small-scale structures of the solar atmosphere participating in the considered processes at the photospheric, chromospheric or coronal levels. Several basic questions have not been answered. For example, the manifest dynamical features of solar activity, and the mechanisms of heating of the outer atmosphere, have not been resolved to a sufficient degree. However, it is believed that the heating and the eruptive phenomena in the solar atmosphere are related to magnetic structures that are constantly being driven and that dissipate via reconnection and wave mechanisms [7] (see also recent new developments reported in [8,9]).
While simplified models must obviously be treated with caution, they can also be welcomed as highlighting single essential features. The relevance of SOC models for the study of the solar atmosphere has been recognized since the pioneering work of Lu and Hamilton [10] (see also [11]). We will refer to it as the LH model. The idea was to develop a cellular automaton model for the solar atmosphere that would realize some of the heuristics and of the ideas stated (i) in [12,13], that solar flares might represent a cascade of smaller events of magnetic reconnection, and (ii) in [14,15], that in coronal heating a large number of small non-thermal events could make a significant contribution. These works initiated the investigation of whether cascades of small-size dissipations of the magnetic field can avalanche into solar flares and support the observed dynamics and heating rate of the solar atmosphere. The LH model indicated that, under certain conditions, for a 3D domain that is slowly "fed" by the magnetic field, the system evolves into a critical state showing power-law statistics in the energy released by avalanches (flares).

While these first attempts in the context of the solar atmosphere had opened the possibility of modeling SOC events under coronal conditions, there were also several significant limitations. As was pointed out in [16,17,18,19], the LH model faced some difficulties. For example, there was a problem with the correct physical interpretation of the applied magnetic field. The latter authors suggested instead to consider a large 2D domain which is uniformly fed by sources of different type and topology. Moreover, it has been emphasized ever since [20] that the LH model and other sandpile models produce time series with exponentially separated events. Hence they do not reproduce real waiting-time probability distributions. This has obviously cast doubt on whether the concept and modeling of SOC is useful at all for studying the dynamical processes in the solar atmosphere.

Recently, it has been suggested in [21] that the basic reason for the unrealistic temporal statistics of some sandpiles is feeding that is uniformly randomized in space and time. Indeed, this feature is likely to be artificial in several contexts. For example, earthquake epicenters are clustered in space and time. Thus, it appears natural that SOC models will not display clustering and correlation of events in time when a spatial randomization of the "epicenters" is forced by the chosen driving. A new sandpile cellular automaton was devised with a more natural feeding mechanism, namely a feeding associated with the position of a random walker, mimicking the spatial correlations of diffusing epicenters. The result was a time series with correlated avalanches [24], in particular with power-law tails in the waiting-time distributions, which also collapse onto a single scaling function when rescaled by the rates of events, as found for earthquakes [25,26] and solar flares [27].

Existing SOC models imply a rather simplified configuration of the magnetic fields and of the external drivers supporting the system. Moreover, the updating mechanisms only qualitatively represent some of the very complex magnetic dynamics. While we continue in the same SOC tradition of mathematical modeling, we add here the novel feature discussed above, namely the diffusing feeding sources.
Thus, the sources are not fixed in space, nor do they jump from one site to another in a completely random way; instead, they perform a random-walk motion, which is the prototype of a correlated evolution in space and time. In the present work we consider a sandpile model of a local area of the solar atmosphere, in which we have a two-dimensional slice through which the perpendicular components of the magnetic field lines contribute to the dynamics. The feeding should be thought of as the complicated result of a turbulent substrate. Therefore, a diffusive feeding seems suited to this problem as well, introducing an additional temporal scale into the system, the rate of source diffusion. The work is organized as follows: in the next section we present the details of our models and the specifics of the simulation code. In Section III we discuss the results of our simulations, after which we conclude.

II. THE MODEL

Our model is similar to the one considered by Lu and Hamilton [10]. The differences are as follows: (i) we take a two-dimensional square lattice of side L. (ii) The feeding is not spatially uniform, but its position is subject to a random walk. (iii) Additionally, after studying the standard case in which the boundary conditions are open and one perturbs the system with a source of given sign, we also investigate the case in which two sources of opposite polarity perturb the system.

The magnetic field h(i) at site i is thought of as orthogonal to the two-dimensional domain. We consider each site in combination with its z = 4 neighboring ones. For the lattice model, the Laplacian of the magnetic field is defined as

    ∆h(i) = h(i) − (1/z) Σ_j h(j),    (1)

where the sum runs over the z nearest neighbors j of site i. This quantifies the local curvature of the field at site i, and if it is too high,

    |∆h(i)| > C,    (2)

(we set the energy scale by choosing C = 1), by definition there is an instability in the local magnetic field. Thus, this approach attempts to model, within a restricted set-up, the dissipation of high electric currents. We do not include e.g. the reconnection mechanism, which would require a more sophisticated modeling.

Instabilities arise in the system because of some feeding mechanism. In a first model (model I), a perturbation chosen in the range [0, δ] is added to a site i+ at each time step. This corresponds to the standard LH perturbation. As in the LH model, we use open boundary conditions. In "model II", on the other hand, we also allow another source to independently add a perturbation chosen in the symmetric range [−δ, 0] at a site i−. As a result, in this case the average field is zero. The double feeding is done simultaneously at each time step. Furthermore, in model II we use periodic boundary conditions.

An instability (2) is resolved by setting h(i) and all its neighbors h(j) equal to the local average field,

    h(i), h(j) → [h(i) + Σ_j h(j)] / (z + 1).    (3)

This redistribution can in turn induce instabilities at neighboring sites, eventually generating avalanches of updates. We chose to carry out the transitions (3) in parallel for all sites i found unstable, as in the LH model, iterating this process until the relaxation is over (when all sites are stable again). The whole set of updates constitutes an avalanche at a given time step, and the total number of local relaxations (3) is the size of the avalanche. In the next time step, with probability p_move, i+ (and, independently, i− in model II) jumps to a nearest neighbor within the L × L square. Note that the LH model would instead pick a new i+ at random from all the sites of the lattice.
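To make the update rules concrete, here is a minimal Python sketch of model I under the assumptions stated above. It is illustrative only: the parameter values are arbitrary, and unstable sites are processed one by one within each sweep rather than strictly in parallel as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L, C, DELTA, P_MOVE, STEPS = 30, 1.0, 1.0, 0.1, 10_000
MOVES = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

h = np.zeros((L + 2, L + 2))      # one-cell zero padding models open boundaries
src = np.array([L // 2, L // 2])  # feeding site i+ (model I has a single source)

def curvature(h):
    """Discrete Laplacian-like curvature, Eq. (1), on the L x L interior."""
    return h[1:-1, 1:-1] - 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                                   + h[1:-1, :-2] + h[1:-1, 2:])

sizes = []                        # avalanche size per feeding step
for _ in range(STEPS):
    # feeding: add a perturbation drawn from [0, DELTA] at the source site
    h[src[0] + 1, src[1] + 1] += rng.uniform(0.0, DELTA)

    size = 0
    while True:                   # relax until every site is stable again
        unstable = np.argwhere(np.abs(curvature(h)) > C) + 1  # padded coords
        if unstable.size == 0:
            break
        for i, j in unstable:     # sequential sweep (the paper updates in parallel)
            nbhd = [(i, j), (i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            avg = sum(h[a, b] for a, b in nbhd) / 5.0  # Eq. (3), z + 1 = 5 cells
            for a, b in nbhd:
                h[a, b] = avg
            size += 1
        h[0, :] = h[-1, :] = h[:, 0] = h[:, -1] = 0.0  # field leaks out at the edges
    sizes.append(size)

    # source diffusion: with probability P_MOVE, hop to a nearest neighbour
    if rng.random() < P_MOVE:
        src = np.clip(src + MOVES[rng.integers(4)], 0, L - 1)
```

The sizes list can then be histogrammed to estimate the avalanche size distribution, and the same skeleton extends to model II by adding a second, negatively feeding walker and switching to periodic boundaries.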
In real observations it is not simple to define clearly the time and the amplitude of a solar flare; for our model this amounts to choosing a reasonable convention. As a measure of the strength of an avalanche we use its size, but we checked that on average the size scales linearly with the released magnetic energy, i.e. the sum of all releases of magnetic energy by the local relaxations of an avalanche. In order to define waiting times between avalanches, it is important first to fix the scale of the events to be studied [28]. One can say that events with size $< s_{\min}$ compose quiet periods where no avalanches are "detected", and waiting times $t_w$ between avalanches can then be defined as the number of seed additions between two avalanches. (One should write $t_w^{s_{\min}}$, with an index recalling that a threshold $s_{\min}$ is used, but it is omitted for simplicity.) The introduction of many thresholds $s_{\min}$ allows for a more detailed analysis of the time series of events, eventually leading to the discovery of scale-invariant properties [25,26,27].
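Under this convention, extracting waiting times from a record of one avalanche size per driving step is a few lines of code; the sketch below assumes such a record (e.g. the `sizes` list from the sketch above) and uses illustrative thresholds.

```python
import numpy as np

def waiting_times(sizes, s_min):
    # Driving (seed-addition) steps at which an avalanche of size >= s_min
    # occurred; waiting times are the gaps between consecutive such steps.
    steps = np.flatnonzero(np.asarray(sizes) >= s_min)
    return np.diff(steps)

# Scanning several thresholds, as in Figs. 2-3, probes how the
# waiting-time statistics depend on the event scale.
for s_min in (30, 300, 3000):
    tw = waiting_times(sizes, s_min)
    print(s_min, tw.mean() if tw.size else "no events")
```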
III. RESULTS

We have two characteristic scales in the driving mechanism. The first sets the typical scale of the perturbation, as given by δ; in this study we set δ = 1. The other scale is diffusive and comes with the mobility $p_{\mathrm{move}}$ (diffusion constant) of the feeding sources. Our results will depend on this parameter.

A. Model I

We first discuss what we obtain with model I. It is not our purpose to show a comprehensive spectrum of results for many choices of the parameters; we present only the results that we believe are the most interesting for our discussion. First, in Fig. 1 we show the size distributions $P_L(s)$ for $p_{\mathrm{move}} = 10^{-1}$ and several values of L. These distributions are power laws cut off at a size $\sim L^{\nu}$ with ν = 3, and the good data collapse in the inset of Fig. 1 shows that an exponent γ ≃ 1.34 can be used to describe the size distribution with a scaling form that is typical of SOC systems. In particular, one extrapolates a diverging power-law range for L → ∞.

In Fig. 2 we show the distributions of waiting times, $P(t_w)$, between events with size larger than $s_{\min} = 3000$, for a lattice with side L = 100 and three values of the source moving rate: $p_{\mathrm{move}} = 1$ (filled squares connected with dotted lines), $p_{\mathrm{move}} = 10^{-1}$ (empty squares connected with dotted lines), and $p_{\mathrm{move}} = 10^{-2}$ (crosses connected with dotted lines). For comparison we also plot the distributions corresponding to random uniform feeding, which are almost exponential, as previously found [20]. The other curves instead exhibit a power-law tail that becomes wider for decreasing $p_{\mathrm{move}}$. This behavior could be expected, as a small $p_{\mathrm{move}}$ implies a slower diffusion of the sources and thus a higher spatial correlation between the sites where avalanches occur.

In Fig. 3 we see the same distribution functions for a fixed value of the source moving rate, $p_{\mathrm{move}} = 10^{-1}$, and different thresholds; these simulations are done on a lattice of size L = 150. The absolute value of the exponent of the power law decreases as the threshold $s_{\min}$ grows, and power laws with negative exponents that are smaller in absolute value of course decay more slowly. Thus, in this model, correlations between events detected using larger thresholds are qualitatively different from those between small events, as they decay more slowly with the waiting time. Fig. 4 shows the same distributions rescaled by the mean waiting time. It is evident that the distribution curves do not collapse onto a power law with a single exponent, reflecting the fact that the exponent values depend on the thresholds. These results have a twofold meaning. On the one hand, they show again that correlated avalanches can be found in SOC models. On the other hand, the recent results on the data collapse of the waiting-time distributions of flares [27] cannot be reproduced, which means that the model is missing some important feature. This should not be surprising, as our model is an extremely oversimplified system. It is also fair to say that the analysis of results via rescaling of the waiting-time distributions is a test that has not yet been used to analyze data from models outside the SOC domain.
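The collapse test itself is straightforward to reproduce; a hedged sketch (bin count and names are illustrative) is:

```python
import numpy as np

def rescaled_pdf(tw, bins=40):
    # Rescale waiting times by their mean and estimate the pdf on
    # logarithmic bins, which suit power-law tails. If the curves for
    # different s_min collapse onto one function after this rescaling,
    # the waiting-time statistics are scale-invariant; in model I they
    # do not.
    x = np.asarray(tw, dtype=float) / np.mean(tw)
    edges = np.logspace(np.log10(x.min()), np.log10(x.max()), bins)
    pdf, _ = np.histogram(x, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    return centers, pdf
```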
B. Model II

Model II has two sources of opposite polarity that execute a random walk, as the result of an idealized complex convective motion of an underlying turbulent atmosphere. While model I displays the usual profiles of the field h (not shown), with a maximum at the center of the lattice, far more complex configurations can be observed in model II. Two typical configurations of the system, one for L = 100 and one for L = 300 (both with $p_{\mathrm{move}} = 10^{-2}$), are shown in Fig. 5. One can see that a non-trivial field landscape arises from the complex interplay between the relaxation dynamics and the diffusion of the two sources of perturbation. In Fig. 6 one can appreciate that there is also a non-trivial relation between the field h (top panel) and its curvature field Δh (bottom panel).

It is possible that the system dynamics generates a length scale corresponding, e.g., to the average distance between local minima and local maxima. The comparison between the two configurations in Fig. 5, L = 100 and L = 300, seems to confirm this hypothesis, as the typical size of the white and black areas in the two "magnetograms" is similar. Such a length scale contrasts with the scale-free nature of SOC, as the lattice size L does; a new length scale in the model, in addition to the lattice size, could prevent the model from becoming asymptotically critical for L → ∞. This scenario is supported quantitatively by Fig. 7, in which the distribution of avalanche sizes, while displaying the usual power-law range of SOC models, is cut off at a size that essentially does not scale with L. For practical purposes this is not a problem (we have a distribution with a power-law range spanning several decades, like experimental ones), but the mathematical assessment of criticality in SOC models would require this additional length scale to diverge as well. This might be achieved by letting $p_{\mathrm{move}} \to 0$, as suggested by Fig. 8, in which we see that the size distribution develops a wider power-law range for decreasing $p_{\mathrm{move}}$. (Similar results are found in several SOC models when some dissipation is introduced; in our model the mobility of the sources might play a role similar to the rate of dissipation during a toppling in, e.g., the Abelian sandpile.) Unfortunately, the simulation of systems with a low $p_{\mathrm{move}}$ is much more time-consuming: it becomes difficult to achieve correct sampling, because it takes too long for the drivers to span a significant fraction of the lattice sites. Thus, we cannot make any precise statement concerning this limit. Model II shows distributions of waiting times with the same features that we found in model I (see Fig. 9): in particular, the power-law tails of the distributions have different exponents, and hence the curves would not collapse upon rescaling of the times.

IV. CONCLUSIONS

The magnetized atmosphere of the Sun is represented in our model as a magnetic "canopy": the magnetic field ideally enters perpendicular to a two-dimensional domain, with sources of perturbation undergoing diffusion and producing local instabilities whenever the curvature of the magnetic field is too high. These instabilities are flattened out by a dynamics producing avalanches of energy release that we interpret as flares. In a first model, we performed numerical simulations with perturbation and boundary conditions typical of the LH-model, and we measured the sizes of the avalanches and the times between avalanches larger than a given threshold. The analysis has shown that the diffusive character of the feeding sources is related to the power-law tail in the waiting-time distributions between avalanches. The power-law exponents of these tails depend on the value of the diffusion rate $p_{\mathrm{move}}$ and on the thresholds in the size of the avalanches. Thus, they are non-universal, and the avalanche time series at a given scale is quantitatively and also qualitatively different from the same time series at another scale, at variance with real time series [27].

With two sources pumping perturbations of opposite polarity we obtain a second model with novel features, such as typical configurations with a complex field landscape (magnetograms), with patches of positive and negative curvature that are non-trivially related to the corresponding magnetic field. When the sources move slowly enough, this system also reaches a critical regime with power-law distributions for the avalanche sizes and waiting times. This behavior is achieved even without a mechanism of magnetic reconnection and without open boundary conditions. A length scale independent of the system size but dependent on the source diffusion rate is present in this case, which prevents the system from approaching a pure SOC state for larger and larger lattices.

In order to develop a more detailed model that could be compared directly with observations, one should add more and different types of energy-release mechanisms, for example those introduced in [29,30]. This would ultimately amount to a systematic study of the processes of transformation and redistribution of the magnetic energy. The results here already reveal that temporal correlations in the energy accumulation and release processes can be expected from a SOC model that realizes and incorporates the clustering in space and time of the active areas of the system.
2007-09-13T12:04:23.000Z
2007-09-13T00:00:00.000
{ "year": 2007, "sha1": "3cdc84aba59e761a4cac3719cb711f44e8200d0c", "oa_license": null, "oa_url": "https://lirias.kuleuven.be/bitstream/123456789/222134/1/Physicaa2008.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3cdc84aba59e761a4cac3719cb711f44e8200d0c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
15973220
pes2o/s2orc
v3-fos-license
Effect of Low Perceived Social Support on Health Outcomes in Young Patients With Acute Myocardial Infarction: Results From the VIRGO (Variation in Recovery: Role of Gender on Outcomes of Young AMI Patients) Study

Background Social support is an important predictor of health outcomes after acute myocardial infarction (AMI), but social support varies by sex and age. Differences in social support could account for sex differences in outcomes of young patients with AMI. Methods and Results Data from the Variation in Recovery: Role of Gender on Outcomes of Young AMI Patients (VIRGO) study, an observational study of AMI patients aged ≤55 years in the United States and Spain, were used for this study. Patients were categorized as having low versus moderate/high perceived social support using the ENRICHD Social Support Inventory. Outcomes included health status (Short Form-12 physical and mental component scores), depressive symptoms (Patient Health Questionnaire), and angina-related quality of life (Seattle Angina Questionnaire) evaluated at baseline and 12 months. Among 3432 patients, 21.2% were classified as having low social support. Men and women had comparable levels of social support at baseline. On average, patients with low social support reported lower functional status and quality of life and more depressive symptoms at baseline and 12 months post-AMI. After multivariable adjustment, including baseline health status, low social support was associated with lower mental functioning, lower quality of life, and more depressive symptoms at 12 months (all P<0.001). The relationship between low social support and worse physical functioning was nonsignificant after adjustment (P=0.6). No interactions were observed between social support, sex, or country. Conclusion Lower social support is associated with worse health status and more depressive symptoms 12 months after AMI in both young men and women. Sex did not modify the effect of social support.

Social support is an important predictor of prognosis after acute myocardial infarction (AMI) in older populations, with numerous studies finding that patients with low perceived social support have worse outcomes after AMI, including higher mortality, 1-4 more cardiac events, 5,6 and reduced health status. 7-9 In fact, social support has been shown to be equivalent to many classic risk factors predicting prognosis after AMI, 10 highlighting its utility as both a tool for risk stratification and a potential target for interventions to improve post-AMI outcomes. Although the literature on social support and cardiovascular (CV) outcomes is abundant, most has been conducted in populations of predominantly older men. Relatively little is known about the role of social support in younger patients, particularly women. Compared with elderly patients, young patients with AMI are in an entirely different stage of life, with different social connections and support structures. Research in the general population has shown that whereas older individuals are more likely to rely on their immediate family for help, young people tend to include fewer family members and more friends and coworkers in their support networks. 11-13 In addition, young people may experience more stress from work, raising a family, or social obligations, which may compromise their established support structures. 14 In fact, studies have consistently shown that younger people require larger social networks than older people to maintain a sense of well-being.
15 Collectively, these observations suggest that the quantity and function of social support varies across the lifespan, which may limit the generalizability of findings in older AMI populations. Additionally, young women with AMI may represent a group at particularly high risk of low social support. Although population-based studies have found that both receiving and giving support decline as age increases, 13,16,17 reports in the cardiac literature have generally shown lower levels of social support in young patients after AMI. 3,7 However, in all of these studies, the average age of patients was still >60 years. Almost nothing is known about the magnitude of social support in younger patients (<55 years old) and whether the associations found in older populations translate to their younger counterparts. Important gender differences in social support have also been noted at the time of AMI. Whereas studies in the general population report larger and more varied social networks in women than men, 18,19 nearly all studies in cardiac populations have noted lower support in women across the age spectrum. 20,21 Researchers have hypothesized that these gender differences may be the result of women's roles as the primary caretakers, prompting them to minimize the impact of their disease in order to avoid burdening others. 20 In addition, women may receive less information about their cardiac disease and experience a lack of belief in their heart problems from providers. 20 Thus, young women may be at increased risk of low social support both at the time of AMI and during the course of recovery, which may place them at higher risk of adverse outcomes. The Variation in Recovery: Role of Gender on Outcomes of Young AMI Patients (VIRGO) study provides a unique opportunity to examine social support in young women with AMI. This prospective, multicenter study contains detailed information on patients' sociodemographic and psychosocial characteristics as well as data on mental health, depressive symptomatology, and quality of life during follow-up. Whereas previous studies have focused primarily on mortality and physical functioning, VIRGO allows for the investigation of both the physical and mental health consequences of low social support after AMI. We sought to characterize gender differences in the distribution of social support after AMI in young patients and the association of low social support with short-term health outcomes, including health status, depression, and disease-specific quality of life.

Patient Population

VIRGO is a prospective, observational study designed to examine presentation, treatment, and outcomes of young patients with AMI. The methods of this study have been described previously. 22 In brief, between August 2008 and May 2012, patients 18 to 55 years of age were recruited into the VIRGO study from 103 U.S. and 24 Spanish hospitals. Of the 5585 patients eligible for the VIRGO study in the United States, Spain, and Australia, 3752 were enrolled. Given the small number of patients enrolled in Australia, we limited the analyses to only patients enrolled in the United States and Spain (n=3501). The diagnosis of AMI was confirmed by the presence of elevated cardiac enzymes (troponin or creatine kinase) and supporting evidence of myocardial ischemia, including at least one of the following: symptoms of ischemia; ECG changes suggestive of new ischemia; or other evidence of myocardial necrosis on imaging.
Patients transferred from other institutions >24 hours after symptom onset and patients with elevated cardiac markers as a complication of elective coronary revascularization were not eligible for inclusion. We also excluded patients with missing social support data (ENRICHD Social Support Instrument; ESSI) at baseline (n=69, 2% of patients). Patients missing ESSI data were less likely to be white (63.8% vs. 78.8%; P=0.003) and more likely to report financial instability (48.1% vs. 32.5%; P=0.018). No differences in gender, age, or baseline or 12-month health status were observed between those with and without recorded ESSI scores. The final study cohort included 3432 patients. Information on patient demographics, clinical presentation, and treatment was collected by medical chart abstraction and standardized in-person interviews administered by trained personnel during the index AMI admission. Study outcomes (mortality and health status) were assessed through follow-up telephone interviews at 1 and 12 months, administered by the Yale Follow-Up Center in the United States and by ANAGRAM in Spain. Institutional review board approval was obtained at each participating center, and all patients provided written informed consent.

Variable Definitions

Perceived social support was measured during the index hospitalization using the ESSI. This scale is a reliable and valid assessment of social support in cardiac populations 23,24 and has been used in several studies to evaluate social support after AMI. 2,3,7,8 The full-length ESSI is a 7-item self-report survey that assesses 4 domains of social support: emotional, instrumental, informational, and appraisal. For this particular study, we examined marital status and instrumental support separately from perceived social support and thus omitted them from the overall ESSI assessment (items 4 and 7). The remaining 5 items (1, 2, 3, 5, and 6) were summed to create a total score ranging from 5 to 25, with higher scores indicating greater perceived social support. This 5-item scale has been previously validated and is highly correlated with the full-length 7-item scale. 23 It has also been used in previous studies of patients with coronary artery disease (CAD). 7,25,26 Using standard criteria, we defined low social support as a score ≤3 on at least 2 items and a total score of ≤18.
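As a concrete illustration, the scoring rule just described fits in a few lines; the sketch below assumes item-level columns named essi_1 through essi_6 (hypothetical names) in a pandas data frame.

```python
import pandas as pd

# 5-item ESSI (items 4 and 7 omitted, as described above): items are
# scored 1-5, so totals range 5-25; "low social support" means a total
# <= 18 AND at least 2 items scored <= 3. Column names are assumptions.
ESSI_ITEMS = ["essi_1", "essi_2", "essi_3", "essi_5", "essi_6"]

def low_social_support(df: pd.DataFrame) -> pd.Series:
    items = df[ESSI_ITEMS]
    total = items.sum(axis=1)
    n_low_items = (items <= 3).sum(axis=1)
    return (total <= 18) & (n_low_items >= 2)
```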
Outcomes after AMI included mortality, health status, quality of life, and depressive symptoms at 1 and 12 months. Health status was evaluated using the Short Form-12 (SF-12) physical and mental component scores (PCS and MCS), administered during the index hospitalization and at 1 and 12 months post-AMI. The SF-12 has been demonstrated to be both a valid and reliable instrument and is the most widely used generic health status instrument to quantify patients' mental and physical functional status. 27 Scores for the PCS and MCS range from 0 to 100, with lower numbers indicating poorer health status. On both, a score of 50 reflects the population mean, and 10 points reflect 1 standard deviation from the mean. Disease-specific quality of life was evaluated using the Seattle Angina Questionnaire (SAQ-QoL), a 19-item self-administered questionnaire that measures 5 dimensions of CAD. 28 This measure has been shown to be both valid and reliable in patients with AMI and has been used extensively in cardiovascular research. 29,30 For this study, we focused on the quality-of-life component, which ranges from 0 to 100, with lower numbers indicating poorer quality of life. Finally, depressive symptoms were assessed using the 9-item Patient Health Questionnaire (PHQ-9). 31,32 Scores range from 0 to 27; higher scores represent greater depressive symptomatology, and a score of ≥10 is suggestive of moderate depressive symptoms. 31

Statistical Analyses

We compared ESSI scores and the percentage of patients with low social support at baseline between men and women using Wilcoxon's rank-sum tests and chi-squared tests, overall and by country. Baseline characteristics of patients with low social support were compared with those of patients with moderate or high social support using chi-squared tests for categorical variables and Wilcoxon's rank-sum tests for continuous variables. Unadjusted associations between low social support and 1- and 12-month outcomes after AMI were evaluated visually, by plotting mean health status over time, and statistically, using chi-squared tests for mortality and Student t tests for SF-12, SAQ-QoL, and PHQ-9 scores. To assess the independent relationship between low social support and 12-month outcomes, we used linear regression to evaluate differences in 12-month SF-12, SAQ-QoL, and PHQ-9 scores between social support groups while adjusting for patient characteristics. Given the low mortality rate in our sample (2% overall), we did not evaluate mortality in multivariable models. Potential covariates for the multivariable analyses were selected using a combination of clinical and statistical judgment. These included patient demographics (gender, age, race, marital status, living alone, education, employment, financial solvency [defined as the ability to make ends meet each month], and insurance status), medical history (hypertension, diabetes, previous coronary disease, smoking status, alcohol abuse, and depression), clinical presentation (GRACE [Global Registry of Acute Coronary Events] score, presence of ST-elevation AMI), and treatment (reperfusion therapy and cardiac rehabilitation referral). Baseline scores for the health status measure being analyzed were included in the model in order to examine the effect of social support on 12-month health status independent of differences in baseline scores. A backwards-elimination strategy was used to identify the most parsimonious model for each outcome. Specifically, we screened all available variables for those that we thought could be associated with both social support and health outcomes, based on previous reports, face validity, and clinical judgment. Nineteen candidate variables were identified, which we included in the initial model and then removed sequentially from least to most significant. Changes in the likelihood ratio and other parameter estimates were evaluated, and variables were retained if they were significant (P<0.05) in the model or with likelihood-ratio testing. Because social support status was the primary variable of interest, baseline and 12-month social support were retained in all models regardless of significance. In addition, we tested interactions between gender and low social support in each of the adjusted models. Finally, given the observed differences in ESSI scores between Spanish and U.S. patients at baseline, we repeated all analyses stratified by country to determine whether the relationship between low social support and 12-month health status differed for Spanish and U.S. patients, and we formally evaluated the interaction between country and low social support in each of the adjusted models.
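A hedged sketch of the backwards-elimination loop described above, using ordinary least squares and likelihood-ratio tests, is given below. The data frame, outcome name, candidate list, and retention threshold are illustrative assumptions; as in the text, the forced variables (the social support indicators) are never dropped.

```python
import statsmodels.formula.api as smf
from scipy import stats

def backward_eliminate(df, outcome, forced, candidates, alpha=0.05):
    # Start from the full model; repeatedly refit and drop the least
    # significant candidate (by likelihood-ratio test) until every
    # remaining candidate is significant at level alpha.
    kept = list(candidates)
    while kept:
        full = smf.ols(f"{outcome} ~ {' + '.join(forced + kept)}", df).fit()
        worst_p, worst_var = -1.0, None
        for var in kept:
            rest = [v for v in kept if v != var]
            reduced = smf.ols(
                f"{outcome} ~ {' + '.join(forced + rest)}", df).fit()
            lr = 2 * (full.llf - reduced.llf)
            p = stats.chi2.sf(lr, df=full.df_model - reduced.df_model)
            if p > worst_p:
                worst_p, worst_var = p, var
        if worst_p > alpha:
            kept.remove(worst_var)       # drop the least significant variable
        else:
            break
    return kept
```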
Missing covariate data were minimal: 14.5% of patients were missing any covariate data (12.9% missing 1 covariate and 1.6% missing >1 covariate). Missing covariates were imputed using a multiple imputation approach in SAS (SAS Institute Inc., Cary, NC), which allowed incorporation of all patients into the multivariable models. At 12 months, 799 (23.3%) participants were missing information on at least 1 health status measure, of whom 716 were missing all 4 scores. To examine whether missing data affected our results, we performed a sensitivity analysis by imputing missing health status measures. The multiple imputation models contained all variables used in the multivariable model in addition to other variables that provided information for the imputation (e.g., 1-month health status scores to impute 12-month scores). Deceased patients (n=83) were excluded from the sensitivity analyses. All statistical analyses were conducted with SAS 9.2.
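The original imputation was done in SAS; purely as an illustration of the idea, a rough Python analogue with chained equations might look like this (five imputed datasets and the pooling step are assumptions, not the study's exact procedure).

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def multiply_impute(X, m=5):
    # Draw m completed copies of the covariate matrix X (np.nan = missing).
    # sample_posterior=True makes each draw stochastic, as multiple
    # imputation requires; estimates fit on the m copies would then be
    # pooled, e.g. by Rubin's rules.
    return [IterativeImputer(sample_posterior=True,
                             random_state=k).fit_transform(X)
            for k in range(m)]
```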
Results

Of the 3432 patients included in this study, 728 (21.2%) were classified as having low social support using the 5-item ESSI. Fewer Spanish patients were classified as having low social support than U.S. patients (17.6% vs. 22%; P=0.031). No gender differences in the distribution of ESSI scores or in the percentage of patients with low social support were observed in the overall cohort; however, Spanish men were less likely than Spanish women to be classified as having low social support (Table 1). Because there were no observed differences in social support between men and women overall, we chose to model the entire cohort as a whole rather than stratifying by gender. Patients with low social support were more likely to be single, to live alone, and to be unemployed, as compared with patients with moderate/high social support (Table 2). In addition, they were more likely to have cardiovascular risk (CVR) factors, including hypertension, diabetes, and depression, and to smoke or abuse alcohol. No differences in clinical presentation or rates of revascularization were observed between social support levels. During the initial hospitalization, patients with low social support reported lower functional status and quality-of-life scores and more depressive symptomatology, on average, than patients with moderate/high social support (Table 3). The 1- and 12-month scores presented in Table 3 are adjusted for baseline health status; differences between social support groups therefore represent the absolute differences in 1- or 12-month health status that remain after adjustment for differences in baseline health status. These differences in physical and mental health status persisted at 1 and 12 months after AMI. Although mean health status, quality of life, and depression scores improved in all patients over the 12 months of follow-up regardless of social support status, patients with low social support reported poorer health status, lower quality of life, and more depressive symptoms at all time points than their counterparts with moderate/high support (Figure). Crude mortality at 1 and 12 months was very low in this cohort of young patients (≈2% overall) and did not differ by social support status. In risk-adjusted models, patients with low social support continued to have lower mean mental functioning scores, lower quality of life, and more depressive symptoms at 12 months (all P<0.01). In contrast, mean physical functioning scores were comparable between groups (P=0.6; Table 4). No interactions between female gender and low social support were observed in any of the models (all P>0.1). Given differences in baseline social support between the United States and Spain, we examined interactions between country and low social support and repeated the analyses stratified by country to determine whether health outcomes in Spanish patients with low social support were different from those of U.S. patients. Interactions between country and social support were nonsignificant in all models (all P>0.1). In the United States, low social support was associated with lower functional status and quality-of-life scores and more depressive symptoms at baseline, but in Spain, only mental health status and depressive symptoms differed by social support at baseline (Table 3). After adjustment for demographic and clinical factors, low social support was not significantly associated with 12-month SF-12 MCS, PHQ-9, or SAQ-QoL scores in the Spanish cohort; however, the magnitude and directionality of these adjusted associations were similar for Spanish and U.S. patients (Table 5). In contrast, the relationship between low social support and the SF-12 PCS differed between countries. In the United States, 12-month SF-12 PCS scores were similar between social support groups after adjustment for patient characteristics (P=0.9), whereas in Spain, low social support predicted significantly lower physical functioning at 12 months even after multivariable adjustment (Tables 6 and 7).

Discussion

In this study of young women and men with AMI, patients with low social support presented with poorer mental health functioning and more depressive symptoms at the time of AMI than patients with moderate/high social support. These differences across social support groups persisted at 12 months following AMI, resulting in poorer 12-month mental health and quality-of-life outcomes in patients with low social support. No differences in physical functioning at 12 months were observed by social support in the overall population after adjustment for patient demographics. Although female gender was independently associated with lower health status, lower quality of life, and more depressive symptoms at 12 months, the association between social support and health outcomes did not differ by gender. Collectively, our results suggest that young patients with low social support have poorer mental health functioning and more depressive symptoms at the time of AMI, which may place them at higher risk of poorer mental health outcomes over the year following the AMI. However, social support does not explain young women's poorer baseline or 12-month health status compared with men. It is important to note that although there were significant differences in mental functioning, depressive symptoms, and quality of life between patients with low and moderate/high social support, the absolute magnitude of these differences was relatively small. There are no published criteria for comparing health status scores between 2 distinct populations as we have done in this study; however, there are established values for assessing clinically important differences within patients over time. In general, a change of ≥5 to 15 points on either the SF-12 physical or mental component scores, ≥5 to 10 on the SAQ, and ≥5 on the PHQ-9 is considered a clinically meaningful change within a single patient, indicating improvement or worsening of health status.
Although these criteria are not directly applicable to our study, they suggest that the differences in health status between social support groups observed in our study are small and may not be clinically meaningful. Nevertheless, the comparisons reported in this study are overall mean differences, and there is a wide distribution around these means, with some patients having markedly worse health status, particularly in the low social support group. Additionally, our findings were consistent across all mental health assessments and all time points. These observations suggest that, regardless of the absolute magnitude of the difference in scores, patients with low social support appear to be at increased risk of poorer mental health status outcomes after AMI. These findings are consistent with studies in older populations that have examined the role of social support in health outcomes in cardiac populations. 7,8,25,33,34 Using data from the Prospective Registry Evaluating Myocardial Infarction: Events and Recovery (PREMIER) cohort study of patients with AMI, Leifheit-Limson et al. showed that patients with low social support had lower mental functioning and more depressive symptoms at 12 months than patients with high social support, but physical functioning was similar across social support levels. 7 Similarly, Barry et al. found that among patients undergoing coronary artery bypass grafting, increased instrumental support was associated with larger increases in mental health, but not physical functioning, at 6 months. 25 As with our study, both Leifheit-Limson et al. and Barry et al. found differences in mental, but not physical, functioning by social support after adjustment for other patient characteristics. Unlike previous studies, however, we found no gender differences in social support at baseline or in the effect of social support on health outcomes among U.S. patients. 20 Indeed, the young women and men in our study had nearly identical distributions of ESSI scores at the time of AMI, which were largely clustered at the high end of the social support spectrum. This observation suggests that gender differences in social support may be less pronounced among younger patients than among older patients. Several studies have also noted significant interactions between social support and gender, whereby the relationship between social support and outcomes after cardiac events is stronger for women than for men. 7,35-37 However, in our study of younger AMI patients, we did not observe any interactions between gender and social support for any of the health status measures. There are several potential explanations for this observation. First, it is possible that the similar, narrow distributions of social support scores for men and women in our study precluded us from finding a differential effect by gender. Alternatively, because younger patients have lower social support needs relative to older patients, gender differences in the relationship between low social support and health outcomes after AMI may be less pronounced. Finally, it is possible that gender acts as an effect modifier for only certain types of social support. Studies in older populations have suggested that tangible and informational support from family and friends generally increases with advancing age, but emotional support does not. 13 Whereas older individuals tend to receive more instrumental support, younger persons generally have higher levels of emotional support.
38 Thus, we can hypothesize that interactions between gender and social support may only occur with certain subtypes of support. Although we did not observe differences in social support by gender, we did find interesting differences between the United States and Spain. Among U.S. patients, low social support was associated with poorer mental health and disease-related quality of life; however, it was not associated with physical functioning at 12 months post-AMI. The reverse was true in the Spanish cohort; low social support was associated with worse physical, but not mental, health status. It is important to note, however, that although low social support was not significantly associated with mental health, depression, and disease-related quality of life in the Spanish cohort, the magnitude and directionality of these associations were similar in Spanish and U.S. patients, suggesting that we may have been underpowered to detect an effect within the Spanish cohort. These country-specific results likely stem from differences in household structures and family ties between the United States and Spain. The sociology literature has long recognized differences between Europe and the United States with regard to family ties. 39 Compared with families in the United States, Spanish families are characterized by lower divorce rates and larger household sizes because children tend to leave home at an older age. 40,41 In addition, there are strong cultural norms relevant to family responsibilities and obligations in Spain that make coresidency of older people with children more common. 39,42,43 In fact, we noticed marked differences between the United States and Spain in marital status and living arrangements among patients with low social support. Compared with Spain, a greater percentage of patients with low social support in the United States were single (60.9% vs. 39.5%) and lived alone (20.7% vs. 13.6%). This suggests that social support structures likely differ between the 2 countries, which may affect the relationship between social support and health outcomes. Further research is needed to elucidate why these international differences exist and how to develop country-specific interventions that address them. Although the mechanisms by which low social support negatively affects patient outcomes remain unclear, numerous psychological, behavioral, and physiological theories have been proposed. 2,44 These range from poor self-care and negative health behaviors to increased financial strain and elevated stress responses. Indeed, we found that patients with low social support had a higher prevalence of all CVR factors and more financial instability than patients with moderate/high social support; however, the effect of social support on health status persisted after adjustment for these factors. Depression also plays an intimate role in the relationship between social support and outcomes after AMI. In our sample, patients with low social support had higher rates of depression and more depressive symptoms at all time points during follow-up, and depression was strongly associated with poorer functional status at 12 months. Although we hypothesized that low social support leads to poorer mental health and quality of life after AMI, the reverse may also be true. It is possible that poorer mental health may lead to lower social support through depression and social isolation, or that depression augments or modifies the effect of social support on health outcomes. 
45 However, we found that the association between social support and poorer 12-month health status persisted even when the analyses were limited to patients without depression at baseline (Table 8). Finally, it is worth commenting on the absence of an association between social support and clinical presentation or treatment. In our study, we found no difference in time to presentation, severity of AMI, reperfusion rates, or receipt of in-hospital quality measures. This suggests that much of the association between low social support and negative health outcomes occurs outside of the index hospitalization, either before admission or during follow-up. Our study has several limitations that should be considered when interpreting these results. First, we examined only perceived social support. Although perceived support may be subject to different interpretations by patients, previous studies have hypothesized that perceived support is more beneficial for those who receive support in times of stress, including illnesses such as AMI. 46,47 Nevertheless, received support may play an important role during follow-up in determining long-term health outcomes after AMI. Second, we evaluated a summary estimate of social support rather than individual components of social support, such as emotional, instrumental, or informational support. Thus, we were unable to assess whether patient outcomes varied by type of social support. Third, we used social support measured only at the index hospitalization and thus were unable to characterize changes in social support during follow-up. Results from PREMIER showed that changes in social support during early AMI recovery were not uncommon and were also important for predicting outcomes in elderly patients. 48 Given the differences in patient ages between the VIRGO and PREMIER studies, however, it is unclear whether changes in social support after AMI are as common in young patients. Fourth, there was a shift in the interview mode from in-person interviews at baseline to telephone interviews during follow-up. Although this change in interview mode may have influenced patient responses to questions, trained interviewers administered all interviews, and interview modes were consistent across all patients at each time point; any changes in patients' responses resulting from the interview mode should therefore be the same for all patients regardless of social support status. Finally, it is possible that patients with low social support tended to report poorer health status as a result of a response shift rather than a causal association between these characteristics. Such response shifts may occur if patients with low social support have different internal standards or conceptualizations of health status than patients with moderate/high social support. Nevertheless, these scores still reflect patients' perceptions of their own health and quality of life, which are important outcomes in their own right. Regardless of whether differences in these patient-reported outcomes translate into objective differences in health by social support, they warrant attention from physicians seeking to improve mental health and quality of life in patients with low social support. In summary, we found that among young patients with AMI, those with low social support had poorer mental health status, poorer quality of life, and more depressive symptoms 12 months after the event. This effect was independent of other demographic and clinical factors and was comparable for men and women.
These findings are most relevant for risk stratification and identifying patients who could benefit from additional support posthospitalization. Future studies should aim to understand the mechanisms underlying the relationship between low social support and poorer mental health outcomes after AMI and to evaluate potential interventions for reducing this risk. Given the low mortality rate in young patients with AMI, it is important to focus on outcomes such as health status, depression, and quality of life when designing interventions for patients with low social support.
2018-04-03T02:26:36.664Z
2014-09-30T00:00:00.000
{ "year": 2014, "sha1": "7e0c2701fcd63dc5507c434a32c861990905a41b", "oa_license": "CCBYNC", "oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.114.001252", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7e0c2701fcd63dc5507c434a32c861990905a41b", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
215749084
pes2o/s2orc
v3-fos-license
Semiparametric integrative interaction analysis for non-small-cell lung cancer

In genomic analysis, it is important yet challenging to identify markers associated with cancer outcomes or phenotypes. Motivated by the biological mechanisms of cancers and the characteristics of the available datasets, this paper proposes a novel integrative interaction approach under a semiparametric model, in which the genetic factors and environmental factors are included as the parametric and nonparametric components, respectively. The goal of this approach is to identify the genetic factors and gene-gene interactions associated with cancer outcomes and, at the same time, to estimate the nonlinear effects of environmental factors. The proposed approach is based on the threshold gradient directed regularization (TGDR) technique. Simulation studies indicate that the proposed approach outperforms the alternatives in the identification of main effects and interactions and has favorable estimation and prediction accuracy. An analysis of non-small-cell lung carcinoma (NSCLC) datasets from The Cancer Genome Atlas (TCGA) is conducted, showing that the proposed approach can identify markers with important implications and has favorable performance in prediction accuracy, identification stability, and computational cost.

With early integration, multiple datasets are combined and analysed as a whole. However, because one gene can have different effects on different cancer subtypes, the early integration approach neglects the intrinsic heterogeneity of the two subtypes, leading to low reliability of the analysis results. With late integration, i.e. meta-analysis, each dataset is analysed independently, and the results are then combined across datasets. However, both the LUAD and LUSC datasets have 'small n, large p' characteristics (p ≈ 18,000 and n ≈ 200), leading to unsatisfactory results for each individual dataset, and hence for the overall meta-analysis. 6 The third category is intermediate integration, which is often called 'integrative analysis' in biostatistics. Integrative analysis aims to preserve the structure of multiple datasets and only merges them during the modelling process; it has been shown to outperform other multi-dataset integration methods. 7,8 In this paper, we adopt the integrative analysis approach to handle multiple datasets. Within an integrative analysis framework, multiple datasets can be described using a homogeneity structure or a heterogeneity structure. 7 Under the homogeneity structure, the same set of genetic factors is identified across multiple datasets; the heterogeneity structure differs by allowing multiple datasets to have different sets of important genetic factors. The data analysed in this study are composed of two datasets corresponding to the two subtypes of NSCLC. Given the common biological mechanism underlying diverse subtypes of the same cancer, it is reasonable to expect that each gene exerts either no effect or significant but varying effects on the different subtypes. To this end, we performed integrative analysis under the homogeneity structure: we identified the same set of genetic factors for multiple subtypes of the same cancer, while allowing for different magnitudes of effects.

Gene-gene interactions

Possible gene-gene interactions pose new challenges to data analysis. Accumulating evidence suggests that gene-gene interactions contribute to explaining and predicting disease outcomes or phenotypes.
9,10 The introduction of gene-gene interactions into the statistical model significantly increases the number of covariates and hence aggravates the high-dimensionality issue. Consider a dataset with n samples and p genetic measurements. In interaction analysis, the total number of unknown parameters is $\frac{p(p+1)}{2}$, which often exceeds n even for a moderate p. Moreover, it is widely recognised that a statistical model with interactions should meet the strong hierarchical constraint: if an interaction is identified, then the two main effects involved must also be identified. 11 Under the strong hierarchical constraint, if $k < p$ genes are expected to be relevant to the cancer outcome, then there are at most $\frac{k(k-1)}{2}$ interactions associated with the outcome. Hence, there is a selection problem, and it is infeasible to apply traditional variable-selection methods directly, as they may violate this hierarchical structure. For a single dataset, delicate variable-selection methods that ensure a hierarchical structure have been proposed. 11 In this study, we conducted integrative interaction analysis while satisfying the hierarchical structure of the selection results in multiple datasets simultaneously.

Environmental factors

Like genetic factors, many environmental factors have non-negligible effects on cancer outcomes. For example, smoking is by far the leading cause of lung cancer, 12 and age is found to be associated with the development and progression of lung cancer. 13 Unlike genetic measurements, however, environmental factors often display nonlinear relationships with cancer outcomes. Figure 1 shows scatterplots and fitted curves of smoking and age against the percentage reference values for the pre-bronchodilator forced expiratory volume in one second (FEV1) in LUAD and LUSC. It is clear that, except for the curve of age in LUAD, the curves have significantly nonlinear trends. For this reason, we propose a semiparametric approach to analysing the NSCLC data, with genetic factors as the parametric parts and environmental factors as the nonparametric parts. The nonparametric functions are approximated using the B-spline technique, and variable selection is conducted only for the parametric parts.

In this paper, motivated by the NSCLC data in TCGA, we propose an integrative interaction analysis approach under a semiparametric model. The proposed approach identifies genetic factors and gene-gene interactions associated with disease outcomes while estimating the nonlinear effects of environmental factors. Compared to standard integrative analysis approaches that have been developed to analyse cancer data, 7,8 the proposed approach considers gene-gene interactions and environmental risk factors. Building on existing approaches, 9,10 our proposal jointly models multiple datasets and helps to reveal common mechanisms as well as dataset-specific cancer genomic characteristics. Li et al. 14 analysed NSCLC data with p = 100 genes in a semiparametric model using a penalisation method, demonstrating the effectiveness of the penalisation method for those data. This paper aims to analyse a 'larger' dataset (p = 300). Indeed, genetic datasets are often large in scale, and with such data a penalisation method can encounter convergence problems. As shown in Figure 2, the estimated coefficients of PYGB, a randomly selected gene, do not converge to a fixed point. This result motivated us to develop a new approach with good convergence ability beyond the penalised method.
The proposed approach is based on the threshold gradient directed regularisation (TGDR) technique, which is popular in high-dimensional analysis. 15 Despite being similar to some penalisation methods, such as the fused lasso, TGDR is a completely different way of conducting variable selection and estimation. It offers widespread applicability, high efficiency, and robust performance, and it has been extensively discussed in the literature. 16-18 In particular, TGDR offers good convergence with the NSCLC data (Figure 2). From a methodological perspective, we developed a novel TGDR for semiparametric integrative interaction analysis, as an extension of the work by Li et al. 19 Moreover, data on other cancers in TCGA have characteristics similar to those of NSCLC: multiple datasets, high dimensionality, a small sample size, gene-gene interactions, and environmental factors. Consequently, the proposed approach can be seen as a general method for analysing cancer data.

The remainder of this article is organised as follows. Section 2 introduces the model, method, and algorithm. In Section 3, we describe our evaluation of the performance of the proposed method with extensive simulations. We present our analysis of the NSCLC data with two types of response variables in Section 4. Finally, Section 5 offers our conclusions. Additional technical details and numerical results are provided in the supplementary Appendix.

For dataset $m = 1, \ldots, M$, with response $Y^{(m)}$, genetic measurements $X_j^{(m)}$ ($j = 1, \ldots, p$), and environmental factors $E_l^{(m)}$ ($l = 1, \ldots, q$), we consider the semiparametric model

$$Y^{(m)} = U\Big(\sum_{j=1}^{p} \beta_j^{(m)} X_j^{(m)} + \sum_{j<k} \gamma_{jk}^{(m)} X_j^{(m)} X_k^{(m)} + \sum_{l=1}^{q} h_l\big(E_l^{(m)}\big)\Big) + \varepsilon^{(m)}, \qquad (1)$$

where $\beta_j^{(m)}$ is the unknown coefficient of gene j in dataset m, and $\gamma_{jk}^{(m)}$ is the corresponding coefficient of the interaction term $X_j^{(m)} X_k^{(m)}$. The link function U is assumed to have a known form, and $h_l(\cdot)$ is an unknown smoothing function. Here, the data have been normalised such that there is no intercept in model (1). We adopt the spline technique to approximate the unknown functions $h_l(\cdot)$, $l = 1, \ldots, q$, with cubic B-splines as the basis functions. Assume that $\eta_l$ is a length-D vector of coefficients; then model (1) can be approximated by replacing each $h_l(E_l^{(m)})$ with the spline expansion $\sum_{d=1}^{D} \eta_{ld}\, B_d(E_l^{(m)})$. In this study, we also consider right-censored survival data under the accelerated failure time (AFT) model. Details on the model settings and loss functions are provided in supplementary Appendix A.
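To illustrate the spline approximation, here is a small sketch that builds a cubic B-spline design matrix for one environmental factor. The knot count and the quantile-based knot placement are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(E, n_interior=5, degree=3):
    # Clamped cubic B-spline basis evaluated at the observations E, so
    # that h(E) is approximated by bspline_basis(E) @ eta for a length-D
    # coefficient vector eta, with D = n_interior + degree + 1.
    E = np.asarray(E, dtype=float)
    interior = np.quantile(E, np.linspace(0, 1, n_interior + 2)[1:-1])
    t = np.r_[[E.min()] * (degree + 1), interior, [E.max()] * (degree + 1)]
    D = len(t) - degree - 1
    basis = np.empty((E.size, D))
    for d in range(D):
        coef = np.zeros(D)
        coef[d] = 1.0                       # d-th elementary spline
        basis[:, d] = BSpline(t, coef, degree, extrapolate=False)(E)
    return np.nan_to_num(basis)             # guard values at the boundary

```

With each $h_l$ replaced by such a basis expansion, the semiparametric model becomes linear in $(\beta, \gamma, \eta)$, which is the form the TGDR algorithm described next works with.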
Algorithm

We adopt the TGDR method to select important covariates, estimate the unknown coefficients, and predict outcomes. TGDR, a regularisation method, was first proposed for linear regression with high-dimensional covariates by Friedman and Popescu, 15 and was then generalised to other types of models, such as the Cox model, 17 the generalised linear model, 16,20 and the single-index model. 18 The basic idea of the method is first to define a set of candidate models as a path through the space of joint parameter values, and then to choose a point on this path as the final model by minimising an appropriate objective function. Compared to other types of regularisation methods, TGDR has many desirable properties, including generality, robustness, high speed, and satisfactory prediction performance. 15 Li et al. 19 proposed TGDR for integrative interaction analysis; the model they considered, however, is a parametric one, so their method cannot analyse the NSCLC data, and we developed a new TGDR method. Under the semiparametric model, TGDR for integrative interaction analysis consists of the following iterative steps (a minimal single-dataset sketch is given at the end of this subsection):

1. Initialisation. Set t = 0, and denote by $\beta^{(m)}(t)$, $\gamma^{(m)}(t)$, and $\eta^{(m)}(t)$ the estimates of $\beta^{(m)}$, $\gamma^{(m)}$, and $\eta^{(m)}$ in the tth iteration. Initialise $\beta^{(m)}(0) = 0$, $\gamma^{(m)}(0) = 0$, and $\eta^{(m)}(0) = 0$.
2. Computing gradients. Update t = t + 1. For $m = 1, \ldots, M$, compute the negative gradients of the loss function with respect to $\beta^{(m)}$, $\gamma^{(m)}$, and $\eta^{(m)}$.
3. Thresholding. Compute the indicators $f_j$ ($f_k$) for the main effects and $g_{jk}$ for the interactions; $g_{jk}$ and $f_j$ ($f_k$) follow the strong hierarchy restriction.
4. Updating. Move the coefficients along the thresholded negative gradients, where the step size Δ is set to 0.01 and ∘ denotes the Hadamard product.
5. Iterating. Repeat Steps 2-4 T times.

The threshold s controls the degree of regularisation; in this study it is fixed at 0.9. Compared to standard TGDR algorithms, the proposed algorithm has three unique characteristics. First, in Step 3, the indicators are determined by considering all M datasets simultaneously. Second, the algorithm ensures that the strong hierarchy constraint is satisfied: if an interaction is selected, then its two corresponding main effects are also selected. Finally, the update of the nonparametric spline parameters is added as a new part of the algorithm. Parametric and nonparametric components are treated differently. Owing to the high dimensionality of the parametric components, we select important parametric components and estimate their coefficients; the dimension of the nonparametric components, in contrast, is kept low, which is true of many real datasets, including the NSCLC data analysed in this study, so we leave the nonparametric components unselected. The number of iterations T was selected using five-fold cross-validation.
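To make Steps 2-4 concrete, here is a hedged single-dataset sketch with main effects and pairwise interactions. The exact multi-dataset indicator definitions are simplified to the generic TGDR thresholding rule, the hierarchy is enforced by letting a selected interaction pull in its parent main effects (one way to satisfy the constraint named above), and the squared-error loss and names are illustrative.

```python
import numpy as np

def tgdr(X, y, pairs, tau=0.9, step=0.01, iters=1000):
    # X: n x p main-effect matrix; pairs: list of (j, k) interaction index
    # pairs. Coefficients move only where the gradient magnitude exceeds
    # tau times the largest magnitude (threshold gradient descent).
    n, p = X.shape
    Z = np.column_stack([X[:, j] * X[:, k] for j, k in pairs])
    W = np.hstack([X, Z])
    coef = np.zeros(W.shape[1])
    for _ in range(iters):
        g = W.T @ (y - W @ coef) / n            # negative gradient, squared loss
        keep = np.abs(g) >= tau * np.abs(g).max()
        for idx, (j, k) in enumerate(pairs):    # strong hierarchy:
            if keep[p + idx]:                   # a moving interaction forces
                keep[j] = keep[k] = True        # both parent main effects to move
        coef += step * g * keep
    return coef[:p], coef[p:]
```

In the paper's algorithm, the thresholding additionally pools information across all M datasets and the spline coefficients η are updated without selection; both refinements bolt onto this skeleton.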
3 Numerical study

Simulation settings

We simulated M = 3 independent datasets with sample sizes $n_1 = 180$, $n_2 = 170$, and $n_3 = 150$, respectively (total sample size n = 500). There were q = 2 environmental factors and p genetic factors, with p = 50 and 100, respectively. The genetic factors had marginally normal distributions with mean 0 and variance 1. In each dataset, there were 10 important genetic factors and 10 important gene-gene interactions. We considered eight cases for the coefficients $(\beta_1, \ldots)$ of the parametric components. The genetic factors were generated independently. The strong hierarchical constraint was satisfied in all cases except Case V. Compared to Case I, the nonzero coefficients in Case II had less variation across the three datasets, whereas those in Case III had more variation; even coefficients of the same covariate could have different signs across datasets. In Case IV, the sum of the coefficients of the same covariate could be zero, suggesting that the overall effect of this covariate could be cancelled out. In Case VI, the genetic factors had an autoregressive correlation structure, with the correlation coefficient of genes j and k equal to $0.5^{|j-k|}$; other settings were the same as those in Case I. The only difference between Cases VII and I was that, in Case VII, the main effects and interactions were estimated independently, without meeting the strong hierarchical constraint. For the environmental factors, we considered two scenarios: they were (a) nonlinearly or (b) linearly associated with the response variables. In scenario (a), the environmental factors were included in the model as nonparametric parts, and the true model was semiparametric. Specifically, Cases I-VII had the same nonparametric functions in the three datasets,

$$g_1(E) = \sin(4\pi E), \qquad g_2(E) = 10\,\big(e^{-3.25E} + 4e^{-6.5E} + 3e^{-9.75E}\big).$$

In Case VIII, the nonparametric functions differed across the three datasets, and the parametric settings were the same as those in Case I. In scenario (b), the environmental measurements were treated as parametric parts, and the true model was a parametric one; the corresponding functions were the same for all three datasets in Cases I-VII and differed across datasets in Case VIII. For all settings, the environmental factors were generated independently from the uniform distribution U(0, 1).

To better gauge the performance of the proposed method, we compared it to three alternatives: (1) meta-analysis ('Meta'), which conducts semiparametric TGDR on each dataset separately; (2) pool-analysis ('Pool'), which combines all datasets directly into one and applies semiparametric TGDR; and (3) parametric-analysis ('Parametric'), which applies TGDR under a parametric model 19 that treats all predictors as parametric. We measured the performance of effect identification, for both main effects and interactions, by the true positive rate (TPR) and false positive rate (FPR), defined as in the literature. The prediction and estimation performance was also evaluated. Specifically, the prediction performance was measured by the prediction error

$$\mathrm{PE} = \frac{1}{n} \sum_{m=1}^{M} \big\lVert Y^{(m)} - \hat{Y}^{(m)} \big\rVert_2^2,$$

and the estimation performance was measured by the mean squared error (MSE) of the coefficients of the parametric components.
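Both summaries are a few lines of code; the sketch below implements the PE above and one natural reading of the coefficient MSE (the exact normalisation of the MSE is an assumption, since only its verbal definition is given here).

```python
import numpy as np

def prediction_error(Y_list, Yhat_list):
    # PE = (1/n) * sum_m ||Y^(m) - Yhat^(m)||_2^2, n = total sample size.
    n = sum(len(Y) for Y in Y_list)
    return sum(np.sum((Y - Yh) ** 2)
               for Y, Yh in zip(Y_list, Yhat_list)) / n

def coefficient_mse(true_list, est_list):
    # Mean squared error over all parametric coefficients, pooled across
    # the M datasets (normalisation assumed).
    diffs = np.concatenate([t - e for t, e in zip(true_list, est_list)])
    return np.mean(diffs ** 2)
```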
Overall, the proposed method had favourable performance for p = 50. Its identification performance suffered to some extent for p = 100, though it still outperformed the alternative methods in terms of estimation and prediction.

Analysis of NSCLC data

In this section, we apply the proposed method to the NSCLC data. As described in Section 1, TCGA contains two NSCLC datasets: LUAD and LUSC. On the one hand, because the two subtypes both belong to NSCLC, they can be expected to share some biological mechanisms.3 Consequently, we assume that the two subtypes have the same set of important genes. On the other hand, the differences between the two subtypes are well established.4 We therefore allow each identified gene and interaction to have effects of different magnitudes in the two subtypes.

In this analysis, we are interested in quantifying the influence of gene expressions and environmental risk factors on the forced expiratory volume in one second (FEV1), an important measure of lung function, and on the survival time of patients. A total of 18,277 gene expressions were collected for the patients in LUAD and LUSC. To tackle the high dimensionality and improve the reliability of the analysis, we conducted a marginal screening procedure and kept the 300 genes with the smallest p-values (a sketch of this step is given below). The total number of genetic factors was thus 45,150, comprising 300 main effects and 44,850 interactions. Motivated by existing studies and our preliminary analysis (Figure 1), age and smoking (measured in packs per year) were included as environmental factors in the form of nonparametric functions.

FEV1 as the response variable

In the raw datasets, there were 517 patients in LUAD and 501 in LUSC. After removing observations with missing values in FEV1, tumour stage, or the environmental factors, a total of 378 patients (207 from LUAD and 171 from LUSC) were included. More details on sample selection are provided in Figure C.1 (supplementary Appendix C). The median FEV1 was 78 (range: 0.94-156), the median age at diagnosis was 68 years (range: 40-90), and the median amount of smoking was 40 packs/year (range: 0-180).

The proposed method identified 15 main effects and five interactions. Detailed estimation results are shown in Table 3. A literature search suggested that the findings are biologically meaningful. First, some identified genes are known oncogenes of lung cancers. For example, gene EIF4A3 is especially involved in the development of NSCLC.21 Miki et al.22 revealed that genetic variation in TP63 is significantly associated with lung adenocarcinoma susceptibility in Asian populations. Gene IL6 has been extensively studied: research has found that IL6 blockade inhibits disease promotion, and high levels of IL6 have been observed in some lung cancer patients.23 Among the identified interactions, SLC27A2 expression was confirmed to be reduced in CD166+ lung cancer stem cells of NSCLC samples,24 and over-expression of IRS4 contributes to tumour promotion in lung cancers.25

Figure 3 displays the estimates of the nonparametric functions. The point-wise confidence intervals of the curves were obtained using the bootstrap method over 50 random replicates drawn from the original datasets. All the functions deviated from a straight line, suggesting that the semiparametric model is more appropriate for describing the environmental effects on FEV1 in our data. We also applied the alternative methods described above to these data; they generated different findings.
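Before turning to the comparison, here is the sketch of the marginal screening and interaction-expansion step promised above. The paper does not state which marginal test was used, so the gene-by-gene simple-linear-regression p-value is our assumption, and at the full scale of 18,277 genes a vectorised implementation would be preferable to this illustrative loop.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def marginal_screen(X, y, keep=300):
    """Rank genes by the p-value of a gene-by-gene simple linear
    regression on the response and keep the `keep` smallest."""
    pvals = np.array([stats.linregress(X[:, j], y).pvalue
                      for j in range(X.shape[1])])
    return np.argsort(pvals)[:keep]

def expand_interactions(X):
    """Append all pairwise products: for 300 retained genes this yields
    300 main effects plus 300 * 299 / 2 = 44,850 interactions."""
    pairs = list(combinations(range(X.shape[1]), 2))
    inter = np.column_stack([X[:, j] * X[:, k] for j, k in pairs])
    return np.hstack([X, inter]), pairs
```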
Table 4 summarises the comparison. Meta-analysis identified 19 (16) main effects and 7 (3) interactions in LUAD (LUSC); only two main effects were shared by the two subtypes. Pool-analysis identified 18 main effects and 5 interactions; by construction, the coefficients of the identified main effects and interactions were identical in the two subtypes. Parametric-analysis identified 13 main effects and 3 interactions, whose magnitudes differed between the two subtypes. The overlap between the proposed method and 'Parametric' was smaller.

For real data, because the main effects and interactions that truly influence the response variables are unknown (or at best partially known), we cannot directly evaluate and compare the identification performance of different methods. Instead, we evaluated the stability of identification and prediction, which provides some insight into the validity of the methods. Specifically, each dataset was randomly partitioned into a training set and a testing set at a ratio of 2:1. Estimation was conducted on the training set, and prediction was made on the testing set. The average prediction error over 50 independent repetitions was 0.877 (Proposed), 0.923 (Meta), 0.881 (Pool), and 1.135 (Parametric), showing that the proposed method has the highest prediction accuracy. We also evaluated the identification stability of each method using the observed occurrence index (OOI); the results are given in supplementary Appendix C.

Survival time as the response variable

We now investigate the influence of genetic and environmental factors on the survival time (measured in months) of NSCLC patients. Note that similar data have been analysed by Li et al.14 Our study differs from theirs in that (1) patients of all stages were considered, and (2) the amount of smoking, measured in packs per year, was introduced as a second environmental factor in addition to age. After a series of sample selection steps (see Figure C.2 in supplementary Appendix C), 833 subjects were included in the analysis; the censoring rates were 64.50% (LUAD) and 56.00% (LUSC), the median age at diagnosis was 67 years (range: 38-90), and the median amount of smoking was 40 packs/year (range: 0-240).

We applied the proposed method to these data under the AFT model; 18 main effects and four interactions were identified. Table 5 provides the estimated coefficients. Once again, the identification results were meaningful. For example, some lung cancer-related genes, such as PYGB and CBR1, were identified. Over-expression of PYGB has been found in high-risk groups of lung carcinomas.26 Moreover, CBR1 mRNA levels, CBR1 protein levels, and CBR1 SNPs show significant associations with the development of lung cancer.27

The estimates of the nonparametric functions are plotted in Figure 4. The survival time of LUAD patients decreased gradually with age. For LUSC patients, the trend was slightly different: survival time decreased up to age 70 and increased slowly thereafter, which differs from observations in existing work.14 For both LUAD and LUSC patients, survival time decreased nonlinearly with the amount of cigarette smoking.

The data were also analysed using the alternative methods. Table 6 summarises the results, which again show that different methods lead to different findings. Detailed identification and estimation results are provided in supplementary Appendix C.
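The OOI is not defined in this excerpt; a common reading is the frequency with which each variable is selected across resampled fits, which is what the sketch below assumes. The log-rank-based prediction score used in the next paragraph is likewise sketched under one common construction, a median split of predicted risks on the test set scored with the log-rank test from the lifelines package; neither helper is from the paper.

```python
import numpy as np
from lifelines.statistics import logrank_test

def ooi(select_fn, X, y, n_splits=50, train_frac=2 / 3, seed=0):
    """Observed occurrence index, read here as the selection frequency of
    each variable over random training subsamples; select_fn must return
    a 0/1 vector of length X.shape[1]."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_splits):
        idx = rng.permutation(len(y))[: int(train_frac * len(y))]
        counts += select_fn(X[idx], y[idx])
    return counts / n_splits

def logrank_prediction_score(risk, time, event):
    """Median-split the predicted risks on the test set and return the
    log-rank statistic; larger values mean better survival separation."""
    high = risk > np.median(risk)
    res = logrank_test(time[high], time[~high],
                       event_observed_A=event[high],
                       event_observed_B=event[~high])
    return res.test_statistic
```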
In terms of prediction, we used the log-rank test statistic to measure prediction accuracy, where a higher log-rank statistic indicates higher accuracy. Using the same resampling procedure as described above, we computed the average log-rank test statistics of the methods.

Comparison with the penalisation method

To further gauge the performance of the proposed method, we systematically compared it to the penalisation method14 on the NSCLC data. Because the penalisation method diverges when there are 300 genes (Figure 2), we reduced the number of genes to 100 to make the two methods comparable. Following the PCS framework outlined by Yu and Kumbier,28 we evaluated the performance of the two methods in terms of prediction, computation, and stability.

To evaluate prediction performance, we split each dataset randomly into a training set and a testing set at a ratio of 3:1 and repeated this process 50 times. The average prediction errors were 0.871 (Proposed) and 0.995 (Penalisation) with FEV1 as the response variable, and 4.322 (Proposed) and 4.348 (Penalisation) with survival time as the response variable. The proposed method thus led to an improvement in prediction. With the same splitting approach, we also evaluated the stability of effect identification. Figure C.5 (supplementary Appendix C) shows the OOI results under the linear model and the AFT model. The proposed method always had a higher OOI than the penalisation method, for both the main effects and the interactions, suggesting that it is more stable at identifying outcome-associated genetic factors.

Finally, we compared the computation times of the two methods. Results for a single replicate are provided in Table C.3 and Figure C.6 (supplementary Appendix C). The penalisation method needed more computation time than the proposed method, and the gap increased drastically as more genes were included in the model. For example, for p = 150 with survival time as the response variable, the penalisation method required more than 15,000 s, whereas the proposed method required only 300 s. Overall, the proposed method outperformed the penalisation method on the reduced NSCLC data in terms of prediction, stability, and efficiency.

Conclusion

In this paper, we presented a semiparametric approach to jointly model multiple datasets, in which genetic measurements and environmental measurements are included as parametric and nonparametric components, respectively. Gene-gene interactions were also considered. A novel TGDR method was developed to estimate the parameters of the main effects, interactions, and nonparametric functions while satisfying the strong hierarchical constraint. The same set of genetic measurements is thus identified across datasets, but their coefficients are allowed to vary. Simulations and the analysis of the NSCLC data showed the superiority of the proposed method over the alternatives. In the simulations, the proposed method demonstrated favourable performance at identifying main effects and interactions, and it significantly outperformed the other methods in terms of estimation and prediction; moreover, it was robust to the simulation settings. When analysing the NSCLC data under the linear model and under the AFT model with right-censored data, it yielded biologically meaningful findings, and its higher prediction accuracy and selection stability confirmed its validity.
The proposed semiparametric model can be extended to accommodate more complex data structures, such as clustered covariates and time-varying covariates. We considered a small number of nonparametric components and left them unselected; in future research, we will consider selecting and estimating the nonparametric components simultaneously when there are a large number of environmental factors. Further, heterogeneous sparsity across the multiple datasets can be assumed, that is, the models can have overlapping but not necessarily identical sets of important covariates; this can be achieved by modifying the update criterion for the parameters in TGDR. Moreover, in this paper, nonlinear associations between the environmental factors and the response variable were identified using descriptive statistics; it would be worthwhile to adopt a data-driven procedure, such as the parametricness index (PI),29 to select between parametric and nonparametric models. Finally, our future work will explore the application of the proposed method to other types of cancer.